publications
2024
- Understanding Help-Seeking and Help-Giving on Social Media for Image-Based Sexual Abuse. Miranda Wei, Sunny Consolvo, Patrick Gage Kelley, Tadayoshi Kohno, Tara Matthews, Sarah Meiklejohn, Franziska Roesner, Renee Shelby, Kurt Thomas, and Rebecca Umbach. In USENIX Security Symposium, Aug 2024.
Image-based sexual abuse (IBSA), like other forms of technology-facilitated abuse, is a growing threat to people’s digital safety. Attacks include unwanted solicitations for sexually explicit images, extorting people under threat of leaking their images, or purposefully leaking images to enact revenge or exert control. In this paper, we explore how people seek and receive help for IBSA on social media. Specifically, we identify over 100,000 Reddit posts that engage relationship and advice communities for help related to IBSA. We draw on a stratified sample of 261 posts to qualitatively examine how various types of IBSA unfold, including the mapping of gender, relationship dynamics, and technology involvement to different types of IBSA. We also explore the support needs of victim-survivors experiencing IBSA and how communities help victim-survivors navigate their abuse through technical, emotional, and relationship advice. Finally, we highlight sociotechnical gaps in connecting victim-survivors with important care, regardless of whom they turn to for help.
- SoK (or SoLK?): On the Quantitative Study of Sociodemographic Factors and Computer Security Behaviors. Miranda Wei, Jaron Mink, Yael Eiger, Tadayoshi Kohno, Elissa M. Redmiles, and Franziska Roesner. In USENIX Security Symposium, Aug 2024.
Researchers are increasingly exploring how gender, culture, and other sociodemographic factors correlate with user computer security and privacy behaviors. To more holistically understand relationships between these factors and behaviors, we make two contributions. First, we broadly survey existing scholarship on sociodemographics and secure behavior (151 papers) before conducting a focused literature review of 47 papers to synthesize what is currently known and identify open questions for future research. Second, by incorporating contemporary social and critical theories, we establish guidelines for future studies of sociodemographic factors and security behaviors that address how to overcome common pitfalls. We present an at-scale case study that demonstrates our guidelines in action: a measurement study of the relationships between sociodemographics and de-identified, aggregated log data of security and privacy behaviors among 16,829 Facebook users across 16 countries. Through these contributions, we position our work as a systematization of a lack of knowledge (SoLK). Overall, we find contradictory results and vast unknowns about how identity shapes security behavior. Through our guidelines and discussion, we chart new directions to more deeply examine how and why sociodemographic factors affect security behaviors.
- “Violation of My Body:” Perceptions of AI-generated Non-Consensual (Intimate) Imagery. Grace Brigham, Miranda Wei, Tadayoshi Kohno, and Elissa M. Redmiles. In Symposium on Usable Privacy and Security (SOUPS), Aug 2024.
AI technology has enabled the creation of deepfakes: hyperrealistic synthetic media. We surveyed 315 individuals in the U.S. on their views regarding the hypothetical non-consensual creation of deepfakes depicting them, including deepfakes portraying sexual acts. Respondents indicated strong opposition to creating and, even more so, sharing non-consensually created synthetic content, especially if that content depicts a sexual act. However, seeking out such content appeared more acceptable to some respondents. Attitudes around acceptability varied further based on the hypothetical creator’s relationship to the respondent, the respondent’s gender, and their attitudes toward sexual consent. This study provides initial insight into public perspectives of a growing threat and highlights the need for further research to inform social norms as well as ongoing policy conversations and technical developments in generative AI.
- Sharenting on TikTok: Exploring Parental Sharing Behaviors and the Discourse Around Children’s Online Privacy. Sophie Stephenson, Christopher Nathaniel Page, Miranda Wei, Apu Kapadia, and Franziska Roesner. In Proceedings of the CHI Conference on Human Factors in Computing Systems, May 2024.
Since the inception of social media, parents have been sharing information about their children online. Unfortunately, this “sharenting” can expose children to several online and offline risks. Although researchers have studied sharenting on multiple platforms, sharenting on short-form video platforms like TikTok—where posts can contain detailed information, spread quickly, and spark considerable engagement—is understudied. Thus, we provide a targeted exploration of sharenting on TikTok. We analyzed 328 TikTok videos that demonstrate sharenting and 438 videos where TikTok creators discuss sharenting norms. Our results indicate that sharenting on TikTok indeed creates several risks for children, not only within individual posts but also in broader patterns of sharenting that arise when parents repeatedly use children to generate viral content. At the same time, creators voiced sharenting concerns and boundaries that reflect what has been observed on other platforms, indicating the presence of cross-platform norms. Promisingly, we observed that TikTok users are engaging in thoughtful conversations around sharenting and beginning to shift norms toward safer sharenting. We offer concrete suggestions for designers and platforms based on our findings.
- It’s Trying Too Hard To Look Real: Deepfake Moderation Mistakes and Identity-Based Bias. Jaron Mink, Miranda Wei, Collins W. Munyendo, Kurt Hugenberg, Tadayoshi Kohno, Elissa M. Redmiles, and Gang Wang. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, May 2024.
Online platforms employ manual human moderation to distinguish human-created social media profiles from deepfake-generated ones. Biased misclassification of real profiles as artificial can harm general users as well as specific identity groups; however, no work has yet systematically investigated such mistakes and biases. We conducted a user study (n=695) that investigates how 1) the identity of the profile, 2) whether the moderator shares that identity, and 3) which components of a profile are shown affect the perceived artificiality of the profile. We find statistically significant biases in people’s moderation of LinkedIn profiles based on all three factors. Further, upon examining how moderators make decisions, we find they rely on mental models of AI and attackers, as well as typicality expectations (how they think the world works). The latter includes reliance on race/gender stereotypes. Based on our findings, we synthesize recommendations for the design of moderation interfaces, moderation teams, and security training.
2023
- A Two-Decade Retrospective Analysis of a University’s Vulnerability to Attacks Exploiting Reused Passwords. Alexandra Nisenoff, Maximilian Golla, Miranda Wei, Juliette Hainline, Hayley Szymanek, Annika Braun, Annika Hildebrandt, Blair Christensen, David Langenberg, and Blase Ur. In 32nd USENIX Security Symposium (USENIX Security 23), Aug 2023.
Credential-guessing attacks often exploit passwords that were reused across a user’s online accounts. To learn how organizations can better protect users, we retrospectively analyzed our university’s vulnerability to credential-guessing attacks across twenty years. Given a list of university usernames, we searched for matches in both data breaches from hundreds of websites and a dozen large compilations of breaches. After cracking hashed passwords and tweaking guesses, we successfully guessed passwords for 32.0% of accounts matched to a university email address in a data breach, as well as 6.5% of accounts where the username (but not necessarily the domain) matched. Many of these accounts remained vulnerable for years after the breached data was leaked, and passwords found verbatim in breaches were nearly four times as likely to have been exploited (i.e., suspicious account activity was observed) as tweaked guesses. Over 70 different data breaches and various username-matching strategies bootstrapped correct guesses. In surveys of 40 users whose passwords we guessed, many users were unaware of the risks to their university account or that their credentials had been breached. This analysis of password reuse at our university provides pragmatic advice for organizations to protect accounts.
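As a concrete illustration of the "tweaking" step, the Python sketch below generates common variations of a breached password. The transformation rules here (suffixes, trailing-number increments, character substitutions) are illustrative assumptions, not the ruleset used in the paper.

```python
import re

def tweak_candidates(breached: str) -> set:
    """Generate common variations ("tweaks") of a breached password.
    The rules below are hypothetical examples for illustration."""
    candidates = {breached, breached.lower(), breached.capitalize()}
    # Append common suffixes.
    for suffix in ("1", "!", "123"):
        candidates.add(breached + suffix)
    # Increment a trailing number (e.g., husky2019 -> husky2020).
    match = re.search(r"(\d+)$", breached)
    if match:
        candidates.add(breached[: match.start()] + str(int(match.group(1)) + 1))
    # Simple character substitutions ("leetspeak").
    for old, new in (("a", "@"), ("o", "0"), ("e", "3")):
        candidates.add(breached.replace(old, new))
    return candidates

print(sorted(tweak_candidates("husky2019")))
```

Each candidate would then be checked against the target account's credentials, which is why tweaked guesses can succeed even when the breached password itself no longer works.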
- Skilled or Gullible? Gender Stereotypes Related to Computer Security and Privacy. Miranda Wei, Pardis Emami-Naeini, Franziska Roesner, and Tadayoshi Kohno. In IEEE Security & Privacy, May 2023.
Gender stereotypes remain common in U.S. society and harm people of all genders. Focusing on binary genders (women and men) as a first investigation, we empirically study gender stereotypes related to computer security and privacy. We used Prolific to conduct two surveys with U.S. participants that aimed to: (1) surface potential gender stereotypes related to security and privacy (N = 202), and (2) assess belief in gender stereotypes about security and privacy engagement, personal characteristics, and behaviors (N = 190). We find that stereotype beliefs are significantly correlated with participants’ gender as well as level of sexism, and we delve into the justifications our participants offered for their beliefs. Beyond scientifically studying the existence and prevalence of such stereotypes, we describe potential implications, including biasing crowdworker-facilitated user research. Further, our work lays a foundation for deeper investigations of the impacts of stereotypes in computer security and privacy, as well as stereotypes across the whole gender and identity spectrum.
- “There’s so Much Responsibility on Users Right Now:” Expert Advice for Staying Safer From Hate and Harassment. Miranda Wei, Sunny Consolvo, Patrick Gage Kelley, Tadayoshi Kohno, Franziska Roesner, and Kurt Thomas. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Apr 2023.
Online hate and harassment poses a threat to the digital safety of people globally. In light of this risk, there is a need to equip as many people as possible with advice to stay safer online. We interviewed 24 experts to understand what threats and advice internet users should prioritize to prevent or mitigate harm. As part of this, we asked experts to evaluate 45 pieces of existing hate-and-harassment-specific digital-safety advice to understand why they felt advice was viable or not. We find that experts frequently had competing perspectives for which threats and advice they would prioritize. We synthesize sources of disagreement, while also highlighting the primary threats and advice where experts concurred. Our results inform immediate efforts to protect users from online hate and harassment, as well as more expansive socio-technical efforts to establish enduring safety.
2022
- Anti-Privacy and Anti-Security Advice on TikTok: Case Studies of Technology-Enabled Surveillance and Control in Intimate Partner and Parent-Child Relationships. Miranda Wei, Eric Zeng, Tadayoshi Kohno, and Franziska Roesner. In Symposium on Usable Privacy and Security, Aug 2022.
Modern technologies including smartphones, AirTags, and tracking apps enable surveillance and control in interpersonal relationships. In this work, we study videos posted on TikTok that give advice for how to surveil or control others through technology, focusing on two interpersonal contexts: intimate partner relationships and parent-child relationships. We collected 98 videos across both contexts and investigate (a) what types of surveillance or control techniques the videos describe, (b) what assets are being targeted, (c) the reasons that TikTok creators give for using these techniques, and (d) defensive techniques discussed. Additionally, we make observations about how social factors – including social acceptability, gender, and TikTok culture – are critical context for the existence of this anti-privacy and anti-security advice. We discuss the use of TikTok as a rich source of qualitative data for future studies and make recommendations for technology designers around interpersonal surveillance and control.
- Styx++: Reliable Data Access and Availability Using a Hybrid Paxos and Chain Replication Protocol. Ather Sharif, Emilia F. Gan, and Miranda Wei. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, Apr 2022.
HCI research often involves accessing and storing information in databases. However, in the case of a database node failure, researchers could experience significant work delays, monetary costs, and data loss. How can researchers who have little or no knowledge of systems and infrastructures ensure that their data collection source is reliable and maximally available for accessing and storing data? To answer this question, we surveyed 11 HCI researchers. Using the findings from the survey, we developed Styx++—an easy-to-integrate open-source solution that bundles together existing tools and concepts, providing HCI researchers with a reliable distributed system for their database needs. Styx++ is a hybrid solution involving both the Paxos and Chain Replication protocols, providing strong consistency and high availability to minimize the risk of single points of failure in a traditional database setup. Our evaluation of Styx++ against benchmark solutions shows promising results of an increase in reliability without substantial performance degradation.
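To make the chain-replication half of this design concrete, here is a minimal single-process Python sketch. As assumptions for illustration, the Paxos side, networking, and failure handling are all omitted; this shows the general protocol, not the Styx++ implementation.

```python
class ChainNode:
    """One replica in a chain; writes flow head -> tail, reads hit the tail."""

    def __init__(self, name):
        self.name = name
        self.store = {}        # key -> value
        self.successor = None  # next node in the chain; None for the tail

    def write(self, key, value):
        # Apply locally, then forward down the chain. A write is considered
        # committed once it reaches the tail.
        self.store[key] = value
        if self.successor is not None:
            self.successor.write(key, value)

    def read(self, key):
        return self.store.get(key)

# Build a three-node chain: head -> middle -> tail.
head, middle, tail = ChainNode("head"), ChainNode("middle"), ChainNode("tail")
head.successor, middle.successor = middle, tail

head.write("participant:42", "survey-response")
# The tail only ever holds fully replicated values, so reading from it
# yields strongly consistent results.
print(tail.read("participant:42"))  # -> survey-response
```

In a full chain-replication deployment, a coordination service tracks chain membership and repairs the chain when a node fails; plausibly this is where a Paxos-based component fits in a hybrid design like Styx++.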
2021
- Polls, Clickbait, and Commemorative $2 Bills: Problematic Political Advertising on News and Media Websites around the 2020 U.S. Elections. Eric Zeng, Miranda Wei, Theo Gregersen, Tadayoshi Kohno, and Franziska Roesner. In Proceedings of the 21st ACM Internet Measurement Conference, Nov 2021.
Online advertising can be used to mislead, deceive, and manipulate Internet users, and political advertising is no exception. In this paper, we present a measurement study of online advertising around the 2020 United States elections, with a focus on identifying dark patterns and other potentially problematic content in political advertising. We scraped ad content on 745 news and media websites from six geographic locations in the U.S. from September 2020 to January 2021, collecting 1.4 million ads. We perform a systematic qualitative analysis of political content in these ads, as well as a quantitative analysis of the distribution of political ads on different types of websites. Our findings reveal the widespread use of problematic tactics in political ads, such as bait-and-switch ads formatted as opinion polls to entice users to click, the use of political controversy by content farms for clickbait, and the more frequent occurrence of political ads on highly partisan news websites. We make policy recommendations for online political advertising, including greater scrutiny of non-official political ads and comprehensive standards across advertising platforms.
- On the Limited Impact of Visualizing Encryption: Perceptions of E2E Messaging Security. Christian Stransky, Dominik Wermke, Johanna Schrader, Nicolas Huaman, Yasemin Acar, Anna Lena Fehlhaber, Miranda Wei, Blase Ur, and Sascha Fahl. In Seventeenth Symposium on Usable Privacy and Security (SOUPS 2021), Aug 2021.
Communication tools with end-to-end (E2E) encryption help users maintain their privacy. Although messengers like WhatsApp and Signal bring E2E encryption to a broad audience, past work has documented misconceptions of their security and privacy properties. Through a series of five online studies with 683 total participants, we investigated whether making an app’s E2E encryption more visible improves perceptions of trust, security, and privacy. We first investigated why participants use particular messaging tools, validating a prior finding that many users mistakenly think SMS and e-mail are more secure than E2E-encrypted messengers. We then studied the effect of making E2E encryption more visible in a messaging app. We compared six different text disclosures, three different icons, and three different animations of the encryption process. We found that simple text disclosures that messages are "encrypted" are sufficient. Surprisingly, the icons negatively impacted perceptions. While qualitative responses to the animations showed they successfully conveyed and emphasized "security" and "encryption," the animations did not significantly impact participants’ quantitative perceptions of the overall trustworthiness, security, and privacy of E2E-encrypted messaging. We confirmed and unpacked this result through a validation study, finding that user perceptions depend more on preconceived expectations and an app’s reputation than visualizations of security mechanisms.
2020
- What Twitter Knows: Characterizing Ad Targeting Practices, User Perceptions, and Ad Explanations Through Users’ Own Twitter Data. Miranda Wei, Madison Stamos, Sophie Veys, Nathan Reitinger, Justin Goodman, Margot Herman, Dorota Filipczuk, Ben Weinshel, Michelle L. Mazurek, and Blase Ur. In 29th USENIX Security Symposium (USENIX Security 20), Aug 2020.
Although targeted advertising has drawn significant attention from privacy researchers, many critical empirical questions remain. In particular, only a few of the dozens of targeting mechanisms used by major advertising platforms are well understood, and studies examining users’ perceptions of ad targeting often rely on hypothetical situations. Further, it is unclear how well existing transparency mechanisms, from data-access rights to ad explanations, actually serve the users they are intended for. To develop a deeper understanding of the current targeted advertising ecosystem, this paper examines 231 participants’ own Twitter data, containing ads they were shown and the associated targeting criteria, through both measurement and a user study. We find many targeting mechanisms ignored by prior work — including advertiser-uploaded lists of specific users, lookalike audiences, and retargeting campaigns — are widely used on Twitter. Crucially, participants found these understudied practices among the most privacy invasive. Participants also found ad explanations designed for this study more useful, more comprehensible, and overall preferable to Twitter’s current ad explanations. Our findings underscore the benefits of data access, characterize unstudied facets of targeted advertising, and identify potential directions for improving transparency in targeted advertising.
- Taking Data Out of Context to Hyper-Personalize Ads: Crowdworkers’ Privacy Perceptions and Decisions to Disclose Private Information. Julia Hanson*, Miranda Wei*, Sophie Veys, Matthew Kugler, Lior Strahilevitz, and Blase Ur. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Apr 2020.
Data brokers and advertisers increasingly collect data in one context and use it in another. When users encounter a misuse of their data, do they subsequently disclose less information? We report on human-subjects experiments with 25 in-person and 280 online participants. First, participants provided personal information amidst distractor questions. A week later, while participants completed another survey, they received either a robotext or online banner ad seemingly unrelated to the study. Half of the participants received an ad containing their name, partner’s name, preferred cuisine, and location; others received a generic ad. We measured how many of 43 potentially invasive questions participants subsequently chose to answer. Participants reacted negatively to the personalized ad, yet answered nearly all invasive questions accurately. We unpack our results relative to the privacy paradox, contextual integrity, and power dynamics in crowdworker platforms.
2019
- Oh, the Places You’ve Been! User Reactions to Longitudinal Transparency About Third-Party Web Tracking and Inferencing. Ben Weinshel, Miranda Wei, Mainack Mondal, Euirim Choi, Shawn Shan, Claire Dolin, Michelle L. Mazurek, and Blase Ur. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Nov 2019.
Internet companies track users’ online activity to make inferences about their interests, which are then used to target ads and personalize their web experience. Prior work has shown that existing privacy-protective tools give users only a limited understanding and incomplete picture of online tracking. We present Tracking Transparency, a privacy-preserving browser extension that visualizes examples of long-term, longitudinal information that third-party trackers could have inferred from users’ browsing. The extension uses a client-side topic modeling algorithm to categorize pages that users visit and combines this with data about the web trackers encountered over time to create these visualizations. We conduct a longitudinal field study in which 425 participants use one of six variants of our extension for a week. We find that, after using the extension, participants have more accurate perceptions of the extent of tracking and also intend to take privacy-protecting actions.
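As a rough sketch of what client-side topic categorization could look like, the snippet below fits a small bag-of-words topic model over hypothetical page texts using scikit-learn. The extension's actual algorithm and topic taxonomy are not reproduced here; this only illustrates the general technique.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical text extracted from pages a user visited.
pages = [
    "flight deals hotel booking travel rewards miles",
    "laptop gpu benchmark review gaming hardware",
    "marathon training running shoes pace nutrition",
    "airline lounge boarding upgrade travel itinerary",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(pages)

# Infer two latent topics from word co-occurrence.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words per topic, i.e., the kind of interest label a
# tracker might attach to a browsing history.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```

Running such a model locally, rather than sending page contents to a server, is what makes this style of visualization privacy-preserving.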
2018
- "What Was That Site Doing with My Facebook Password?": Designing Password-Reuse NotificationsMaximilian Golla, Miranda Wei, Juliette Hainline, Lydia Filipe, Markus Dürmuth, Elissa Redmiles, and Blase UrIn Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Oct 2018
Password reuse is widespread, so a breach of one provider’s password database threatens accounts on other providers. When companies find stolen credentials on the black market and notice potential password reuse, they may require a password reset and send affected users a notification. Through two user studies, we provide insight into such notifications. In Study 1, 180 respondents saw one of six representative notifications used by companies in situations potentially involving password reuse. Respondents answered questions about their reactions and understanding of the situation. Notifications differed in the concern they elicited and intended actions they inspired. Concerningly, less than a third of respondents reported intentions to change any passwords. In Study 2, 588 respondents saw one of 15 variations on a model notification synthesizing results from Study 1. While the variations’ impact differed in small ways, respondents’ intended actions across all notifications would leave them vulnerable to future password-reuse attacks. We discuss best practices for password-reuse notifications and how notifications alone appear insufficient in solving password reuse.
- The Password Doesn’t Fall Far: How Service Influences Password Choice. Miranda Wei, Maximilian Golla, and Blase Ur. In Who Are You?! Adventures in Authentication (WAY), Aug 2018.
Users often create passwords based on familiar words or things they like, using these passwords across many web services. But does the type of web service influence how users construct their password? In this paper, we observe how and how often passwords are specific to the services for which they were created. We analyze leaked passwords from five web services. We find that passwords from each service reflect the category of the service, often by including the name or semantic theme of the service. Through a qualitative analysis of passwords, we further identify unique characteristics of the passwords created for each service. Service-specific passwords can reveal other shared interests or demographics of that service’s userbase. This contextual perspective on password creation suggests improvements for site-specific blacklists and password-strength meters.
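A toy version of the underlying measurement, using made-up data (the actual leaked datasets and coding scheme are not reproduced here), might count how often a service's passwords contain that service's name or theme words:

```python
# Hypothetical services, leaked passwords, and theme words for illustration.
leaks = {
    "pokemonforum": ["pikachu123", "ilovepokemon", "dragonite!", "qwerty"],
    "bankportal":   ["money2018", "password1", "mysavings$", "letmein"],
}
themes = {
    "pokemonforum": ["pokemon", "pikachu", "dragonite"],
    "bankportal":   ["bank", "money", "savings"],
}

for service, passwords in leaks.items():
    # A password "reflects" the service if it contains any theme word.
    hits = sum(
        any(word in pw.lower() for word in themes[service])
        for pw in passwords
    )
    print(f"{service}: {hits}/{len(passwords)} passwords reflect the service theme")
```

A site-specific blacklist or password-strength meter could apply the same check at creation time, rejecting or penalizing passwords that contain the service's own name or theme.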
- Exploring User Mental Models of End-to-End Encrypted Communication Tools. Ruba Abu-Salma, Elissa M. Redmiles, Blase Ur, and Miranda Wei. In 8th USENIX Workshop on Free and Open Communications on the Internet (FOCI 18), Aug 2018.
- Your Secrets Are Safe: How Browsers’ Explanations Impact Misconceptions About Private Browsing Mode. Yuxi Wu, Panya Gupta, Miranda Wei, Yasemin Acar, Sascha Fahl, and Blase Ur. In Proceedings of the 2018 World Wide Web Conference on World Wide Web - WWW ’18, Apr 2018.
All major web browsers include a private browsing mode that does not store browsing history, cookies, or temporary files across browsing sessions. Unfortunately, users have misconceptions about what this mode does. Many factors likely contribute to these misconceptions. In this paper, we focus on browsers’ disclosures, or their in-browser explanations of private browsing mode. In a 460-participant online study, each participant saw one of 13 different disclosures (the desktop and mobile disclosures of six popular browsers, plus a control). Based on the disclosure they saw, participants answered questions about what would happen in twenty browsing scenarios capturing previously documented misconceptions. We found that browsers’ disclosures fail to correct the majority of the misconceptions we tested. These misconceptions included beliefs that private browsing mode would prevent geolocation, advertisements, viruses, and tracking by both the websites visited and the network provider. Furthermore, participants who saw certain disclosures were more likely to have misconceptions about private browsing’s impact on targeted advertising, the persistence of lists of downloaded files, and tracking by ISPs, employers, and governments.