ML-Enabled Abuse
How people abuse ML systems, and how people perceive and respond to abusive AI-generated media.
AI Sexual Content • Bias in AI Media Moderation • Perceptions of AI Media
The Happy Lab studies how people shape the security, privacy, and real-world use of machine learning systems. We investigate where human factors create new vulnerabilities, where they unlock better defenses, and how to build safer AI systems in practice.
Recent News
April 15, 2026 Our paper with Drs. Lucy Qin and Elissa M. Redmiles from Georgetown University, "'Unlimited Realm of Exploration and Experimentation': Methods and Motivations of AI-Generated Sexual Content Creators," was accepted to FAccT 2026!
January 19, 2026 Joint work with SEFCOM and the TSP Lab, "I Can SE Clearly Now: Investigating the Effectiveness of GUI-based Symbolic Execution for Software Vulnerability Discovery", has been conditionally accepted to CHI 2026. Congrats, all!
September 22, 2025 Joint work with Lindsay Sanneman and Anil Murthy was accepted to the NeurIPS LLM-Eval Workshop.
June 17, 2025 Jaron and Tanusree from the GPSLab were recently awarded a Google Research Scholar Grant for their proposal, "Designing Accessible Tools for Blind and Low-Vision People to Navigate Deepfake Media".
Research Areas
ML-Enabled Abuse
How people abuse ML systems, and how people perceive and respond to abusive AI-generated media.
AI Sexual Content • Bias in AI Media Moderation • Perceptions of AI Media
How sociotechnical factors impact real-world adoption of ML defenses.
How ML can be integrated into security-sensitive environments.
Usable security and privacy, system security, and evaluation of HCI methodology.
Symbolic Exec GUI • Sociodemographics & Security • Audit Log SoK • Bot Survey Fraud • Privacy Zones
Interested in working with the group?
The lab site is the best place for current openings, application details, and ways to get involved.