ML-Enabled Abuse
How people abuse ML systems, and how people perceive and respond to abusive AI-generated media.
AI Sexual Content • AI Provenance Signals • Bias in AI Media Moderation • Perceptions of AI Media
The Happy Lab studies how people shape the security, privacy, and real-world use of machine learning systems. We investigate where human factors create new vulnerabilities, where they unlock better defenses, and how to build safer AI systems in practice.
Research Areas
How sociotechnical factors impact real-world adoption of ML defenses.
How ML can be integrated into security-sensitive environments.
Usable security and privacy, system security, and evaluation of HCI methodology.
Symbolic Exec GUI • Sociodemographics & Security • Audit Log SoK • Bot Survey Fraud • Privacy Zones
Recent News
March 17, 2026
Our new paper, "Signals of Provenance: Practices & Challenges of Navigating Indicators in AI-Generated Media for Sighted and Blind Individuals," was conditionally accepted. Congrats, everyone!
January 19, 2026
Work with the SEFCOM and TSP Lab has been conditionally accepted to CHI 2026. Congrats, all!
September 22, 2025
Joint work with Lindsay Sanneman and Anil Murthy was accepted to the NeurIPS LLM-Eval Workshop.
June 17, 2025
Jaron and Tanusree from the GPSLab were awarded a Google Research Scholar Grant for their proposal, "Designing Accessible Tools for Blind and Low-Vision People to Navigate Deepfake Media."
Interested in working with the group?
The lab site is the best place for current openings, application details, and ways to get involved.