The Happy Lab studies how people shape the security, privacy, and real-world use of machine learning systems. We investigate where human factors create new vulnerabilities, where they unlock better defenses, and how to build safer AI systems in practice.
Research Areas
ML-Enabled Abuse
How people abuse ML systems, and how people perceive and respond to AI-generated media, deepfakes, and other emerging forms of sociotechnical harm.
AI Sexual Content • Bias in AI Media Moderation • Provenance Indicators
How ML can be integrated into security-sensitive environments, including education, analyst tooling, and practical support systems for defenders.
How sociotechnical factors shape whether ML defenses are understandable, adoptable, and useful in real security and privacy workflows.
Usable security, system security, and broader evaluation of how people interact with security-critical software beyond ML-specific settings.
Symbolic Exec GUI • Sociodemographics & Security • Audit Log SoK
Recent News
March 17, 2026
Our new paper, Signals of Provenance: Practices & Challenges of Navigating Indicators in AI-Generated Media for Sighted and Blind Individuals, was conditionally accepted. Congrats, everyone!
January 19, 2026
Work with the SEFCOM and TSP Lab has been conditionally accepted to CHI 2026. Congrats, all!
September 22, 2025
Joint work with Lindsay Sanneman and Anil Murthy was accepted to the NeurIPS LLM-Eval Workshop.
June 17, 2025
Jaron and Tanusree from the GPSLab were recently awarded a Google Research Scholar Grant for their proposal: Designing Accessible Tools for Blind and Low-Vision People to Navigate Deepfake Media.
Interested in working with the group?
The lab site is the best place for current openings, application details, and ways to get involved.