We are a group of researchers discovering how to better protect people by understanding how humans interact with software. While we work on many areas of computer security, we primarily focus on the rapidly evolving landscape of Machine Learning (ML): understanding how to defend against new forms of ML-enabled abuse, and how to harness ML-powered tools for security systems. Jaron Mink directs the Happy Lab at Arizona State University in the School of Computing and Augmented Intelligence.
We use a combination of social science methods and software evaluation techniques. In human-subjects work, we capture user perceptions with observational techniques (e.g., interviews and surveys) and identify how different factors influence security with controlled experiments. In software, we use benchmarking techniques to rigorously compare systems. Combined, these methods let us holistically assess software security via both technical and usability metrics.
Our lab is actively recruiting PhD students, master's students, and undergraduates! If you're interested in joining the lab, learn more here!
Our work discovers how human interaction impacts ML security in two ways: how human factors can be (1) exploited to undermine security and (2) harnessed to improve it. As ML-enabled abuse becomes increasingly common, we investigate how lay users perceive and react to new attacks, e.g., how social media users react to deepfakes. As ML is increasingly applied in security-critical systems, we evaluate how usable these tools are for technical users, e.g., how easily ML developers can apply security defenses.
AI-generated (AIG) content represents a pressing societal concern; it can be used to create fake personas or impersonate real people in order to produce misinformation, conduct scams, or destroy reputations. Because these attacks often rely on how real people perceive this content, understanding that perception is critical to understanding the harms of AIG content and its potential mitigations. Our lab researches how people understand and perceive this emerging threat, as well as whether human-in-the-loop defenses are effective.
ML is increasingly being implemented in a variety of security-sensitive applications, with researchers often predicting practitioners' needs and security concerns; however, little work has investigated the accuracy of these predictions or the practical utility of these systems in the field. Our work bridges this gap by discovering the needs that security and ML practitioners have when building and using ML-enabled applications. Overall, our work identifies where academia's research agenda misaligns with practitioner requirements, ways to rectify that misalignment, and new paradigms for human-ML interaction in cybersecurity settings.
We're excited to work with the SEFCOM Lab to improve and evaluate scalable cybersecurity education at pwn.college. We're currently creating modules for practical adversarial machine learning lessons, discovering areas of improvement for courses, and improving personalized tutoring. Additionally, we work on improving diversity and inclusivity in security research and the cybersecurity workforce at large.