Prospective students: Thanks for your interest in our research group! Please fill out this form: Joining the Sociotechnical Safety Lab, and I'll reach out to you if there is a fit with our current projects and needs.
  • For PhD admissions: I am admitting students to UW HCDE, and am open to co-advising in CSE and the iSchool! The application deadline for HCDE is December 2, 2026. For this cycle, I am looking for 1-2 students to work on topics in sociotechnical safety, decentralized AI systems (e.g., small and private language models), and social and emotional uses of AI.
  • For undergrad and Master's research assistants: There may be opportunities to work on our existing projects (described below) on a case-by-case basis. We will also be hosting 1-2 Directed Research Groups at HCDE in the fall.

Introducing the Sociotechnical Safety Lab

Computing technologies have brought incredible innovations to how we live, love, work, and play. Along the way, they’ve also become mediators for harm. From personal computing to spyware, workplace collaboration to workplace surveillance, AI companions to AI safety, the tools that are expanding and connecting our lives are also exacerbating inequity and enabling abuse.

The Sociotechnical Safety Lab, led by PI Emily Tseng, explores how computing technologies mediate harm, how to intervene, and what it means to do so. Our research develops the systems, interventions, and design principles that we need to make digital technology safer for everyone.

We are driven by the conviction that technology safety is sociotechnical and ongoing: not only a technical property of secure, private, or better-designed systems, but also a relational and political achievement; and not a problem that can be solved, but a long-term practice of research and innovation as technologies and societies evolve.

Our work pursues two complementary paths:

1. We ensure existing technologies support people navigating digital life. We develop frameworks to understand and evaluate how existing technologies shape our capacity to connect, relate, know, trust, and live well, not only for technology users but also for broader society. This foundational work informs interventions supporting people managing digital safety for themselves and their communities, through novel systems, design patterns, and policy recommendations. Where appropriate, we also deploy direct services to build care infrastructure (cf. CHI '22) for harm survivors.

2. We build alternatives, in close partnership with communities. In parallel, we develop community-controlled computing infrastructure to build capacity for people to own, operate, and govern their own technologies. Our community-partnered work creates open-source toolkits, governance frameworks, and technology literacy curricula to give people more equitable and community-accountable technology options.

Across both paths, we innovate on core methods in HCI and AI to make them suitable for digital safety. Harm reduction in technology design requires researchers and research participants to engage deeply with stories of some of the worst experiences of people's lives. We develop novel systems, techniques, and best practices to make digital safety research safer for researchers and participants alike: practice innovations that we treat as contributions in their own right.

Some of our current projects include:

  • Advancing human relational capacities amidst social and emotional uses of AI [CHI 26]
  • Building community-controlled AI tools with and for journalists [FAccT 24, FAccT 25]
  • Developing trauma-informed approaches to studying digital safety harms [CSCW 25]
  • Balancing burden and benefit in synthetic representation in safety engineering [CSCW 24]

Our research contributes to the growing field of digital safety, spanning human-computer interaction, computer security and privacy, participatory design, and AI. In addition to our foundational and methodological research, we are invested in building digital safety as a field, including through scholarly organizing and interdisciplinary community-building across HCI, security, and AI. Emily has led convenings such as alt-FAccT 2025, the unofficial NYC-based satellite gathering for ACM FAccT, and the CHI 2026 workshop on Social and Emotional Uses of AI. She is also active in the Computing Community Consortium, and recently led the development of training resources for research supporting at-risk users: in leading and mentoring at-risk user research, and handling the problem of problem selection.

We are based at the Department of Human Centered Design and Engineering (HCDE) at the University of Washington. Our research is supported by the National Science Foundation, the UW Center for an Informed Public, the UW Community-Engaged Computing Initiative, and unrestricted gifts from Google and Microsoft.


Lab Members

PhD Students

Hanna Barakat is a first-year PhD student at the University of Washington. She is supported by the NSF Graduate Research Fellowship and the UW College of Engineering Dean's Fellowship. Hanna's research brings together Human-Computer Interaction and Science and Technology Studies to examine how emerging technologies reproduce systemic harm, and how systematically marginalized communities creatively engage technology to build alternatives. She is focused on the intersection between (1) physical infrastructures (e.g., agriculture technologies, information and communications technologies, and data centers) and (2) information systems (e.g., peer-to-peer networks, and knowledge representation in AI-based technologies). Hanna has worked on a range of applied research projects, translating insights across academia, industry, civil society, and community advocacy groups. She graduated with honors from Brown University. She is also an artist with AIxDesign and a competitive athlete who represented Palestine in the Olympic Games. You can reach her at hbarakat@uw.edu or on LinkedIn.

Matthew Bilik is a first-year PhD student in Human Centered Design and Engineering (HCDE) at the University of Washington. He is supported by the NSF Graduate Research Fellowship and the UW College of Engineering Dean's Fellowship. Matt is a technologist and a social scientist interested in studying the views and practices of AI researchers and engineers, the AI moment's historical roots, and AI's implications for safety, security, privacy, and wellbeing. He uses findings from this work to inform technology policy. Presently, Matt is also interested in designing and building thoughtful alternatives to AI infrastructures and techniques (e.g., data cooperatives and small language models). In this line of work, he develops prototypes through community-engaged, participatory methods (most recently with journalists), and studies the impacts of these deployments. Matt is a graduate of the University of Michigan, where he studied computer science and sociology (with high honors) and completed a minor in science, technology, and society studies. He wrote his undergraduate thesis on data privacy discourses in the 1965 and 1966 Hearings on the Computer and Invasion of Privacy. He is also a former data editor, managing online editor, and data journalist at The Michigan Daily.

Amy W. Xiao is a first-year PhD student and NSF GRFP fellow studying digital safety for marginalized groups, with a focus on peer- and community-based support against harm both facilitated and enacted by technology. They draw from theories of disability justice and WOC feminism to reflect on how we design, govern, and resist technology. Some specific areas of digital safety that Amy is thinking about are: collaborative sensemaking and help-seeking by immigrants facing tech-mediated scams; advice-seeking from AI for interpersonal and intimate risk navigation; and frameworks of race and privacy within the surveillance of care workers. Previously, Amy was at Brown University, where they worked with the Brown HCI Lab (now Socio-HCI @ Brown) and the Human Trafficking Research Cluster. Get in touch at amywxiao@uw.edu or on LinkedIn!