Marika Swanberg

AI Agent Safety Engineer & Researcher on Google Chrome

About Me

I am an AI Safety Engineer and Researcher on the Chrome team at Google in New York City. I design and productionize safety mechanisms for AI agents (see this blog post featuring my team's work). Before that, I was briefly part of the Privacy Sandbox team, where I did in-house differential privacy research for private model training at scale.

Prior to joining Google, I completed my PhD in Computer Science at Boston University, where I was advised by Adam Smith and served as a Hariri Institute Graduate Student Fellow. My research focused on differential privacy, and I also worked on related topics such as measuring pretraining data memorization by LLMs, formalizing the Right to Be Forgotten, and modeling secure messaging in the universal composability framework.

Persistent themes in my work

Some prior dabblings

Selected Research & Publications

Measuring memorization in language models via probabilistic extraction
Jamie Hayes, Marika Swanberg, Harsh Chaudhari, Itay Yona, Ilia Shumailov, Milad Nasr, Christopher A Choquette-Choo, Katherine Lee, A Feder Cooper
NAACL 2025

We introduce probabilistic discoverable extraction, a new definition of memorization that accounts for the sampling methods used by LLMs, and show that it provides a more nuanced quantification of training data memorization in a realistic adversarial setting compared to previous measures.

[Link to Paper]
Beyond the Worst Case: Extending Differential Privacy Guarantees to Realistic Adversaries
Marika Swanberg, Meenatchi Sundaram Muthu Selva Annamalai, Jamie Hayes, Borja Balle, Adam Smith
arXiv preprint

This work develops a framework for computing high-probability guarantees for DP mechanisms against more realistic classes of attackers, rather than worst-case theoretical adversaries. In particular, it allows us to do "canary-less" auditing of LLMs in one run.

[Link to Paper]
Control, Confidentiality, and the Right to Be Forgotten
Aloni Cohen, Adam Smith, Marika Swanberg, Prashant Nalini Vasudevan
ACM SIGSAC Conference on Computer and Communications Security (CCS 2023)

Explores how deletion should be formalized in complex systems, critiquing current machine unlearning definitions.

[Link to Paper]
Differentially Private Sampling from Distributions
Sofya Raskhodnikova, Satchit Sivakumar, Adam Smith, Marika Swanberg
NeurIPS 2022 and SIAM 2025

Investigates the complexity of sampling from a distribution while maintaining privacy, offering new lower bounds for fundamental statistical tasks. Fun fact: the main lower bound technique came from Satchit's and my final project for Sofya's Sublinear Algorithms course!

[Link to Journal Version]

Other things about me

Contact

Feel free to reach out, especially if you'd like to invite me for an in-person talk in a nice destination.