Bio

I’m a research scientist at Google DeepMind, where I help lead our AI Red Team. For the past twenty years I’ve worked across the public and private sectors securing the application of artificial intelligence in high-stakes environments. At Google DeepMind, I help define and lead the cross-functional AI Red and Blue (“ReBl”) team, ensuring that foundational models are battle-tested with the rigor and scrutiny of real-world adversaries, and I help drive the research and tooling that will make this red-blue mindset scalable in preparation for AGI.

Previously, as Managing Director at MITRE Labs, I led a team of over 200 scientists working in the public interest. There I built and led an AI Red Team focused on deployed AI systems, which can be susceptible to bias in their data; to attacks involving evasion, data poisoning, and model replication; and to the exploitation of software flaws to deceive, manipulate, compromise, or render them ineffective. My team developed methods to mitigate bias and defend against emerging ML attacks, worked on securing the AI supply chain, and more broadly helped ensure the trustworthiness of AI systems so they perform as intended in mission-critical environments. While at MITRE, my team, in collaboration with many industry partners, published ATLAS (Adversarial Threat Landscape for AI Systems), a knowledge base of adversary tactics, techniques, and case studies for machine learning (ML) systems, built from real-world observations, demonstrations by ML red teams and security groups, and the state of the possible from academic research.

I firmly believe that AI’s potential will only be realized through collaborations that produce reliable, resilient, fair, interpretable, privacy-preserving, and secure technologies.