Katharine Jarmul

Katharine Jarmul is a privacy activist and an internationally recognized data scientist and lecturer whose work and research focus on privacy and security in data science and machine learning. You can follow her work via her newsletter, Probably Private, or in her recently published book, Practical Data Privacy (O'Reilly, 2023), now also available in German and Polish.



Upcoming masterclasses featuring Katharine Jarmul

Private by Design and Secure by Default AI Products

In charge of deploying AI but not sure how to protect and secure these systems? Want to learn how to architect around potential security vulnerabilities and privacy gotchas? Do you need to design AI systems that meet trust, privacy and security guidelines?

This intensive, one-day masterclass goes beyond theoretical concepts, empowering experienced engineers and architects to proactively build privacy and security into AI products by design.

Move beyond reactive measures by learning:

  • Real-World Threat Modeling: Identify vulnerabilities in your AI systems.
  • Hands-On Red Teaming: Execute and evaluate attacks on models.
  • Meta Prompt Engineering & Guardrails: Learn how to create useful and more privacy-aware meta prompts. Use guardrails to identify insecure prompts or questionable AI output.
  • Data Flow Analysis, Risk Assessment, Privacy Controls: Map and mitigate privacy and confidentiality risks in your data workflows. Choose appropriate protections for identification, sanitization and pseudonymization.
  • Practical Model Evaluation Strategies: Build evaluation datasets and integrate security & privacy testing into your deployment workflow.

Whether you're just getting started or already familiar with AI concepts, this masterclass provides actionable insights, practical tools, and a clear framework for building more trustworthy AI solutions. You'll leave equipped to design and deploy better privacy and security within your organization.

Note: No deep math or stats background required (although it’s great if you have one!).
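To give a flavor of the guardrails topic above, here is a minimal, hypothetical sketch of a pre-model check that flags prompts containing obvious PII patterns before they reach a model. The patterns, names, and policy are illustrative assumptions, not material from the masterclass:

```python
import re

# Illustrative PII patterns only -- a real guardrail would use a vetted
# detection library and an organization-specific policy.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?\d[\s-]?){7,15}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of PII categories detected in the prompt."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

# A prompt leaking an email address gets flagged before it reaches the model.
print(check_prompt("Summarize the ticket from jane.doe@example.com"))
```

In practice a check like this would sit in front of the model call and either block the request, redact the match, or log it for review, depending on policy.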

Tuesday Sep 30 @ 09:00 | Copenhagen, Denmark

Reserve your spot now

Private by Design and Secure by Default AI Products


Wednesday Dec 3 @ 09:00 | Melbourne, Australia

Reserve your spot now

Private by Design and Secure by Default AI Products


Wednesday Dec 10 @ 09:00 | Sydney, Australia

Reserve your spot now

Upcoming conference sessions featuring Katharine Jarmul

Hacking AI Systems: How to (Still) Trick Artificial Intelligence

How easy is it to fool or trick today's AI systems? In this talk, we'll wander through the field of adversarial AI/ML, looking at how attacks and AI systems have evolved over the past 10 years. You'll learn more about how deep learning works by investigating how, when and why it breaks and walk away with open security questions and some notebooks to keep learning and hacking!
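As a taste of the adversarial setting the talk covers, here is a toy sketch (an illustrative assumption, not material from the talk): for a linear classifier, the worst-case perturbation under an L-infinity budget steps each feature against the sign of its weight, which is the linear special case of the fast gradient sign method:

```python
# Toy adversarial example on a linear classifier: score(x) = w.x + b.
# Under an L-infinity budget eps, stepping each feature by
# -eps * sign(w_i) moves the score down as fast as possible.

def sign(v):
    return (v > 0) - (v < 0)

def score(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def adversarial(w, x, eps):
    """Perturb x within an L-infinity ball of radius eps to lower the score."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [0.5, -0.3, 0.8], 0.1
x = [1.0, 1.0, 1.0]                 # original input, classified positive
x_adv = adversarial(w, x, eps=0.8)  # small per-feature change flips the sign
print(score(w, b, x), score(w, b, x_adv))
```

The same idea, applied via gradients rather than a closed form, is what makes deep networks vulnerable to imperceptible perturbations.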

Thursday Oct 2 @ 11:15 @ GOTO Copenhagen 2025

Get conference pass

Content featuring Katharine Jarmul

43:49 | Computers are Stupid: Protecting "AI" from Itself | GOTO Berlin 2018
33:07 | Practical Data Privacy | GOTO Amsterdam 2023
34:37 | Encrypted Computation: What if Decryption Wasn’t Needed? | GOTO Copenhagen 2024