Katharine Jarmul

Katharine Jarmul is a privacy activist and an internationally recognized data scientist and lecturer who focuses her work and research on privacy and security in data science and machine learning. You can follow her work via her newsletter, Probably Private, or in her recently published book, Practical Data Privacy (O'Reilly, 2023), now also available in German and Polish.



Upcoming masterclasses featuring Katharine Jarmul

Private by Design and Secure by Default AI Products

In this masterclass, you'll design an AI product from conception, through architecture, risk and threat modeling, and on to your deployment and testing plan, ensuring that privacy, transparency and security are built in. Along the way, you'll learn about common privacy and security anti-patterns in large-scale deep learning/AI systems and design better approaches that both communicate and enforce trust. By putting on your product, design, risk, architect, engineer and hacker hats, you'll leave the room with a more holistic and multidisciplinary perspective.

Expect hands-on exercises (and some code!) to:

  • Discover privacy and security anti-patterns in AI product design
  • Identify and evaluate privacy risk in AI systems
  • Map data and user flows to identify potential privacy issues
  • Evaluate AI-specific privacy and security threats/attacks
  • Design and review architectures, informed by risk and threat analysis
  • Evaluate and integrate use-case-specific guardrails and other potential technological solutions (e.g. privacy technologies)
  • Build evaluation datasets and pipelines
  • Define and measure success

You will leave the class informed by the latest best practices and information around building privacy-first, secure AI systems -- and hopefully inspired to take some ideas directly back to your AI, software or platform engineering work.

Wednesday Dec 3 @ 09:00 | Melbourne, Australia

Reserve your spot now

Private by Design and Secure by Default AI Products

In this masterclass, you'll design an AI product from conception, through architecture, risk and threat modeling, and on to your deployment and testing plan, ensuring that privacy, transparency and security are built in. Along the way, you'll learn about common privacy and security anti-patterns in large-scale deep learning/AI systems and design better approaches that both communicate and enforce trust. By putting on your product, design, risk, architect, engineer and hacker hats, you'll leave the room with a more holistic and multidisciplinary perspective.

Expect hands-on exercises (and some code!) to:

  • Discover privacy and security anti-patterns in AI product design
  • Identify and evaluate (regulatory) privacy risk in AI systems
  • Map data and user flows to identify potential privacy issues
  • Evaluate AI-specific privacy and security threats/attacks
  • Design and review architectures, informed by risk and threat analysis
  • Evaluate and integrate use-case-specific guardrails and other potential technological solutions (e.g. leading privacy technologies)
  • Build evaluation datasets and pipelines
  • Define and measure success

You will leave the class informed by the latest best practices and information around building privacy-first, secure AI systems -- and hopefully inspired to take some ideas directly back to your AI, software or platform engineering work.

Tuesday Sep 30 @ 09:00 | Copenhagen, Denmark

Reserve your spot now

Private by Design and Secure by Default AI Products

In this masterclass, you'll design an AI product from conception, through architecture, risk and threat modeling, and on to your deployment and testing plan, ensuring that privacy, transparency and security are built in. Along the way, you'll learn about common privacy and security anti-patterns in large-scale deep learning/AI systems and design better approaches that both communicate and enforce trust. By putting on your product, design, risk, architect, engineer and hacker hats, you'll leave the room with a more holistic and multidisciplinary perspective.

Expect hands-on exercises (and some code!) to:

  • Discover privacy and security anti-patterns in AI product design
  • Identify and evaluate privacy risk in AI systems
  • Map data and user flows to identify potential privacy issues
  • Evaluate AI-specific privacy and security threats/attacks
  • Design and review architectures, informed by risk and threat analysis
  • Evaluate and integrate use-case-specific guardrails and other potential technological solutions (e.g. privacy technologies)
  • Build evaluation datasets and pipelines
  • Define and measure success

You will leave the class informed by the latest best practices and information around building privacy-first, secure AI systems -- and hopefully inspired to take some ideas directly back to your AI, software or platform engineering work.

Wednesday Dec 10 @ 09:00 | Sydney, Australia

Reserve your spot now

Content featuring Katharine Jarmul

Computers are Stupid: Protecting "AI" from Itself (43:49)
GOTO Berlin 2018
Practical Data Privacy (33:07)
GOTO Amsterdam 2023
Encrypted Computation: What if Decryption Wasn't Needed?
GOTO Copenhagen 2024
