Hands-on technical workshop

Practical data anonymization for real-world datasets

Learn to reduce re-identification risk while preserving analytical utility, using concrete methods, hands-on labs, and risk-based decision frameworks.

Wednesday, April 22, 2026 · 09:00–17:00 CEST · Zurich + online · English

Who this workshop is for

Designed for practitioners who work with sensitive or regulated data and need practical anonymization decisions they can defend.

  • Roles: data scientists, analysts, data engineers, privacy officers, compliance teams, and applied researchers
  • Domains: healthcare, insurance, fintech, mobility, public sector, and academic collaborations
  • Level: intermediate (best for participants comfortable with Python, SQL, or statistics basics)

At a glance

One full day, hands-on, with guided labs.

Capacity: 36 participants (24 in-person + 12 online).

Key takeaways

By the end of the workshop, participants can choose methods based on risk, utility, and implementation constraints.

Model threats clearly

Frame realistic re-identification risks using context, attacker assumptions, and auxiliary data.

Apply classic models

Apply k-anonymity, l-diversity, and t-closeness with an awareness of their strengths and common pitfalls.
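
As a flavor of the lab material, here is a minimal sketch of the core k-anonymity check: the records, column names, and quasi-identifier choices are hypothetical, and real datasets need generalization hierarchies rather than pre-bucketed values.

```python
from collections import Counter

# Hypothetical toy records: (age band, ZIP prefix) are the quasi-identifiers.
records = [
    ("30-39", "802*"), ("30-39", "802*"), ("30-39", "802*"),
    ("40-49", "803*"), ("40-49", "803*"),
    ("50-59", "804*"),  # a singleton group: unique, hence linkable
]

def k_anonymity(rows):
    """Return k: the size of the smallest quasi-identifier group."""
    return min(Counter(rows).values())

print(k_anonymity(records))  # smallest group has one record, so k = 1
```

A k of 1 means at least one person is unique on the quasi-identifiers alone; the lab exercises work through generalizing values until every group reaches the target k.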

Use modern privacy controls

Understand differential privacy concepts and practical synthetic data tradeoffs.
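
The canonical differential-privacy building block covered in this module is the Laplace mechanism for counting queries. A minimal sketch, with a hypothetical function name and the standard sensitivity-1 setting for counts:

```python
import random

def laplace_count(true_count: int, epsilon: float) -> float:
    """Epsilon-DP release of a counting query (sensitivity 1).

    The difference of two Exponential(epsilon) draws is distributed
    as Laplace(0, 1/epsilon), matching the required noise scale
    sensitivity / epsilon for a count that changes by at most 1 when
    one person is added or removed.
    """
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

Smaller epsilon means stronger privacy and noisier answers; the workshop discusses how to pick and account for epsilon across multiple releases.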

Measure utility impact

Compare anonymization options with utility metrics and documented risk decisions.
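
One simple utility metric of the kind used in the labs is the relative error of an aggregate before and after generalization. A sketch with made-up ages and a hypothetical band-midpoint rule:

```python
# Hypothetical raw data and a 10-year banding rule (e.g. 34 -> 30-39 -> 34.5).
raw_ages = [23, 25, 34, 36, 41, 47, 52, 59]

def to_band_midpoint(age):
    """Generalize an age to the midpoint of its 10-year band."""
    return (age // 10) * 10 + 4.5

anon_ages = [to_band_midpoint(a) for a in raw_ages]

true_mean = sum(raw_ages) / len(raw_ages)    # 39.625
anon_mean = sum(anon_ages) / len(anon_ages)  # 39.5
rel_error = abs(anon_mean - true_mean) / true_mean
print(f"mean shift: {rel_error:.3%}")        # → mean shift: 0.315%
```

Documenting metrics like this alongside the chosen privacy parameters is what makes a release decision defensible later.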

Agenda snapshot

Detailed timing and prerequisites are listed on the Agenda page.

Morning foundation

Scope, definitions, legal context, and threat modeling fundamentals.

Methods deep-dive

Classical privacy models and utility/risk evaluation methods.

Hands-on lab blocks

Notebook-based anonymization tasks with guided checks and peer review.

Implementation playbook

Operational rollout patterns, governance checkpoints, and next steps.

Scope and definitions

Clear terminology is essential before discussing methods or legal expectations.

  • Anonymization: Irreversible processing intended to prevent identification by means reasonably likely to be used.
  • Pseudonymization: Identifiers replaced but re-linking remains possible with separate key material.
  • De-identification: Broad umbrella term for reducing direct and indirect identifiers.
  • Masking: Field-level obfuscation that may or may not sufficiently reduce re-identification risk.
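
To make the pseudonymization definition concrete: a keyed transform lets the key holder reproduce the same pseudonym and re-link records, while anyone without the key cannot. A minimal sketch using HMAC; the key value and truncation length are illustrative placeholders.

```python
import hashlib
import hmac

# The secret key is the "separate key material" from the definition above:
# whoever holds it can re-derive pseudonyms and re-link records.
SECRET_KEY = b"store-me-in-a-vault-not-in-code"  # placeholder for illustration

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed pseudonym for a direct identifier."""
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

# The same input always maps to the same pseudonym under the same key,
# so joins across tables still work for the key holder.
```

Because re-linking remains possible, pseudonymized data generally still counts as personal data; it is a safeguard, not anonymization.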

Workshop coverage

  • k-anonymity, l-diversity, t-closeness
  • differential privacy concepts in practical workflows
  • synthetic data use cases and risk boundaries
  • utility metrics and release governance
  • responsibly framed attack examples and defenses

Educational content only, not legal advice.

Ethical and legal context

  • Removing names alone is not sufficient; risk depends on context and linkage opportunities.
  • Examples are educational and should be adapted to your organizational and legal context.
  • Regulatory references include GDPR and Swiss FADP/DPA context resources.

Data handling transparency

  • Registration data is used only for workshop operations and invoicing.
  • No sale or sharing of attendee data with third-party advertisers.
  • Mailing list is explicit opt-in with one-click unsubscribe.
  • Cookie policy: essential session cookies only unless stated otherwise.

Read privacy details

Reserve your seat

Limited capacity to keep lab support practical and interactive.