Fair AI: debiasing techniques that actually work

How can we rely on AI if AI is unreliable? That’s the big question at the centre of a range of multifaceted debates on the future of artificial intelligence...

December 16, 2024 · 5 minutes to read
With insights from Tabi Day and Dawson Silkenat, Professional Data Engineers

Alongside hallucinatory output and security weaknesses, one key area where AI’s reliability is often called into question is bias – and the ongoing challenge of ensuring fair AI.

After all, if you’re asking an AI tool to answer a question, but that answer is inherently skewed, what good is it? And more importantly: what damage could that answer do if taken at face value?

The ubiquity of bias

Bias is everywhere, and it’s notoriously difficult to eradicate.

That’s as true in AI development as it is in society at large, where inherent biases fuel our personal decisions and systemic biases disadvantage or favour certain groups.

On a base level, bias manifests in decisions and opinions based on the information we’ve absorbed – whether or not that information is accurate or a fair representation of the whole. And it’s exactly the same in AI. The models used in everything from ChatGPT to machine learning radiography tools are trained on finite datasets, and it’s rare for those datasets to be fully representative.

Take Google’s Gemini, for example. A recent $60m-per-year deal means the GenAI chatbot has access to the entirety of Reddit’s user-generated content for training purposes. But there’s bias in ‘them there hills’. Reddit’s audience skews male, with college-educated Americans making up around half of its 500m user base. So those users are naturally going to represent (and produce) a relatively narrow set of opinions when compared to the global human experience.

On the one hand, more data is often better. On the other, no dataset is entirely fair, which makes developing and debiasing AI models a tricky task. As the old saying goes, crap in, crap out.
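To make that concrete, here's a minimal sketch (in Python, with a hypothetical `gender` column and assumed reference proportions) of how you might measure how far a training set drifts from the population it's meant to represent:

```python
import pandas as pd

# Hypothetical training set; in practice this would be your real data.
train = pd.DataFrame({
    "gender": ["male"] * 70 + ["female"] * 25 + ["non-binary"] * 5,
})

# Assumed reference proportions for the population the model should serve.
reference = {"male": 0.49, "female": 0.50, "non-binary": 0.01}

observed = train["gender"].value_counts(normalize=True)

# Report the gap between the training set and the reference population.
for group, expected in reference.items():
    actual = float(observed.get(group, 0.0))
    print(f"{group:>11}: dataset {actual:.1%} vs population {expected:.1%} "
          f"(gap {actual - expected:+.1%})")
```

The check itself is trivial; the hard part is deciding which attributes and reference proportions matter for your use case.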

AI bias: feeding the fire

Bias in generative AI is potentially dangerous, but LLMs are just one piece of the larger AI puzzle. Machine learning systems and foundation models are other strands of artificial intelligence that can produce problematic output if their training data isn't adequately representative.

An AI-powered tool designed to find cancerous tumours in X-rays, for instance, could be at risk of giving biased false negatives if its training data only represents one demographic. A tool designed to recommend drug dosages might put patients at risk if the information shaping those recommendations is limited in scope.
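To catch that kind of skew before it reaches patients, a minimal sketch (with hypothetical labels, predictions, and a demographic column) of checking whether false negatives concentrate in one group:

```python
import pandas as pd

# Hypothetical evaluation results for a tumour-detection model.
results = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "B"],
    "actual":    [1,   1,   0,   1,   1,   1,   0,   1],   # 1 = tumour present
    "predicted": [1,   1,   0,   1,   0,   0,   0,   0],   # model output
})

def false_negative_rate(df: pd.DataFrame) -> float:
    """Share of actual positives the model missed."""
    positives = df[df["actual"] == 1]
    return float((positives["predicted"] == 0).mean()) if len(positives) else float("nan")

# A large gap between groups is a red flag for unrepresentative training data.
print(results.groupby("group").apply(false_negative_rate))
```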

Medical industries are at the sharp end of AI bias, then, but they’re not alone. The financial sector is another potential breeding ground for bias-driven risk. Algorithmic bias might result in discriminatory decisions on loans or credit ratings – and that has the potential to compound the systemic issues that contribute to biased datasets in the first place.

The EU’s AI Act, as a potential yardstick of regulatory thinking on this topic, sets out to curb AI use for social scoring applications for exactly these reasons. But it doesn’t take big decisions and high-risk output for bias to cause problems; bias in a more everyday, mundane AI setting can be just as problematic.

If you want to post a job ad, for example, and an AI tool suggests that a given demographic is more likely to apply, that will likely affect your thinking, the ad’s wording, and the kind of applicants you shortlist. The result perpetuates the stereotype: biased outcomes fuelling bias at the input level.

The issue is that bias can exist or be introduced at any stage of AI development and deployment, and there’s no one-size-fits-all solution to minimising it. In fact, debiasing AI models can even lead to reduced accuracy through blindness to determining factors – something we call the ‘bias-accuracy trade-off’. Imagine you’re marking exam papers, for example, and you decide to award everyone a ‘B’, regardless of their answers. That’s arguably the least biased way to mark the papers. But it’s also not very accurate.
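The exam-marking analogy translates directly into code. Here's a minimal sketch (using synthetic data and an assumed binary group attribute, not any production model) that contrasts a constant ‘everyone gets a B’ predictor with a trained model on both accuracy and a simple demographic-parity gap:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic data: one informative feature plus a hypothetical sensitive attribute.
n = 2000
group = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, 1)) + 0.8 * group[:, None]
y = (x[:, 0] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

def parity_gap(pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

for name, model in [("award everyone a 'B'", DummyClassifier(strategy="most_frequent")),
                    ("trained model", LogisticRegression())]:
    pred = model.fit(x, y).predict(x)
    print(f"{name:>22}: accuracy {accuracy_score(y, pred):.2f}, "
          f"parity gap {parity_gap(pred, group):.2f}")
```

The constant predictor scores a perfect parity gap of zero but mediocre accuracy; the trained model does the opposite – which is exactly the trade-off you have to manage rather than wish away.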

All this poses the big, obvious question: how can we manage to build fair, accurate AI tools without inherent biases?

A framework for fair AI

…You can’t. Well, not entirely. And that’s an important starting point: realising that – unless you can sample and mine data from every single person on Earth – there will always be bias in your modelling.

So the job becomes one of mitigation and minimisation on an ongoing basis. And, importantly, at every stage of an application’s development and deployment. From the outset, you need to factor in a cyclical, systematic process of iteration, measurement, and improvement of your input data, AI model, and metrics:

[Figure: a Venn diagram showing that fair AI's foundations sit at the intersection of input data, AI models, and measurement]

1. Input data

Systematically join datasets where possible to create reproducible versions of input data groups that can be tested against one another.
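As a rough illustration (table names, keys, and columns below are hypothetical), that might mean joining sources on a shared key, freezing a versioned snapshot, and splitting out the groups you'll later test against one another:

```python
import pandas as pd

# Hypothetical source tables sharing a patient_id key.
demographics = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                             "age_band":   ["18-30", "31-50", "51+", "31-50"]})
scans        = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                             "finding":    [0, 1, 1, 0]})

# Systematically join the sources into a single input table.
joined = demographics.merge(scans, on="patient_id", how="inner")

# Freeze a versioned snapshot so every experiment sees the same data.
joined.to_csv("input_data_v1.csv", index=False)   # assumed storage location

# Reproducible input data groups that can be tested against one another.
groups = {band: frame for band, frame in joined.groupby("age_band")}
print({band: len(frame) for band, frame in groups.items()})
```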

2. AI models

Trial a variety of techniques, using modularised code that can test models, parameters, and debiasing techniques.
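A minimal sketch of what ‘modularised’ can mean here (technique names are hypothetical, with group reweighting standing in for whichever debiasing methods you actually trial): every model, parameter set, and debiasing step is a swappable component.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Hypothetical debiasing steps: each returns data plus optional sample weights.
def no_debiasing(X, y, group):
    return X, y, None

def reweight_by_group(X, y, group):
    # Up-weight under-represented groups so each group contributes equally.
    counts = np.bincount(group)            # assumes integer-coded groups
    weights = (len(group) / (len(counts) * counts))[group]
    return X, y, weights

MODELS = {
    "logreg": lambda: LogisticRegression(max_iter=1000),
    "forest": lambda: RandomForestClassifier(n_estimators=100),
}
DEBIASERS = {"none": no_debiasing, "reweight": reweight_by_group}

def run_experiment(X, y, group, model_name, debias_name):
    """Train one (model, debiasing technique) configuration."""
    Xd, yd, weights = DEBIASERS[debias_name](X, y, group)
    model = MODELS[model_name]()
    model.fit(Xd, yd, sample_weight=weights)
    return model

# e.g. run_experiment(X, y, group, "forest", "reweight")
```

Because every configuration is just a pair of dictionary keys, the whole grid of models and techniques can be re-run whenever the input data changes.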

3. Measurement

Test, measure, and visualise the results – then use this to inform further adjustments to your datasets.

Do steps one and two right and you’ll generate a lot of data, based on a whole bunch of tested variables. To put that data to work, you’ll first need to create a baseline by bluntly debiasing your model and measuring for accuracy against fairness. Then you can map your more systematic techniques against that baseline to compare results – consistent performance will highlight the best debiasing methods.
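Sketched below (with synthetic stand-in data and a simple demographic-parity gap as the fairness measure – your metrics may differ), that baseline-and-compare loop might look like this:

```python
import numpy as np
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def parity_gap(pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

# Synthetic stand-in data (hypothetical; swap in your own evaluation split).
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)
X = rng.normal(size=(1000, 1)) + 0.8 * group[:, None]
y = (X[:, 0] > 0.5).astype(int)

records = []

def evaluate(name, model):
    pred = model.fit(X, y).predict(X)
    records.append({"config": name,
                    "accuracy": accuracy_score(y, pred),
                    "parity_gap": parity_gap(pred, group)})

# Baseline: bluntly debiased constant predictor (fair, but not very accurate).
evaluate("blunt baseline", DummyClassifier(strategy="most_frequent"))

# Map the more systematic configurations against that baseline,
# e.g. each (model, debiasing technique) pair from the previous sketch.
evaluate("logreg / no debiasing", LogisticRegression())

print(pd.DataFrame(records))   # consistent performers point to the best technique
```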

Our recommendations here rely on formalising a set of collaborative best practices based on several key principles:

  • Systematic testing and measurement
  • Diverse and representative data collection
  • Continuous monitoring and iteration
  • Shared responsibility across multiple stakeholders
  • Transparency in decision-making
  • Understanding and managing trade-offs

AI fairness: an ongoing group effort

This isn’t a one-and-done process, and it’s not something that you can run in the background to check a ‘fair AI’ box. Instead, it’s a framework built around constant iteration to provide a holistic view of bias in the AI pipeline – one that requires ongoing human intervention and multidisciplinary buy-in.

Debiasing, then, is a mission for everyone along the development pipeline, not just data scientists. By working together, data teams, product leaders, and regulatory bodies can enable transparent, auditable, and robust AI decision-making that keeps bias in check.

Build beyond bias: Zühlke’s responsible AI framework provides a complete set of guidelines for designing truly safe, ethical, and sustainable AI tools.
