
Responsible AI: how to develop ethical AI applications

Adopting safe, ethical, and sustainable practices around AI is a moral, economic, and regulatory imperative. Put ethical AI thinking into practice with our responsible AI framework. 

  • Putting proactive guardrails in place now can help safeguard the future of AI 
  • Explainability and interpretability are the bedrock of responsible AI business use cases 
  • Zühlke’s responsible AI framework codifies the process 

Adopting safe, ethical, and sustainable practices around AI isn’t just a moral imperative. It’s set to be a global mandate with the onset of AI regulation. Here we explore everything your organisation needs to know about the future of responsible AI (artificial intelligence). 

For many, AI is both a powerful enabler and a Pandora’s box.

It’s already capable of impressively creative and algorithmic thinking. But ever-increasing smarts alone won’t make it ready to police itself. Instead, human hands need to help shape AI into an ethical, trustworthy tool – one that can unlock new business value without landing anyone in hot water. 

Managing that process relies on understanding a few core concepts around the potential – and the potential pitfalls – of this burgeoning technology. But really, it all boils down to three things: the what, the why, and the how behind responsible AI thinking. 

First up, the big question: what does the word ‘responsible’ even mean when it comes to AI? 

What is responsible AI? 

Responsibility and ethics aren’t exactly the same thing, but they do have common roots. When it comes to artificial intelligence – and technology in general – they both mean operating openly, and from the seed of good intentions.  

AI capabilities are changing rapidly. So while it’s impossible to set rules in place that predict every use case, what you can – and should – do is be willing to show people what’s going on behind the curtain. 

It's very hard to have ethical AI if you don't know what the AI is doing. That's why you need a framework for responsible AI that includes explainability and interpretability.

‘Most companies have good intentions with AI. But if a model lacks thorough validation, it's easy to end up with unintended consequences – with potentially serious ramifications for the end user’. 

Explainability here means being able to show your workings: what kind of datasets and sources go into an AI system to make it produce results? It’s also about understanding which input data had the biggest influence on the output – to understand why a model made a certain decision. This can be achieved either by using an inherently explainable model, or by applying explainability tools on top of the original model.
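
To make that concrete, here’s a minimal sketch of the second approach – applying a model-agnostic explainability tool (permutation importance from scikit-learn) on top of an existing model to see which inputs influence its output most. The model choice, feature names, and synthetic data are illustrative assumptions rather than a recommendation for any specific system.

```python
# Hedged sketch: probing which inputs a trained model leans on most, using
# permutation importance. Feature names and data are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular decision problem (e.g. credit approval).
X, y = make_classification(n_samples=2_000, n_features=6, random_state=0)
feature_names = ["income", "tenure", "utilisation", "age", "postcode_idx", "txn_count"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each input hurt held-out accuracy?
# Large drops flag the inputs that drive the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in ranked:
    print(f"{name:>12}: {score:.3f}")
```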

There are different ways of defining interpretability. One is linked to understanding how a decision is made by an AI model. But it can also be about how you help the end user make sense of the result. 

But a true framework for AI responsibility goes even deeper than just that transparency layer. It can also foster an ‘AI for good’ mindset and processes that unlock better outcomes – for people, planet, and profit. 

So that’s explainability, interpretability, sustainability, and equality – all wrapped up under the banner of responsibility. But why? What makes formalising this approach such a necessity? 

Why is responsible AI so important? 

The short answer is to break the why into two areas:  

1. Mitigating unintended consequences 

Most companies have good intentions in mind with AI. But if a model lacks thorough validation and careful risk analysis and mitigation, it’s easy to end up with unintended consequences. These can have serious ramifications for the end user. That’s why it’s essential to invest time and resources in understanding all the possible risks a product opens up. 

Take the Instagram algorithm, for example, which served weight loss messages to teenagers, or the Apple Credit Card vetting system, which unwittingly discriminated against women.  

With the latter example, even removing gender from the data pool wasn’t enough of a safeguard. That’s because, even when gender is excluded from the AI model’s training data, the model can still infer it from other behaviours – like shopping habits – and make biased judgements based on that.  

That’s obviously very problematic, but it’s a great example of why you need explainable AI – so you can understand why the technology is making the decisions it does.
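
As a rough illustration of how that kind of proxy leakage can be surfaced, the sketch below checks whether a ‘removed’ protected attribute can still be recovered from the remaining features – if a simple classifier comfortably beats the base rate, proxies are present. All data and column names here are synthetic assumptions.

```python
# Hedged, self-contained sketch: even after the gender column is dropped,
# a simple classifier may still recover it from correlated features such as
# spending patterns. Columns and data are illustrative, not real records.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 5_000
gender = rng.integers(0, 2, size=n)  # the protected attribute we intend to exclude
remaining_features = pd.DataFrame({
    "income": rng.normal(40_000, 10_000, size=n),
    # Spending habits loosely correlated with gender act as proxies.
    "fashion_spend": rng.normal(200 + 150 * gender, 80, size=n),
    "electronics_spend": rng.normal(300 - 100 * gender, 90, size=n),
})

# Can the features we kept predict the attribute we thought we removed?
proxy_probe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000))
scores = cross_val_score(proxy_probe, remaining_features, gender, cv=5)
print(f"Gender recoverable with ~{scores.mean():.0%} accuracy (base rate ~50%)")
```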

2. Adapting to regulatory changes 

Another part of the responsibility pie is rapidly approaching in the form of sweeping legislative changes and AI regulation.

The recently tabled EU AI Act seeks to codify requirements around how AI can be deployed in consumer-facing products – with a focus on data openness and banning practices it sees as posing ‘unacceptable risk’, like AI-powered social scoring and behavioural manipulation. 

Crucially, the AI Act has the ability to set the tone globally – much like GDPR before it. And that's a good thing. Up until now the AI space has been a bit of a Wild West. 

‘The AI Act seeks to codify responsible AI requirements. Until now, the AI space has been a bit of a Wild West’. 

Failure to comply with regulatory changes will obviously have legal ramifications for any organisation doing business with the EU, but there is also a third, more cutthroat incentive: avoiding monetary losses.

Even outside the concept of responsibility, a badly developed AI can have devastating consequences if it produces errors or makes decisions that result in reputational damage. As any business knows: if you do things wrong, it can lead to costly lawsuits. 

So how do you codify and put responsible AI thinking into practice? The answer lies in planning ahead, rather than racing forwards and trying to course-correct when it’s already too late…

Put thinking into practice with our responsible AI framework

At Zühlke, we’ve been helping clients cement responsible AI practices by developing a four-part responsible AI framework that formalises the process – from the inception stage onwards.  

Here’s how it breaks down:

  • The human layer...

    The human-centred AI or ‘human layer’ asks some pretty fundamental questions about whatever AI use case is being ideated: is this going to be good for people? What will the consequences be? What potential risks are there?  

    The human layer is really about thinking in terms broader than an engineer typically might. It’s about asking should we even be doing this? And, if we find that there could be some risk of harm, what can we do to mitigate things?  

    That doesn’t make every project an automatic no-go. You can find risks with almost any product if it’s being misused, or if it’s being fed the wrong data. But it’s your obligation to mitigate those risks. You just have to look at the stakeholders as well as the people who are affected by the AI in question.  

    For instance, if your AI product is deciding who should be approved for a credit card, the bank is the stakeholder, but the end client is the person affected by it. So you have to consider every human facet of your project and be respectful towards all of them.

    This human element stretches far beyond the initial stages of an AI project, however. It also means adding a layer of human oversight that can supervise, spot errors, correct them, and take any additional steps needed to keep things on course. You need to have an effective mechanism in place in case things start going south. 

  • The data layer...

    The data layer is where openness and ethical practices come in. It’s about ensuring that the data you’re putting into an AI model or product has been ethically sourced, that there’s transparency from every angle, and that it’s accurate.  

    That can be a tall order when you’re piggybacking on huge, openly available models that scrape the whole internet, but it’s easier when you’re building AI models yourself. If you're building a smaller system, you really want to know what data has gone into your model, how it’s shaping things, and if that data is representative of the problem at hand. 

    In practice, that means data auditing – with a watchful eye on biases and other non-obvious influences your dataset might have. For example, someone’s postcode doesn’t determine their income, but data proxies and confounders can mislead an AI model into treating it as though it does. 

  • The model layer...

    The model layer revolves around explainability and validation: you need to be able to explain the behaviour of any AI model you use, and to perform balanced validation alongside continuous monitoring of its performance. You can do validation without explainability, but it’s important that you do both (see the validation sketch after this framework breakdown). 

    If you’re leveraging a huge generative AI model, you’re effectively taking that model off the shelf, so you have no control over its training data. But what you can do is be very mindful of the data you validate against, how you think through all the edge cases, and how you explain them. You want to favour algorithms that are more explainable, and where you can control the input and output space. 

    Or, in other words, you need to be able to explain how and why things work, and clearly interpret whatever results are being delivered. Ultimately, that means that there’s responsibility baked right into the choice of which AI model you want to use. 

  • The sustainability layer...

    Not every AI use case has to be inherently great for the planet, but there are ways in which AI stewardship can be proactively handled from an environmental standpoint. 

    These systems use huge amounts of energy. Data centres consume around 3% of the global electricity supply and account for around 2% of total greenhouse gas emissions. That might not sound like a lot, but it’s roughly on a par with the entire airline industry. That gives AI use some significant sustainability considerations. 

    That lends itself to some obvious best practices, such as not retraining models any more than you need to, and opting for a green cloud provider. These are decisions that should be thought about upfront and on an ongoing basis, rather than as an afterthought. 
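
Coming back to the model layer’s point about balanced validation, one practical reading of it is scoring a model slice by slice rather than trusting a single global average, so weak performance on a small subgroup isn’t hidden. The sketch below illustrates the idea with assumed synthetic data, segments, and an arbitrary alert threshold – it isn’t a complete validation or monitoring setup.

```python
# Hedged sketch of slice-by-slice ("balanced") validation: report a quality
# metric per segment and flag segments below a floor. Data, segment names,
# and the 0.75 threshold are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 3_000
results = pd.DataFrame({
    "segment": rng.choice(["18-25", "26-40", "41-65", "65+"], size=n),
    "y_true": rng.integers(0, 2, size=n),
})
# Stand-in scores; in practice these come from the model under validation.
results["y_score"] = np.clip(results["y_true"] * 0.4 + rng.random(n) * 0.6, 0, 1)

# Report the metric segment by segment rather than as one global number.
for segment, group in results.groupby("segment"):
    auc = roc_auc_score(group["y_true"], group["y_score"])
    flag = "  <-- investigate before release" if auc < 0.75 else ""
    print(f"{segment:>6}: AUC = {auc:.3f}{flag}")
```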

Ensure your framework fits within your existing workflows

At every layer, it's all about putting the right processes in place. 

The good news is that this doesn’t need to be transformative or prohibitive. In fact, our responsible AI framework should slot right alongside any modern business’ ethical data workflows.  

These checklists and processes should map against the way you're working already. Nobody wants to adopt a framework that’s just a bunch of hurdles and complications.  

So these processes need to be implemented in a way that facilitates high-quality outputs and actually makes it easier for data engineers to do their work.  

Because, otherwise, being responsible with AI is just something extra that needs to be done, rather than a mindset shift that enables smarter, more futureproof projects from the outset.

Speak to us today about how our ISO-accredited strategists, scientists, and engineers can help you create new value at scale with the right data strategy and responsible AI solutions.