Data Strategy

How to develop responsible AI

  • A responsible approach towards AI and ML is key
  • Zühlke has developed an ethical AI framework for this purpose
  • This framework identifies the application's potential risks and harms at different levels

Innovative solutions based on artificial intelligence (AI) and machine learning (ML) offer great potential in various applications - especially in the fields of health and sustainability. However, a responsible approach to the new technological possibilities is crucial.

At Zühlke, we believe that artificial intelligence (AI) will have a positive impact on our society and environment. Innovative AI solutions in medicine can improve patient outcomes through better diagnosis, and free up healthcare professionals' time by automating administration, allowing them to spend more time with their patients. In other fields, such as sustainability, AI can play a pivotal role in reducing energy consumption and CO2 emissions, for example by optimizing transport networks and monitoring deforestation. It can also be used to automate tedious tasks, improving working conditions for employees across all sectors.

The honeymoon is over

AI and machine learning have been seen as a magical cure for everything, but we have started to see a backlash against some of the first large-scale implementations. We have heard how Instagram's algorithms recommended dieting content to young teenagers, how AI-based hiring tools discriminated against women, and how hundreds of AI-based applications were built to detect COVID-19, yet few of them helped. The list goes on.

Regulation does not have to impede innovation

Such failures can be avoided with the right approach. Developing responsible and trustworthy AI is not only a good way to prevent reputational damage, reduced revenue, and lost customers. It will also soon become a legal requirement: already in April 2021, the EU published its intention to regulate AI on its official website. Although many fear that regulating AI will slow down innovation, our experience in medical AI, a field that has long been subject to regulation, clearly shows that such hurdles can be overcome with the right frameworks. Rather than impeding innovation, these frameworks promote high-quality solutions by providing checklists that ensure Good Machine Learning Practice is followed throughout the development process and that prevent common pitfalls leading to unintended consequences. Although the new regulations have not yet been finalized, the first communications from the European Union indicate that they will follow a similar pattern to the medical regulations and will be centered around risk evaluation and risk mitigation.

The ethical AI framework

Responsible AI covers ethical aspects, interpretability requirements, and sustainability along the whole development chain, from the initial decision to start developing an application to the user's interaction with the final product. Based on long experience of developing machine learning applications in the medically regulated field, Zühlke has transferred and expanded these learnings to develop frameworks for both ethical and interpretable AI. The frameworks cover multiple layers of the development process: the data, the model, and the human interaction. They are hands-on frameworks that clearly indicate which considerations are needed at every phase of the implementation and distribute roles and responsibilities.

As with medically regulated products, the ethical framework is centered around an assessment of the potential risks and harms that could be inflicted by the application. Based on the level of risk, it is then decided which mitigations are necessary before development can proceed. The framework ensures that AI applications are compliant and aligned with corporate values, and it reduces the risk of unintended consequences.
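A risk-based gating step of this kind can be sketched in a few lines of code. The risk levels and mitigations below are invented purely for illustration; they are not the framework's actual categories:

```python
# Illustrative sketch only: these risk levels and mitigations are
# hypothetical examples, not the framework's real categories.
REQUIRED_MITIGATIONS = {
    "low":    ["document design decisions"],
    "medium": ["document design decisions", "bias audit of training data"],
    "high":   ["document design decisions", "bias audit of training data",
               "human-in-the-loop review", "continuous performance monitoring"],
}

def mitigations_for(risk_level: str) -> list[str]:
    """Return the mitigations required before development may proceed."""
    if risk_level not in REQUIRED_MITIGATIONS:
        raise ValueError(f"unknown risk level: {risk_level!r}")
    return REQUIRED_MITIGATIONS[risk_level]

print(mitigations_for("medium"))
```

The point of making the mapping explicit is that a project cannot quietly skip a mitigation: the required list is looked up, not negotiated case by case.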

The layers of the framework

The human interaction layer

In summary, the human interaction layer covers clear communication to the user about the limitations of the data and the model, transparency, respectful interactions with users regardless of their origin, gender, or religious beliefs, and continuous monitoring of the application's behavior over time.

The data layer

The data layer covers privacy, ethical data collection, and how to deal with biases (which often mirror prejudices in the real world, e.g. the assumption that an engineer must be a man), proxies (non-obvious biases, e.g. postcodes acting as a proxy for income), and hidden confounders (non-obvious influences that may be present in the dataset). The data layer also provides clarity on what it means for users to share their data, and on the impact of long-term data shifts on the model.
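As a minimal sketch of what a first check for proxies can look like, the snippet below flags features that correlate strongly with a sensitive attribute. The column names, values, and the 0.7 threshold are all illustrative assumptions; real proxy detection would also have to account for non-linear relationships and combinations of features:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def find_proxies(data, sensitive, threshold=0.7):
    """Flag features strongly correlated with a sensitive attribute --
    a first, crude hint that they may act as a proxy for it."""
    target = data[sensitive]
    return [name for name, values in data.items()
            if name != sensitive and abs(pearson(values, target)) > threshold]

# Toy data: every column name and value here is an invented example.
rows = {
    "postcode_band":    [1, 1, 2, 2, 3, 3, 4, 4],
    "years_experience": [5, 2, 7, 1, 3, 8, 2, 6],
    "income":           [30, 32, 40, 41, 55, 52, 70, 68],
}
print(find_proxies(rows, sensitive="income"))  # ['postcode_band']
```

In this toy example the postcode band tracks income almost perfectly, so a model given postcodes could discriminate by income even if income itself is never used as a feature.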

The model layer

Regarding the model layer, we underline the importance of choosing a metric that cannot mislead the end user about real performance, the necessity of explaining the model's behavior, the need to perform an inclusive and balanced validation and verification, and continuous monitoring of the model's performance.
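A classic example of a misleading metric is plain accuracy on an imbalanced dataset, such as a rare-condition screening task. The numbers below are invented for illustration:

```python
# Toy illustration (all numbers invented): on an imbalanced test set,
# a model that never predicts the rare positive class still scores
# high accuracy, while recall exposes the failure.
y_true = [0] * 95 + [1] * 5     # 5% positive prevalence
y_pred = [0] * 100              # degenerate "always negative" model

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
true_pos = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
recall = true_pos / sum(y_true)

print(f"accuracy: {accuracy:.2f}")  # 0.95 -- looks impressive
print(f"recall:   {recall:.2f}")    # 0.00 -- misses every positive case
```

Reporting accuracy alone would let this useless model look impressive; recall (or balanced accuracy) is the honest metric for such a task.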

The sustainability layer
The sustainability layer explains how to minimize the computational load and options to choose a green cloud provider.
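As a rough illustration of why the choice of region and provider matters, the back-of-the-envelope estimate below combines power draw, runtime, data center overhead (PUE), and grid carbon intensity. The formula is standard, but every number here is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope training-footprint estimate:
# energy = power x time x PUE, emissions = energy x grid intensity.
# All figures below are invented for illustration.
def training_co2_kg(gpu_power_kw, hours, pue, grid_kg_co2_per_kwh):
    energy_kwh = gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Same hypothetical job in two hypothetical regions: a fossil-heavy
# grid vs. a largely renewable one from a "green" cloud provider.
job = dict(gpu_power_kw=0.3, hours=100, pue=1.2)         # 36 kWh total
print(training_co2_kg(**job, grid_kg_co2_per_kwh=0.7))   # ~25 kg CO2
print(training_co2_kg(**job, grid_kg_co2_per_kwh=0.03))  # ~1 kg CO2
```

Even with identical code and hardware, the assumed grid intensity changes the footprint by more than an order of magnitude, which is why the provider and region choice sits in this layer alongside reducing the computational load itself.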

The Interpretable AI framework

Working with interpretable AI is a prerequisite for ensuring the compliance of the final application. This framework is therefore an integral part of the ethical framework, but it can also be used on its own.

The human-centric layer of the framework covers clarity towards the user about the added value of the application and clarifies the need for explanations. It creates human-interpretable explanations of the model's behavior and ensures transparency toward the end user.

In the data layer, the focus is on understanding the origin of the data, the plausibility of the selected features, and the analysis of confounders. It is also about creating clarity on what data the user needs to provide and the consequences of sharing that data.

The model layer covers the need for transparent models versus explainable models. It also analyzes the fidelity and consistency of the explanations.

How to implement the frameworks

To create the desired impact, it is important that the framework becomes an integral part of the work culture. This is achieved by implementing comprehensible, easy-to-follow processes within the organization. An infrastructure and culture that favor transparency, a diverse workforce that encourages open communication, and continuous training of employees are other factors paving the way to a successful implementation. Last but not least, proper documentation of all design decisions made during development is necessary to ensure transparency and that all potential pitfalls have been considered.

Contact person for Switzerland

Dr. Lisa Falco

Lead Data Consultant

Lisa Falco is passionate about the positive impact that AI and machine learning can bring to society. She has more than 15 years of industry experience working in medical applications of data science and has helped bring several AI-driven MedTech products to market. Lisa has a PhD from EPFL, Switzerland, in Biomedical Image Analysis and an MSc in Engineering Physics from Chalmers, Sweden.
