AI in Pharmaceuticals and Life Science is showing a huge amount of promise, but the lack of clear guidelines means that patient data can still be at risk. Here, we share five principles that we believe are key to applying AI in Pharma and Life Science in a safe and controlled way.
AI for Pharma and Life Science: Uncharted territory
It’s taken just under five years for artificial intelligence to evolve from its first approved medical device to a common industry buzzword.
Within the last two years, more than a third of Healthcare provider executives have said they’re investing in AI, machine learning, and predictive analytics. This year, more than two thousand exabytes of patient data are predicted to be generated (an increase of more than 11,000% on the data generated in 2013). This is undoubtedly a big, exciting and pivotal time for companies across Life Science and Pharma looking to apply artificial intelligence in new areas and deliver innovative, effective and personalised solutions to patients.
But while the hype around AI in Pharma and Life Science is more abundant than it’s ever been, many businesses are starting to realise that the reality of applying AI is, in fact, fraught with challenges. One of the biggest of these challenges is the lack of concrete guidelines in place to ensure companies know exactly how to navigate the Healthcare industry’s uniquely complex regulatory landscape.
Why is it so complex? Because while AI and Pharma seem like a perfect match, as do AI and Life Science (for example, because a lot of the data involved is image or text data where machine learning has made tremendous advancement in recent years), there are potentially catastrophic consequences to poor data management and utilisation that simply don’t exist in non-regulated environments.
Unlike most other applications of machine learning, mistakes made by automated systems in the medical field can directly harm people. For example, a mistake in an algorithm that predicts whether or not a user will click on an online advertisement will most likely result in nothing worse than a small wasted advertising spend, while a wrong diagnosis (and subsequent therapy) could, in the worst case, lead to the death of a patient.
As one of the few organisations responsible for the approval of medical devices, the FDA has published various documents discussing how it plans to handle AI solutions in the future, as well as a precertification program for companies planning to submit medical AI solutions for approval. However, as of today, no concrete norms or guidelines exist for the development of AI devices in the medical field, which has naturally led to a great deal of uncertainty among brands looking to make a move.
Additionally, many researchers and companies have little experience with the regulations for approving medical devices. So even though there are papers on AI in Healthcare published daily, there are only a few approved applications of AI in medical practice.
The question remains, then, as to how life science and pharmaceutical companies can take advantage of AI safely, and prove that they’re doing so, in order to avoid disaster, maintain their brand reputation and keep the trust of patients themselves.
When AI goes wrong
In 2019, racial bias was found to infect a commonly used healthcare algorithm in the US. It was identified that black patients were, on average, far less healthy than white patients assigned the same score. The result of this bias: only 17.7 percent of patients automatically identified for enrollment in a high-risk care management program were black; whereas without the algorithm’s bias, around 46.5 percent would have been black. The bias arose because the algorithm used health care costs as a proxy for illness; since less money was historically spent on black patients with the same level of need, the scores systematically understated their health risks.
At Zühlke, we’ve helped many companies implement AI safely and effectively over the years, while navigating the Life Science and Pharmaceutical industries’ complex regulatory landscapes. Rather than keeping all the knowledge and experience we’ve amassed to ourselves, we’ve decided to share some of our secrets: five principles that we believe every business should be following if they want to be successful with AI in Pharma and Life Science.
The five principles of applying AI in Pharma and Life Science
- Good machine learning practices
Hopefully, this one goes without saying. If your business is already accomplished in machine learning and you have good data scientists on hand, they should be delivering on this objective already. If not, ‘good machine learning practices’ are mostly about running experiments properly so that you can trust the output of your analysis: documenting exactly what your business is doing with your data at every stage, ideally with the help of checklists that ensure the right people are doing exactly what they should be.
For example – if your teams are at the early stages of data-set creation, ‘good practices’ would involve things like having a specified data reference standard, which essentially means documenting who is labelling your data, what qualifications they have, which process was used, at which clinic the data was labelled, and so on. Properly splitting the data into training and test sets and fully analysing them is also an important step. Make sure they represent the patients that the machine learning model is going to be used on. And look out for hidden stratification, such as severe or harmless disease subtypes, that could mislead you into overestimating the performance of your model.
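To make this concrete, here is a minimal sketch of a stratified train/test split, with a check of how subgroups are distributed. The column names (`label`, `severity`) and toy data are purely illustrative:

```python
# Minimal sketch: a stratified train/test split that preserves class balance,
# plus a look at subgroups (e.g. disease severity) that could hide stratification.
# Column names ("label", "severity") are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy patient dataset: a diagnosis label plus a severity subgroup.
df = pd.DataFrame({
    "feature": range(12),
    "label": [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
    "severity": ["mild"] * 3 + ["severe"] * 3 + ["mild"] * 3 + ["severe"] * 3,
})

# Stratify on the label so both splits see the same class distribution.
train, test = train_test_split(
    df, test_size=0.5, stratify=df["label"], random_state=42
)

# Report counts per subgroup, not just overall: a model that only works on
# one severity subtype can look deceptively good in aggregate.
print(test.groupby("severity")["label"].count())
```

In practice you would also verify that each subgroup’s performance is evaluated separately, not just the overall accuracy.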
- Proper evaluation
This is about thoroughly evaluating your machine learning system (ideally as early as possible), and re-testing at regular intervals. For example, make sure your teams are conducting robustness and sensitivity analyses, investigating mistakes, comparing to simple baseline methods and to human performance, and analysing explainability (more on this in principle #5). Evaluation of the full system performance, including the health care professionals who act on the output, is also crucial.
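The baseline comparison mentioned above can be sketched in a few lines. The dataset and model choices here are illustrative, not a recommendation:

```python
# Minimal sketch: always compare a model against a trivial baseline before
# trusting its accuracy figure.
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: always predict the most frequent class.
baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)

print(f"baseline accuracy: {baseline.score(X_te, y_te):.2f}")
print(f"model accuracy:    {model.score(X_te, y_te):.2f}")
```

If the model barely beats the majority-class baseline, its headline accuracy is mostly an artefact of class imbalance rather than real predictive power.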
- Reproducible machine learning pipelines
This principle is basically about having a documented ‘recipe’ for all your code and data, either so you can prove that your system generates the same results consistently, or so you have the freedom to tweak specific variables under controlled circumstances – while still being able to re-run everything else as normal. It’s key for being able to demonstrate transparency.
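One way to picture such a ‘recipe’ is a single versioned config object driving a fully seeded run, so repeating the recipe repeats the result. Everything below (the config keys, the dataset tag, the stand-in training function) is hypothetical:

```python
# Minimal sketch of a reproducible pipeline: one config dict (versioned
# alongside code and data) drives the run, and all randomness is seeded,
# so re-running the same recipe yields identical results.
import hashlib
import json
import random

config = {
    "data_version": "patients_v3",  # hypothetical dataset tag
    "seed": 42,
    "test_size": 0.2,
}

def run_experiment(cfg):
    random.seed(cfg["seed"])  # seed every source of randomness
    # Stand-in for the real training step.
    return [random.random() for _ in range(3)]

# A hash of the config gives each run a stable, auditable identifier.
run_id = hashlib.sha256(
    json.dumps(config, sort_keys=True).encode()
).hexdigest()[:12]

first = run_experiment(config)
second = run_experiment(config)
assert first == second  # identical recipe -> identical results
```

Tweaking one variable (say, the seed) then changes the run identifier, giving you the controlled, traceable variation the principle describes.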
- Risk management
There’s no way around this principle as it’s mandatory. It means that, regardless of the type or scope of AI initiative you implement, you’ll need a way of systematically listing all the risks that your solution poses – in the context of that solution’s intended use. You’ll then need to assign probabilities to each of the risks identified, and demonstrate that you’ve got mitigation protocols in place in case any of the risks materialise (although hopefully they won’t).
- Simple, transparent, interpretable models
A lot of machine learning systems are essentially black boxes, meaning it’s very hard to get a full picture of what’s going on inside a given system or model. Unfortunately, this means that when something goes wrong, you’ll very likely be faced with the prospect of having to explain a bad result without the visibility you need to identify its root cause. In fact, even when your algorithm makes correct predictions, the doctor and patient will still want to know (and have the right to know) how those predictions came about: what symptoms and patient information led the system to make a certain diagnosis? Luckily, there are methods that can show which factors led a model to a particular diagnosis, so even more complex, modern deep learning algorithms can be used in a medical setting.
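One such model-agnostic method is permutation importance: shuffle each input feature in turn and measure how much performance drops. The dataset and model below are illustrative stand-ins, not a clinical setup:

```python
# Minimal sketch: permutation importance, a model-agnostic way to see which
# inputs drive a model's predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and measure how much test accuracy drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
top = result.importances_mean.argsort()[::-1][:3]
for i in top:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

For individual predictions (rather than overall model behaviour), per-example attribution methods play the same role, letting a clinician see which patient features pushed the model towards a given diagnosis.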
Beyond the data and algorithms
The majority of those five principles were about how your business handles the data itself, which will ultimately be the responsibility of your data or machine learning teams.
But you may still be wondering what you, personally (if you’re not a data scientist), can do in this space to increase your chances of a successful AI initiative. For example – as a key decision maker, business leader or driver of digital transformation, what can you contribute to ensure progress? And the answer is: a lot.
First of all, if you want to move fast, and stand the least chance of failure with AI, the best possible thing you can do is to reach out to an expert who knows exactly how to navigate this space, and how to identify where the pitfalls might be for your particular business. A partner who’s worked with a wide range of Life Science and Pharmaceutical companies, who knows not just how to accelerate development of your products and solutions in an AI context, but how to bring them to market successfully.
Second, know that what the industry needs most in order for the AI field to develop and add more value to businesses (and ultimately to patients) is early adopters. Despite the current uncertainties around regulations, early adopters are likely to benefit from a very rewarding head start if they can be among the first companies laying the foundations for future projects. So if you’re a driver of change or digital transformation in your business, try to encourage your teams to understand the value of doing this now rather than later.
Finally, know that despite our enthusiasm for AI in Pharma and Life Science here at Zühlke, we definitely don’t advocate applying AI in all contexts. For example, if you fully understand a phenomenon, but have little labeled data available for training and get good results with hand-crafted rules, there’s little reason to use machine learning.
Your next best moves
AI has the potential to fundamentally transform Healthcare.
But for AI to succeed at scale in the Pharma and Life Science spaces, it’s critical that regulating authorities provide the needed guidelines and boundaries for AI’s application in regulated products sooner rather than later. In the meantime, however, there are definitely smart moves that businesses who want to get in early should be making.
The best thing you can do right now is reach out to a partner who can guide you every step of the way, and ensure you have all the knowledge and tools you need to be successful with AI.
Bardia M. Zanganeh
Bardia M. Zanganeh is responsible for the Life Sciences and Healthcare practice in Switzerland. He serves leading healthcare institutions on all technology agenda issues. His primary areas of focus include digital innovation, business model transformation and product innovation. He also serves providers as well as medical technology and pharmaceutical companies. He has a background in engineering, consulting and entrepreneurship and is a lecturer at the University of Applied Sciences in Business Administration in Zurich.