
Tech Tomorrow Podcast

Transcript: Will AI and digital twins make animal testing in drug discovery obsolete?

Read the full transcript of the fifth episode of the Tech Tomorrow podcast, 'Will AI and digital twins make animal testing in drug discovery obsolete?', with Professor Julie Frearson.

DAVID ELLIMAN

Hello and welcome to Tech Tomorrow. I’m David Elliman, Chief of Software Engineering at Zühlke. Each episode, we tackle a big question to help you make sense of the fast-changing world of emerging tech.

Today, I’m joined by Professor Julie Frearson, SVP and Chief Scientific Officer at Charles River Laboratories — a company providing products and services to support drug discovery. She leads the company’s strategic venture funds and innovation partnerships and brings extensive experience in early-stage drug discovery.

Today, she’s here to help me answer the question: Will AI and digital twins make animal testing in drug discovery obsolete?

Before we begin, let’s clarify what a digital twin is. In short, it’s a virtual version of a real-world object or system. Take BMW — they built a virtual factory to test sustainability, digitalisation, and other metrics before implementing them in the real world.

With that in mind, let’s get started.

So, if we consider that technology, AI, digital twins, et cetera, is in some sort of transitional state. There's a patchwork, I should imagine, in your industry, where some things are more effective than others and some things are more suitable to be applied than others. And I'm guessing that the direction of travel is that, over time, that patchwork becomes larger. Could you describe any areas that have already changed in how we discover or test new treatments?

PROFESSOR JULIE FREARSON

In small molecules, there are AI algorithms that help us identify new chemical entities and prove that they are binding to the right targets. They give us a read on the physicochemical properties of those small molecules; they give us a read on the drug-likeness of small molecules.

So, what that's allowed us to do is, in the computer, run much bigger experiments and interrogate much more chemical space than you would if this was entirely reliant upon traditional experimental processes. So, the experiments are bigger, and then you like to think that because you're able to profile these small molecules virtually, you are likely to get to the right combination of properties with increased speed and potentially lower cost. Although cost is always a question that I get asked about AI deployment, I'm not yet convinced that there's a cost benefit to deploying AI into drug discovery.

I think there's definitely real progress, and tangible benefit is evident, in early-stage drug discovery when you're designing therapeutics.

DAVID ELLIMAN

And what about animal testing? Have you seen any progress on that front?

PROFESSOR JULIE FREARSON

The other area where I see real benefit emerging - maybe I wouldn't say it's yet fully proven, but it's definitely emerging - is the concept of digital twins, and in this case the concept of virtual animals.

And I'm not sure we're quite ready to have virtual animals that can show us the difference between an untreated and a treated animal from a toxicological signal perspective. But where we are definitely making progress and having tangible benefit is where we are using virtual animals to replace control animals in studies.

So, when you do a study, you have control arms, you have treated arms. And if you think about it, the control arms are always the same, right? Every time you run a study, it's the same control arm. You have to think about, you know, different species, but fundamentally, a ton of data pre-exists, so why keep doing the same thing? It's a wonderful opportunity to push forward the science of virtual animals.

We've done that in terms of control animals for pharmacology studies where we're evaluating a potential oncology drug. We've been able to develop algorithms that allow us to reduce the number of animals in the control arm of these PDX studies. We've also been developing a safety assessment equivalent virtual animal, and that's allowed us to take those control arms and at least reduce the size of them today.

When we've developed these virtual animals, we've found methodologies that work that allow us to be able to pick individual animals from our data history and use them instead of the live equivalent. We've shown through, I think, maybe 20 plus retrospective analyses that using a virtual animal as a control has no impact on the conclusions that you derive from the overall experiment.
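
The retrospective check Julie describes (swap the live control arm for animals picked from the data history, then confirm the study's conclusion is unchanged) can be sketched in a few lines of Python. All data below is synthetic and purely illustrative; this is not Charles River's actual methodology.

```python
import random
import statistics

def virtual_control_arm(historical_controls, n):
    """Build a virtual control arm by picking individual animals
    from a pre-existing pool of historical control data."""
    return random.sample(historical_controls, n)

def mean_difference(treated, control):
    """Effect estimate: difference in group means (e.g. a tumour-volume
    or body-weight readout)."""
    return statistics.mean(treated) - statistics.mean(control)

# Synthetic readouts: 500 historical controls, plus one live study.
random.seed(0)
historical_controls = [random.gauss(100, 5) for _ in range(500)]
live_controls = [random.gauss(100, 5) for _ in range(10)]
treated = [random.gauss(80, 5) for _ in range(10)]

# Swap the live control arm for a virtual one and check that the
# study's conclusion (a clear treatment effect) is unchanged.
virtual = virtual_control_arm(historical_controls, n=10)
effect_live = mean_difference(treated, live_controls)
effect_virtual = mean_difference(treated, virtual)
```

In Julie's account, the real validation was 20-plus retrospective analyses; this toy version only checks that the treated-versus-control effect estimate comes out essentially the same with either control arm.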

DAVID ELLIMAN

We sometimes find that executives get things wrong about digital twins in terms of their accuracy, and the biggest misconception is treating a digital twin as a crystal ball. Executives often think that if you've got a sophisticated model, you can predict anything, but digital twins are fundamentally backward-looking.

They're built on historical patterns. They're brilliant at showing you the what-if scenarios based on what you already know or what you've done, but they can't predict black swan events or paradigm shifts. The other trap is assuming one digital twin fits all scenarios. In reality, you need different models for different questions, and that's often more complex than people expect.

So, what steps can you take to ensure that forecasting and decision making are as effective as they can be? First, you have to treat the digital twin as a decision support tool and not an autopilot.

Always pair them with human judgment and domain expertise. Second, invest in continuous validation: regularly test your twin's predictions against real outcomes and update your models accordingly. Third, be explicit about the boundaries of your model. What assumptions are baked in? What edge cases aren't covered?

And finally, use ensemble approaches. Don't rely on a single digital twin. Combine multiple models, include scenario planning for unprecedented events, and always maintain scepticism. The moment you stop questioning your model is when it becomes dangerous.
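
The continuous-validation step can be sketched very simply: compare the twin's predictions against observed outcomes over a rolling window, and flag the model as untrustworthy once its error drifts past a bound. The window size and threshold here are arbitrary assumptions for illustration.

```python
from collections import deque

class TwinValidator:
    """Continuously compare a digital twin's predictions against
    observed outcomes and flag drift. Illustrative sketch only."""

    def __init__(self, window=20, max_mean_error=5.0):
        self.errors = deque(maxlen=window)   # rolling window of errors
        self.max_mean_error = max_mean_error

    def record(self, predicted, observed):
        self.errors.append(abs(predicted - observed))

    def is_trustworthy(self):
        if not self.errors:
            return True  # no evidence either way yet
        return sum(self.errors) / len(self.errors) <= self.max_mean_error

validator = TwinValidator(window=5, max_mean_error=2.0)
for predicted, observed in [(10, 11), (12, 11), (9, 10)]:
    validator.record(predicted, observed)
# errors are 1, 1, 1 -> mean 1.0 <= 2.0, so the twin still passes
```

One bad prediction (say, predicting 10 when 20 was observed) would push the rolling mean error over the threshold and trip the flag, which is exactly the "question your model" posture described above.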

Back in the world of drug discovery, it's important to remember that we can't create complete virtual animals, but we can at least recreate parts of them.

PROFESSOR JULIE FREARSON

So, what's possible today is the idea that we physically have a lot of animal data that describes their behaviour, their body weight, the pathology of their organs, what normal looks like, right? We have that data. And so, there's really very little purpose to replicating that data over and over again. It's different to the idea of building an entirely accurate representation of the biology at a molecular, a tissue, and an organ level. And I think that's genuinely way off where we are today in terms of feasibility. When you think about what matters when it comes to understanding the safety of a potential therapeutic, you would essentially blow everyone's mind if you really tried to build a virtual human or a virtual animal based upon the availability of data today and our fundamental understanding of biology. So we are much more likely to go at it piecemeal, right?

So we're much more likely to... If we think about the safety assessment context, what are the toxicities that routinely cause problems when we put new therapeutics into humans? And by inference, I'm saying: what are the things that our current workflows with animals miss on a consistent basis, right?

And if you take it from that perspective, you think about things like liver injury, cardiac consequences, kidney injury. If you're talking about biologic therapeutics like antibodies, you're talking about the ability for the human to generate their own antibodies against the drug you've just put into them, right?

So there are some very classic problems in terms of toxicity you see in humans that are either not able to be defined in animals, or are missed by animals.

You're not hitting the whole human; you are breaking it up into bite-sized pieces and thinking, well, let me understand: 'Can I build a model that represents the liver? And can I therefore start to understand whether a molecule is going to cause me problems in humans?' And that's what I see happening.

DAVID ELLIMAN

I'm thinking about this from the regulator's point of view, you know, in terms of balancing the full knowledge of risk.

Because it's a growing and changing world, and you're increasing the amount that you're able to model about any subsystem or interconnection of systems, it's going to be a constantly moving target. So, I would imagine the regulators are going to face some tough challenges accepting AI-driven models as proof of safety.

PROFESSOR JULIE FREARSON

I think so, and I think one of the challenges we've got - even though the FDA's opened the door on this, and, by the way, the EMA have always been in favour of using NAMs, where scientifically appropriate...

DAVID ELLIMAN

In this context, NAM stands for New Approach Methodologies. This includes novel approaches like the ones we've been talking about: AI, machine learning, and digital twins.

PROFESSOR JULIE FREARSON

There are still some fundamental challenges in getting them incorporated into everybody's routine workflows, and then in the regulators being able to interpret the outcomes of those models with confidence.

So, I think a couple of things I should make clear: I personally think the use of these models is going to be much more focused on what we call decision making in drug discovery and in clinical development. And what that means is we use those models to help us decide what the best candidates are going to be at the end of the day.

And the thing I talked about there with liver: you might find liver injury in phase one or phase two of a human clinical trial with no particular indicator in the animal model. Now I think you've got an opportunity to be picking that up during the discovery phase, right?

I still think the regulators will be looking at data sets that still include in vivo experimentation even 10 years from now.

So today, for you to be able to get something through an IND submission, there is a suite of very prescribed in vivo experiments that need to be done, and they're prescribed by law.

There are examples even today where you can proceed through an IND submission with relatively little in vivo data.

And there are very specific use cases: use cases where you've got a life-threatening oncology situation, or where your target of interest is simply not expressed in preclinical models, or it's expressed in such a differential way from the human that it makes no scientific sense to use those preclinical models.

So that happens today, but it's the minority of cases.  

I see that transitioning to a future where regulators will become comfortable with hybrid data sets. So, you will still have in vivo data in there, and you will have AI-derived predictions in your data set.

DAVID ELLIMAN

What does the timeline look like for all this?

PROFESSOR JULIE FREARSON

The timing around that is open to question, and the reason it's open to question is because you have to change legislative guidelines. This is by law, right? So that's a long process. But to the point you were making about trusting AI, there's a lot of validation work that needs to be done to both prove out the in vitro systems and the AI systems.

And I think some of the challenges around AI lean into the potential for bias in the system: chemical bias, protein structure bias, or actually ethnic and geographic bias when you start talking about clinical population data. So, all of that bias is going to have to be addressed and controlled for.
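
One simple, automatable first-pass check for the population bias Julie mentions is to compare a model's error across cohorts. The cohort labels, data, and the "twice as bad" threshold below are all hypothetical, chosen just to make the idea concrete.

```python
def error_by_group(predictions, actuals, groups):
    """Mean absolute error per subgroup: a first-pass bias check."""
    totals, counts = {}, {}
    for pred, actual, group in zip(predictions, actuals, groups):
        totals[group] = totals.get(group, 0.0) + abs(pred - actual)
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

# Hypothetical predictions for two cohorts, A and B.
preds  = [1.0, 1.2, 0.9, 2.5, 2.8]
actual = [1.0, 1.0, 1.0, 1.0, 1.0]
cohort = ["A", "A", "A", "B", "B"]

errors = error_by_group(preds, actual, cohort)
# A model that is much worse for one cohort warrants investigation.
biased = max(errors.values()) > 2 * min(errors.values())
```

Real bias control goes much further (resampling, stratified validation, causal analysis), but even a per-cohort error table like this surfaces the most obvious problems early.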

And then I think the explainability of models is another challenge for everyone, not just the regulators.

DAVID ELLIMAN

So, why is explainability so important when using AI and digital twins? Explainability is critical for three reasons.

First, trust. If you can't explain why your AI made a decision, how can you trust it to make important choices?

Second, accountability. When things go wrong — and they will — you need to understand what happened to prevent it recurring.

And third, refinement. If you don't understand how your model works, you can't improve it systematically. In regulated industries or high stakes decisions, explainability isn't just a nice to have; it's mandatory. You simply cannot deploy black box systems where people's lives or major investments are at stake.

So, beyond improving the AI systems themselves, what steps can executives take to enhance the explainability of AI? Start by building explainability into your governance from day one. Create clear documentation standards that force teams to articulate how their models work and what their limitations are.

Establish review boards with diverse expertise: not just data science, but domain experts who can challenge the models. And crucially, create a culture where it's okay to say, 'I don't know what the model just did', rather than retrospectively justifying questionable decisions.

And of course, automating as much of this as you can, with testing and testing frameworks around it, is always going to help you.
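
As a sketch of what such automated explainability testing can look like, here is a bare-bones permutation importance check: shuffle one feature's values and measure how much the model's error grows. It is model-agnostic, so it works even on black-box models; the toy model and data here are hypothetical.

```python
import random

def permutation_importance(model, X, y, feature, metric, n_repeats=10):
    """Model-agnostic explainability check: how much does the error
    grow when one feature's values are shuffled across rows?"""
    baseline = metric(model(X), y)
    drops = []
    for _ in range(n_repeats):
        shuffled = [row[:] for row in X]          # copy each row
        col = [row[feature] for row in shuffled]
        random.shuffle(col)                       # break the feature/target link
        for row, value in zip(shuffled, col):
            row[feature] = value
        drops.append(metric(model(shuffled), y) - baseline)
    return sum(drops) / len(drops)

# Toy model that only ever uses feature 0 (hypothetical).
random.seed(1)
X = [[random.random(), random.random()] for _ in range(200)]
y = [3 * row[0] for row in X]
model = lambda rows: [3 * row[0] for row in rows]
mae = lambda preds, targets: sum(abs(p - t) for p, t in zip(preds, targets)) / len(targets)

importance_0 = permutation_importance(model, X, y, 0, mae)
importance_1 = permutation_importance(model, X, y, 1, mae)
# importance_0 is large; importance_1 is exactly zero, because the
# model never looks at feature 1.
```

Wired into a test suite, assertions on these importances catch the failure mode above: a model whose decisions can't be traced to the features it is supposed to be using.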

The bottom line is that right now in business and drug discovery, we still need a human in the loop.

PROFESSOR JULIE FREARSON

You know, the theme of this chat has been complexity, right? We're aiming to model incredibly complex systems. So that's going to require chains of models, but I think you're always going to need a human in the centre, to sort of quality control the outcomes and drive decisions.

I think if we dehumanised this process, we'd be going down a very dangerous track. My belief is that the human will always be in the middle, using chains of models, constantly quality controlling, and ultimately overriding the final decision.

Now, the good news is that the FDA have a path for in silico model evaluation and approval. There are computational models that have been approved by the FDA and other agencies. So, there is a sort of precedent here. I do believe, though, that the path to getting there is still being formed for drug discovery.

DAVID ELLIMAN

So, I was wondering, as you were speaking, whether there's a potential perception that, over time, the patchwork we talked about earlier, the application of different AI instances at different points in whatever process we're discussing, starts to become more effective.

And I wondered if there's a tension there. There's obviously the speed of technology that outpaces the speed of regulation... I mean, that's true of all industries; that's a challenge, particularly now with AI, that everybody faces. But is there an additional kind of ethical pressure on this industry specifically, because there is an expectation to reduce the amount of animal testing?

So that comes in from the point of view of an external expectation, and of course, it's not as easy as all that. Is the potential to apply technology going to outpace what the regulators allow?

PROFESSOR JULIE FREARSON

My view of this, though, is that that's okay, because you can derive a huge amount of benefit from that technological clock speed in the earlier parts of drug discovery, where the regulators are less engaged and don't really have a role to play in all that decision making...

We can go wild there, right? As long as we don’t go crazy. But I feel that there is a part of the paradigm here that can absorb all of that technological advancement, take advantage of it, and can dramatically change the quality of the candidates that you put into the regulatory phase.

So, to the extent that the regulators can't keep pace, maybe it's not as big a problem as you might expect. At the end of the day, I still think they're going to want to see a hybrid portfolio of data to prove to ourselves that this will be safe in humans, and safety in humans has to be the sort of lighthouse, paradigm-setting priority here.

Nobody wants us to lapse into a future where we get so overconfident with the systems that we're using to define whether or not a drug is going to be useful and safe, that we remove the safety net that we currently have in place. And I think I see it as a safety net.

And I go back to the idea that... I think the regulators will be open to new data types, but there will always be a theme of in vivo. It's the safety net that convinces us that when you put something in an animal, you understand where it goes, you understand where it accumulates, and you get the systemic implications of administering that drug.

DAVID ELLIMAN

So, Julie, given everything we've spoken about, do you think AI and digital twins will make animal testing in drug discovery obsolete?

PROFESSOR JULIE FREARSON

In my view, the answer is no. The primary reason is that when you're thinking about the challenges you have in developing new therapeutics, you have to make sure they're both effective at addressing the disease in question and, of paramount importance, that they're going to be safe.

I think it's very difficult to anticipate something as complex as that being addressed purely through computational modelling and prediction in the future.

Will the balance between in silico approaches and experimental approaches change? Absolutely. And I see that balance changing over the next decade.

Why is it going to take a long time and why will it not be a complete transformation to computation and predictive modelling? Because we simply don't have all of the technology we need today to be able to predict every element of the complex biology we're talking about.  

There's a sort of working theory that we know about 5% of human biology. So almost immediately you can see a challenge in our ability to simulate something we don't fully understand yet.  

DAVID ELLIMAN

So, will AI and digital twins make animal testing in drug discovery obsolete? Well, clearly, we've heard that the answer is no. And in the other conversations that we've had, we've noticed that digital twins and AI are finding their place in certain parts of the process, and those parts will extend and elaborate as time goes on.

And maybe some of those things will join together over time and parts of the process will be effectively virtualised, so we might be able to create richer digital twins, either of cell or organ interaction.

But the ecosystem or the body of an animal is so incredibly complex. We need systems that are way more capable than they are at the moment. So it's really valuable and it's viable work, but we've still got some way to go before we start eliminating animal testing.

Thanks for listening to Tech Tomorrow, brought to you by Zühlke. If you want to know more about what we do, you can find links to our website and more resources in this episode's show notes. Until next time.

© 2025 Zühlke Engineering AG