Tech Tomorrow Podcast
Transcript: Will digital come before biology in biotech’s next leap?
DAVID ELLIMAN
Hello and welcome to Tech Tomorrow. I'm David Elliman, Chief of Software Engineering at Zühlke. Each episode we tackle a big question to help you make sense of the fast-changing world of emerging tech. Today I'm joined by Bibi Ephraim, head of Digital Sciences at Genentech, a biotech company widely regarded as one of the trailblazers in drug discovery and development.
Bibi Ephraim has been at Genentech for over eight years, but he began his career as a physical chemist. A chance encounter at a conference set him on a new path towards the technology that powers modern biotech.
Today, he's going to help me answer the question: ‘Will the next biotech breakthrough be digital before it's biological?’.
To set the scene for what we're going to discuss: when we talk about AI, data science, and digital governance in biotech, what does the landscape actually look like to you?
BIBI EPHRAIM
If you had asked me this question a few years back, I would have said it's very simple. Things were moving slowly; we were doing biomedical research, and that takes quite a lot of time.
But now things are moving extremely fast. The current landscape of AI, data science, and data governance in biotech is marked by rapid innovation, institutional transformation, and new ethical and regulatory challenges brought about by this new technology, as well as by current biomedical technologies.
AI has moved from being just a supplementary or assisting tool to a driving force in biotechnology. It's dramatically reshaping how new therapies are discovered, developed, and delivered. AI is now at the core of biotech: it's powering drug discovery, optimising clinical trials, and, in one of the most exciting areas for me personally, enabling personalised or precision medicine.
DAVID ELLIMAN
I always hear about biotech and pharmaceutical companies developing a lot of ideas that struggle to get out of the prototype phase. You've had a long background in transformation, as well as its application in this space. The AI boom has accelerated things on the one hand, but is there a lot of back-to-basics transformation that still needs to happen? I mean, my experience is from the finance sector, and there was a lot of data analytics work that sat on a bit of an island because the rest of the end-to-end experience had to catch up.
BIBI EPHRAIM
There's no question that AI is accelerating the landscape and the field. However, the point you're making is extremely fundamental. There's no getting away from laying out the foundational pieces that need to be in place to be able to use AI. In fact, I would even venture to say that AI is moving ahead, and the traditional way of working or doing research is following, or being dragged along.
So, it's extremely important to have your data foundation layer in place. Otherwise, you won't be able to leverage any of the technologies that are available.
In my experience, when I'm asked to consult in certain areas, the question I get is: can you use ML or AI to fix this? And then I look at the data they have, and either it's inadequate or there isn't really a whole lot of data. So even though the perception and understanding of AI's potential has increased, the level of preparation and readiness to use it still lags behind across the industry.
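To make that concrete, here's a minimal sketch of the kind of data-readiness audit this implies, in Python with pandas. The file name, row threshold, and checks are illustrative assumptions, not a prescribed pipeline.

```python
import pandas as pd

def readiness_report(df: pd.DataFrame, min_rows: int = 10_000) -> dict:
    """Summarise whether a dataset is plausibly ready for ML work."""
    return {
        "rows": len(df),
        "enough_rows": len(df) >= min_rows,            # is there enough data at all?
        "duplicate_rows": int(df.duplicated().sum()),  # silent duplication inflates metrics
        "missing_fraction": df.isna().mean().round(3).to_dict(),  # per-column missingness
        "constant_columns": [c for c in df.columns if df[c].nunique(dropna=True) <= 1],
    }

# Hypothetical usage: audit an assay-results file before any modelling starts.
df = pd.read_csv("assay_results.csv")  # illustrative file name
print(readiness_report(df))
```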
DAVID ELLIMAN
I think it might be interesting to dig into an example of how data-driven approaches and AI have already transformed something end-to-end, maybe in developing treatments.
BIBI EPHRAIM
Some examples that come to mind are in the area of accelerating drug discovery.
This is an area where a number of companies are coming out and leveraging it. For example, Pfizer integrated AI into their COVID-19 treatment development, accelerating Paxlovid's discovery and clinical evaluation, and that really helped them.
The other one that comes to mind is AstraZeneca's AI collaboration with a company called BenevolentAI, where they're developing treatments for chronic kidney disease and pulmonary fibrosis. These are some specific examples, but generally, if you look at it, the areas where things are really moving are, like I said, accelerating drug discovery and streamlining clinical trials.
Clinical trials are one area where it's going to be fundamentally useful. They're a major bottleneck in drug development: very expensive, time-consuming, and they often fail due to issues in patient recruitment, trial design, and so on.
Along the same lines, there's real-time monitoring in clinical trials, enabled by AI. It's not the old way of checking in once in a while, or periodically asking patients to come in and be checked. This is real-time trial monitoring being made possible.
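As a toy illustration of what real-time trial monitoring can mean in practice, the sketch below flags a reading that drifts far from a patient's recent baseline. The vital sign, window size, and threshold are all hypothetical choices, not a clinical algorithm.

```python
from collections import deque
import statistics

class VitalsMonitor:
    def __init__(self, window: int = 30, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline for one patient
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Return True if the new reading looks anomalous vs. the baseline."""
        alert = False
        if len(self.history) >= 5:  # need a few readings before judging
            mean = statistics.fmean(self.history)
            sd = statistics.pstdev(self.history)
            if sd > 0 and abs(value - mean) / sd > self.z_threshold:
                alert = True
        self.history.append(value)
        return alert

monitor = VitalsMonitor()
for reading in [72, 74, 71, 73, 72, 75, 70, 73, 110]:  # simulated heart rates
    if monitor.observe(reading):
        print(f"alert: reading {reading} deviates from recent baseline")
```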
One area that I hold near and dear to my heart is precision medicine, where AI and data science are becoming critical in developing personalised treatments tailored to a patient's specific genetic makeup and needs, as opposed to the generic development of a drug or treatment in the hope that it delivers value to all.
This holds quite a lot of potential, but it also comes with a significant amount of challenge.
It's not just a matter of developing the treatments or the drugs. It's also a matter of stitching together all the players that need to be in place. There are patients, there are their physicians, there are the payers or the insurance companies, and there are the regulatory agencies. How do you tie all these stakeholders together, passing on relevant information as and when it's needed, while also developing and delivering these precise medicines to particular people?
This is one area which I believe AI is going to significantly accelerate, making it a new paradigm going forward.
DAVID ELLIMAN
That's fascinating. I can see how it's becoming an optimizer for a number of existing parts of the process. Do you think complete parts of the process might ever be handed off? Like replacing animal or human drug trials, for example, purely with simulation. There's obviously a massive ethical question there, but do you think that's even feasible?
BIBI EPHRAIM
That's going to be extremely challenging. If it was another field, if it was not healthcare, I would say, of course it's possible. But we're talking about an industry that is rightly risk averse. There's pressure from regulators, and there's also, fundamentally, one of the challenges we have with AI generally as a technology, which is the question of explainability.
How do we get from point A, the problem, to point B, the solution? What are the steps that AI took? Do we understand that? We have to be able to explain those things. This is a fundamental challenge, but my sense is just like everything else that we have seen in history, iteratively, we're going to get there and at some point we're going to have a strong enough technological understanding and explainability to be able to leave some of these areas completely to our tools. We're not there yet.
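One common, if partial, technique behind that explainability question is feature attribution: measuring how much each input actually drives a model's predictions. Here's a minimal sketch using scikit-learn's permutation importance on synthetic data; the feature names are hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a clinical dataset; feature names are illustrative.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = ["age", "dose_mg", "biomarker_a", "biomarker_b", "bmi", "site"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time; the drop in score says how much the model
# relied on it -- a crude window into the "steps from A to B".
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:12s} importance={score:.3f}")
```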
DAVID ELLIMAN
In my experience, there are cultural and organisational shifts that are critical to sustaining iterative innovation over time, and the biggest shift is moving from project thinking to product thinking. Projects end, but products evolve forever. When teams own products long term, they naturally iterate because they live with the consequences of their decisions.
I've seen organisations completely restructure around this. Instead of project teams that disband after delivery, they have product teams that might exist for years. Amazon's two-pizza teams own services indefinitely: the team that builds it runs it, supports it, and evolves it based on customer needs.
And this creates a completely different mindset about quality and iteration. Another critical shift is breaking down the wall between business and tech. When engineers understand customer problems directly, not through three layers of translation, iteration becomes purposeful rather than random.
So, leaders need to create an environment that enables an iterative approach, and they need to be able to understand and implement the technology that facilitates that. And it starts with psychological safety. If people are afraid to fail, they won't iterate; they'll over-engineer everything trying to get it perfect first time.
Leaders need to celebrate learning from failures as much as they celebrate success. At Google, they have failure parties where teams present what they tried, what didn't work, and what they learned. Amazon's famous 'day one' mentality means treating every day like you're figuring things out, not like you have all the answers.
And practically speaking, that means setting up short feedback loops, weekly demos, not quarterly reviews.
Executives often think of data in terms of algorithms and numbers, but how can we shift their mindset to see data as a product in its own right?
BIBI EPHRAIM
One of the most common mistakes executives make is to delegate data initiatives solely to their informatics or IT departments.
Data needs to be treated as part and parcel of the business. It needs to be understood that this is something that creates value, and establishing single sources of truth and investing in data governance should be taken as priorities. There needs to be a data culture that flourishes within every enterprise.
So executives, more than at any other time in corporate history, need to set an example by not just acknowledging this, but also allocating the right resources and giving data the right priority.
DAVID ELLIMAN
I think one obvious problem with making data essential, an asset to everybody, is its quality. So many organisations still struggle with data quality; some call it technical debt, some call it a leadership blind spot. How do you think fragmented or poor-quality data slows down the path, maybe with an example from your world, something like molecular discovery or patient impact?
BIBI EPHRAIM
Data fragmentation is a huge, significant issue, not just in healthcare but across industries. I've seen a paper published by Gartner showing that, on average, organisations lose... I think the number is 12.9 million or thereabouts every year because of poor data quality. This is a significant amount, and I would argue it's a very conservative one, from what I have seen across the board.
If you have low-quality data... I'm aware of instances where companies had to pull back from releasing treatments or drugs because the quality of the data was poor. Not the quality of the research, but the quality of the data, so regulatory agencies had to push back. That caused a number of issues: not just the direct cost, but also the cost to your reputation.
The other thing is that the overall drug discovery timeline generally spans about 10 to 15 years or thereabouts.
Data quality issues can significantly extend this, which means they also impact your risk profile. The more time you take, the more potential you have to incur risk, in terms of the research that you do and the significant data issues that you might face.
If you had standards and processes in place from the get-go to take care of this, you could shave significant time, by some estimates one to three years. That's not insignificant, considering pharmaceutical drug development timelines.
DAVID ELLIMAN
I'm interested in extending the idea of data fragmentation to the human element of collaboration. We saw, for example, in the pandemic, unprecedented collaboration and, at the time, unprecedented speed of development.
And there are reasons why companies keep data fragmented. People hoard data with protectionist intent, in essence. But you've emphasised collaboration across teams and geographies in the past. So how does that kind of data sharing accelerate innovation in biotech?
BIBI EPHRAIM
Data sharing and collaboration, I would strongly argue, are not just nice-to-haves in biotech. They're becoming essential for accelerating innovation and overcoming the immense challenges of the drug discovery and development process.
There are a few elements that we have to take into consideration. One is changing this mindset. Scientists often are trained to be independent and work independently. And there is also this human aspect of competition that comes into play. But more than anything else, what we're learning in the technology world is collaboration is really the key.
Where AI becomes extremely powerful is when you have the pieces become a whole and you can use that data collectively. There are certain silos that need to be broken. A single company's proprietary data is extremely valuable, of course, for the company, but it represents only a small slice of the total biological and clinical landscape.
When companies, academic institutions, nonprofits, et cetera, share data, they create a much larger, more diverse data set. That has a number of benefits. One of the most obvious is increased statistical power.
Essentially, the more data you have, the more likely you can conduct meta-analyses and surface statistically significant findings that may not be apparent in individual small trials. This is especially critical for rare diseases, where no single organisation has enough patient data to power a meaningful study. There's also new target and biomarker discovery, where AI and data science models thrive on large, diverse data sets.
When you feed in this combined data from different sources, these models can identify subtle, unknown, hidden patterns and correlations between genes, proteins, and diseases. This can lead to the discovery of new drugs or drug targets. It can also help identify biomarkers that may have been missed in isolated data sets. And when a data set is shared, researchers can often replicate and validate the findings, strengthening the foundation on which they base their science and increasing the confidence of patients, investors, and others in the finding.
So, this is an extremely critical area for just driving new insights.
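That statistical-power point can be made concrete with a standard sample-size calculation: the smaller the effect you want to detect, the more patients you need per arm, which is exactly what pooling data across organisations provides. A quick sketch using statsmodels:

```python
from statsmodels.stats.power import TTestIndPower

# How many patients per arm does a two-sample t-test need for 80% power
# at alpha = 0.05, as the effect shrinks? (Cohen's d: large, medium, small.)
solver = TTestIndPower()
for effect_size in (0.5, 0.3, 0.1):
    n = solver.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
    print(f"d={effect_size}: ~{n:.0f} patients per arm")

# Small effects (d=0.1) need roughly 1,600 patients per arm -- out of reach
# for a single rare-disease registry, feasible when several organisations pool data.
```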
DAVID ELLIMAN
So, we need to consider pre-competitive collaboration. What does it mean and what does it look like in the world of software engineering? Pre-competitive collaboration is essentially competitors agreeing that certain foundational problems are everybody's problem.
I think there are some key benefits to pre-competitive collaboration. The biggest is acceleration through diversity of thought. When you get engineers from different companies working on the same problem, they bring different perspectives, different edge cases, and different approaches. The solution ends up being far more robust than anything one company could come up with alone.
So how can leaders ensure this type of pre-competitive collaboration actually happens? First, leaders need to clearly define the boundaries.
What's pre-competitive and what's your secret sauce? Be explicit about it. Create a clear intellectual property framework that defines what can be shared and what can't.
You need to invest real resources, not just good intentions. Assign your best people to these initiatives, not whoever's available.
Create a budget line for pre-competitive work, and make it someone's actual job, not a side project.
One area where we've seen real progress is the mass adoption of standardised frameworks in data and AI. Could biotech be reaching the same sort of tipping point?
BIBI EPHRAIM
This is one of the most exciting and hotly debated questions in modern science. And the answer is: absolutely. I believe this ties into the theme that you set for the podcast, which is, fundamentally: is the next biotech breakthrough going to be digital first?
And what we're seeing right now in many areas is: yes, it is. At the core of this shift is the recognition that biology is now essentially a form of information technology. DNA is code. Proteins are complex machines built from that code; disease, essentially, can be treated as a software bug in a biological system.
I think the bottleneck in traditional biotech is the sheer time and cost of the physical experimentation that needs to happen, the wet lab that we always talk about. And AI and digital tools are attacking this problem directly.
There's, right now, the concept of digital twins, where virtual experiments can be done. We're moving past traditional modelling and using these digital twins to virtually represent patients, organs, or complex biological pathways.
Instead of running a real clinical trial with thousands of patients, you can first do this kind of modelling and get insight from it. There's also generative AI for molecular design, which is starting to be used. So AI isn't just screening existing molecules; it's being used to design entirely new functional molecules from scratch.
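As a toy sketch of that digital-first loop, consider training a surrogate model on past lab measurements, scoring a large virtual library in silico, and sending only the top candidates to the wet lab. Everything below is synthetic stand-in data, not a real molecular pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend these are descriptor vectors for molecules already measured in the lab,
# with a noisy activity signal driven by a few of the descriptors.
X_measured = rng.normal(size=(400, 32))
y_measured = X_measured[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=400)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_measured, y_measured)

# Score a much larger virtual library in silico...
X_virtual = rng.normal(size=(100_000, 32))
predicted_activity = surrogate.predict(X_virtual)

# ...and hand only the most promising handful to physical experiments.
top_k = np.argsort(predicted_activity)[-20:][::-1]
print("candidates for wet-lab validation:", top_k[:5], "...")
```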
DAVID ELLIMAN
I once heard somebody say that the previous method for the early parts of drug discovery was a bit like looking for a needle in a haystack, and using AI is a bit like having a metal detector to find that needle.
It gives you the ability to accelerate. I'm fascinated by the conversation about digital twins. I've heard the dry lab term used, and the combination of wet and dry, and how that end-to-end process is going to be enhanced in some parts, with AI potentially given more responsibility in others as the years go on.
And there's no doubt that we're going to need some sort of responsible approach to collaboration in biotech. It's going to include competitors, regulators, and even hospitals, all having to work together. Do you see it scaling effectively?
BIBI EPHRAIM
Right now it's scaling. Obviously, some institutions, companies are far ahead. Others are lagging behind. But I have no doubt that this is the way to go and it will scale. I mean, let's look at it. The breakthrough that we're seeing, the digital breakthrough, it's not going to eliminate the biological component.
It's going to transform it into a role where high-value validation and data generation are the norm. And there's going to be a very good feedback loop that AI can enable. So, right now we're starting to see a number of companies realising that unless they use digital tools, unless they use AI, unless they employ these new capabilities, they're going to fall behind.
And they're accelerating their recruitment and training of people who can effectively help in this process. Whether or not it's going to scale is not the question. The question is: how long is it going to take for the majority of this industry to apply it and truly use it in a way that's effective and efficient?
I think that's the question. And those companies that move first, that are moving ahead, are going to come out winners. Look at it this way: a few years ago, when you thought of technology companies like Google, Apple, et cetera, they were not really players in the healthcare field. Today, they are.
In some areas the competitive landscape has truly changed as a result of technology companies entering the field. Look at what Google is doing with wearables: collecting a vast amount of data that's probably going to be crucial for understanding individuals' health status, and probably for driving precision medicine forward.
So, scaling is not a question; it's just timing. But it's crucial that companies realise they have to do it fast. They have to get their data foundation in place fast to be able to participate in this new arena and achieve their business objectives.
DAVID ELLIMAN
So, Bibi Ephraim, considering everything that we've spoken about, will the next biotech breakthrough be digital before it's biological?
BIBI EPHRAIM
I really believe it's going to be digital. We're seeing it already: the digital side is predicting new directions, and the biological side is contributing by running the experiments and validating.
So, what I'm seeing is a very synergistic approach with things like digital twins and predictive models... You know, after the prediction, you have to go out and prove it in a wet lab. So where I see it is the digital leading, pointing to a path, and the biological side, the lab side, going in and doing the experiment to validate.
Who knows, after a few years we may rely less and less on the biological side. Already, I can tell you from personal experience that one of the objectives of an AI application I've been participating in was to minimise the number of physical experiments: to predict the behaviour of molecules and drugs so that we don't have to go and do the same thing in the lab.
And we've been very successful across the board, so I've been fortunate enough to see this and participate.
DAVID ELLIMAN
And before I let you go: I'm a big fan of modelling, digital twins, simulation and so forth, in the right place, in fields different from this one. But one thing that's occurred to me is that sometimes there are happy accidents. The discovery of penicillin was a happy accident; the microwave oven was a happy accident. Do you think that if we digitalise more and more end to end, we might not be in a position to have those happy accidents?
BIBI EPHRAIM
Talking purely from a statistical perspective, just imagine how many models a machine can run in a minute, an hour, a day. It's going to be a huge amount, more than we can do. So it's more than likely that the machine is going to discover these accidents before us, purely on a statistical basis.
One thing you can argue is: is this going to take the human out of the equation? To a certain degree, yes. But still, in medicine or in the health area, the question of explainability is huge.
So, we still need to understand what the machine is doing. But I don't believe accidental discovery is something we're going to miss, because we'll actually have a systematic way of exploring multiple variations and iterations in a relatively short period of time using these computers and machines, far beyond what we can do as humans.
DAVID ELLIMAN
Thank you very much.
BIBI EPHRAIM
Pleasure.
DAVID ELLIMAN
So, will the next biotech breakthrough be digital before it's biological? Digital certainly moves very fast, and we're certainly seeing digital enhancing the biological process.
We talk about the presence of dry labs, or about improving outcomes at certain points in the process, and I think that's true of the way generative AI is affecting a lot of workflows. We're seeing existing, well-defined processes being accelerated at certain points.
So somebody's job gets more efficient, for example. And therefore I think we'll probably see the enhanced biological process being the winner overall.
Thanks for listening to Tech Tomorrow, brought to you by Zühlke. If you want to know more about what we do, you can find links to our website and more resources in this episode's show notes. Until next time.