Breaking the trust barrier - management summary
It started with a graphic I asked an AI tool to generate. The request was simple enough: compare company rankings from two different years and make a chart. Unfortunately, I didn’t take the time to review the output before sharing it, and the AI had taken some liberties. Rankings were imagined or didn’t quite add up, and companies appeared out of thin air! The graph seemed to be plotting data from a parallel universe.
It wasn’t catastrophic, but the experience got me thinking: how much trust in technology does an environment like pharmaceutical manufacturing demand? In this sector, where precision and consistency are paramount, trust in the infrastructure, tools, and processes making today’s pharmaceuticals isn’t just a nice-to-have. It’s the foundation of the entire business. So, as pressure mounts to adopt digital and AI solutions on production floors, the decision to deploy depends on more than technical capability. It hinges on confidence.
Why trust is the gatekeeper
Pharmaceutical companies are no strangers to innovation. But they operate in an industry where risk tolerance is low, and the stakes are high. While AI and digital applications promise to replace fatigued and overly burdened legacy systems with efficiency, insight, agility, and responsiveness, they also introduce extensive complexity.
Several factors influence trust in digital and AI solutions:
- Regulatory clarity: Is there a framework for validation?
- Operational reliability: Can the system perform consistently?
- Transparency: Can stakeholders understand and explain the model’s decisions?
- Security and privacy: Is data protected across the lifecycle?
- Expectations and readiness: Does the proper context for deployment and use exist?
- Mindset and habits: Does deployment require a disruptive break in how people work?
These questions often delay adoption. The technology is promising, but trust hasn’t caught up.
So then, where is the trust?
In this article, we qualitatively assess which applications are advanced enough for easy deployment and have gained the trust of industry players, and which are still earning their place.
The result is a landscape view of solutions that are gaining traction in pharmaceutical manufacturing, with peaks of technological maturity and valleys of scepticism, but more importantly, with landmarks to guide large-scale digital and AI transformation.
An AI and digital trust scorecard
We’ve selected a list of applications and development pursuits to streamline and fortify pharmaceutical manufacturing through AI, digital systems, and automation. The list is not exhaustive. Yet, we’ve included major cross-industry trends in future-proofing manufacturing, including solutions for transferring technology to production lines, building smarter processes, and aligning data governance and digital infrastructure. Our scorecard ranks these applications along two dimensions:
- Development state: How mature are the underlying technologies?
- Level of trust: How much confidence has the application garnered within the industry?
To each application, we assigned a score between 0 and 5 along each of these dimensions, with development state ranging from early development (0) to already in use (5), and level of trust ranging from low (0) through moderate (3) to high (5). Our scores are high-level and qualitative, based on research of industry literature and conversations with topic and industry experts.
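To make the two dimensions concrete, the scorecard logic can be sketched in a few lines of Python. The applications and scores below are purely illustrative placeholders, not our actual ratings:

```python
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    development: float  # 0 = early development, 5 = already in use
    trust: float        # 0 = low, 3 = moderate, 5 = high

def quadrant(app: Application, threshold: float = 2.5) -> str:
    """Place an application in one of four scorecard quadrants."""
    dev = "mature" if app.development >= threshold else "emerging"
    tr = "trusted" if app.trust >= threshold else "sceptical"
    return f"{dev}/{tr}"

# Illustrative scores only -- not the ratings discussed in this article.
apps = [
    Application("Process Analytical Technology", 5, 5),
    Application("Enterprise-wide digital twins", 2, 2),
]
for app in apps:
    print(app.name, "->", quadrant(app))
```

The quadrant view is what turns two raw scores into the "peaks and valleys" landscape described above.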
From the scorecard, four patterns emerge:
Manufacturing applications top the lineup
Applications that align with the structured and standardised environment of manufacturing are the furthest along, both in terms of development and trust. Adoption in pharmaceutical manufacturing is making progress, though other industries have more fully embraced the mindset, tools, and operations of AI and digital solutions.
Governance and infrastructure are priorities
Tools and functions in this area are foundational. Automation, AI, and digitalisation are meaningful only on a foundation of orchestrated, scalable, and secure data flow. Urgency has propelled ideas and practices in this area to the forefront, and trust in them is high as the ensuing frameworks mature.
Tools for technology transfer show promise but need validation
Applications to help move new products from labs to commercial production span the full breadth of development stages. Even uses with tested improvements from AI-driven analytics continue to be human-monitored. However, it is precisely these tangible benefits that drive the necessary fine-tuning for wider adoption.
Complexity and divergence hinder trust in cross-functional solutions
The applications in this category aim to solve far-reaching challenges, so it’s impossible to pin down a single reason for their place in our lineup. A paucity of guidance for their continued development, the absence of fitting data architectures, and unclear lines of accountability are all hurdles to overcome.
Manufacturing: Where AI is already delivering
Manufacturing is the domain where digital and AI applications have found the most solid footing. Many tools are not only technically mature but also widely trusted, thanks to years of operational use and clear regulatory support.
Process Analytical Technology (PAT) and Continuous Process Verification (CPV) are prime examples. PAT, with its use of inline spectroscopy and chemometrics, has been accepted for nearly two decades (FDA, 2004). CPV, endorsed under ICH Q10 and Q11 (EMA, 2012), is routinely implemented to monitor process consistency. These applications benefit from well-defined validation pathways and are often augmented by AI to enhance trend analysis.
Real-time release testing (RTRT) builds on parametric release, PAT, and predictive modelling to enable faster product release decisions. Regulatory bodies are open to this approach. Initially applied to sterility testing, RTRT has since been extended by guidelines to chemical and biological products, supported by robust validation packages that reinforce trust (EMA, 2012). It’s important to note that RTRT can be combined with conventional end-product testing in a hybrid approach (Lundsberg-Nielsen, 2021).
AI-driven quality control and anomaly detection systems are also gaining ground, particularly in fill-finish operations. Vision systems used in these contexts are GMP-validated, and the availability of well-defined false positive metrics contributes to their credibility (Vuolo, 2023).
Other applications, such as predictive maintenance, manufacturing automation, and integrated batch and paperless production, are considered mainstream. Electronic batch record systems (eBRS) are trusted tools, and AI can be deployed to assist in deviation triage. Predictive maintenance is an active area of development and has demonstrated clear ROI and downtime reduction through sensor analytics (Kavasidis, 2023). Finally, manufacturing automation supported by mature Programmable Logic Controller (PLC) and Supervisory Control and Data Acquisition (SCADA) layers often uses AI in advisory roles (Folgado, 2024), thereby minimising perceived risk.
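As an illustration of the sensor-analytics pattern behind many predictive maintenance tools, the sketch below flags readings that deviate sharply from recent behaviour. The data and thresholds are hypothetical; production systems add model management, validation, and alarm handling on top of this basic idea:

```python
import numpy as np

def rolling_zscores(signal: np.ndarray, window: int) -> np.ndarray:
    """Z-score of each point against the mean/std of the preceding window."""
    scores = np.zeros_like(signal, dtype=float)
    for i in range(window, len(signal)):
        past = signal[i - window:i]
        std = past.std()
        scores[i] = (signal[i] - past.mean()) / std if std > 0 else 0.0
    return scores

def flag_anomalies(signal, window=20, threshold=3.0):
    """Indices where a reading deviates sharply from recent behaviour."""
    z = rolling_zscores(np.asarray(signal, dtype=float), window)
    return np.where(np.abs(z) > threshold)[0]

# Hypothetical vibration readings: stable operation, then a sudden spike.
rng = np.random.default_rng(0)
readings = rng.normal(1.0, 0.05, 200)
readings[150] = 2.0  # injected fault signature
print(flag_anomalies(readings))  # the injected spike at index 150 should be flagged
```

Real deployments typically replace the rolling z-score with multivariate or learned models, but the advisory pattern is the same: the system flags, a person decides.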
Even more complex tools like scheduling optimisation are being deployed, though trust varies depending on how transparently business rules are encoded. When AI components are added without clear logic, confidence can waver.
Overall, manufacturing applications are the most advanced and trusted across our scorecard. Their success is rooted in operational maturity, regulatory clarity, and demonstrable impact.
Governance and infrastructure: Foundations of trust
Governance and infrastructure applications form the backbone of digital transformation in pharmaceutical manufacturing. They play a foundational role in ensuring compliance, security, and operational integrity. Trust is essential, but implementation falters due to the complexity of multifaceted problems and a lack of harmonisation.
Cybersecurity and data privacy controls are well-established, supported by standards like ISO 27001 and 21 CFR Part 11. Regular audits and mature implementation practices make these systems highly trusted across the industry. As AI tools become more widespread, cybersecurity must evolve to address new risks - including unauthorised tool use.
AI governance and risk management is also maturing. Frameworks such as the AI Risk Management Framework from the National Institute of Standards and Technology (NIST, 2022) and the GAMP 5 (Good Automated Manufacturing Practice) AI guidelines (ISPE, 2025) are being integrated into quality management systems. While adoption varies, the underlying concepts are well understood and increasingly accepted.
AI ethics and regulatory alignment is a newer area, but momentum is building. Statements from regulatory bodies, like FDA discussion papers (FDA, 2025), are helping shape expectations, and emerging standards like ISO 42001 are beginning to provide structure. Trust is growing as these frameworks become more concrete.
Cloud-based manufacturing systems are technically advanced, with validated Software-as-a-Service (SaaS) Manufacturing Execution Systems (MES) available. However, concerns around data sovereignty and latency continue to limit full confidence, especially in highly regulated environments.
Taken together, governance and infrastructure applications are crucial enablers of digital transformation. Their trust levels indicate their technical maturity and compliance with regulatory and operational standards. However, their deployment demands exceptional foresight and alignment.
Tech transfer: Innovation meets caution
AI and digital tools are beginning to make their mark where pharmaceutical products transition from lab-scale to full-scale manufacturing. However, their adoption is tempered by the complexity of scale-up and the rigorous demands of validation.
Digital twins for scale-up, for instance, have shown strong promise in simulating process behaviour and predicting outcomes in silico. Yet, their commercial deployment remains limited (Schmidt, 2025). The burden of maintaining accurate models across diverse process conditions, coupled with the lack of a robust track record, keeps trust levels moderate.
Similarly, AI-guided process parameter optimisation is emerging, particularly in hybrid setups that combine Process Analytical Technology (PAT) with machine learning. While these systems offer potential for refining processes, they face scepticism due to limited full-scale validation and challenges in model explainability—an essential requirement in regulated environments.
Automated data handoff and lineage tools are further along. These systems aim to ensure seamless data integrity between platforms like Electronic Lab Notebooks (ELNs) and Manufacturing Execution Systems (MES). Their trustworthiness hinges on the robustness of audit trails and the ability to maintain traceability across functions.
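One common way to make an audit trail tamper-evident, and hence support the traceability these handoff tools depend on, is hash chaining, where each entry commits to the one before it. A minimal sketch with hypothetical ELN/MES events:

```python
import hashlib
import json

class AuditTrail:
    """Append-only log where each entry hashes its predecessor,
    so any retroactive edit breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": digest})

    def verify(self) -> bool:
        """Recompute the chain; any mismatch reveals tampering."""
        prev = "genesis"
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.append({"system": "ELN", "event": "batch record exported"})
trail.append({"system": "MES", "event": "batch record imported"})
print(trail.verify())  # chain is intact
trail.entries[0]["record"]["event"] = "tampered"
print(trail.verify())  # retroactive edit detected
```

Production-grade lineage tools layer signatures, timestamps, and access controls on top, but the tamper-evidence principle is the same.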
More mature applications such as risk-based process verification and real-time multivariate control have gained traction, supported by regulatory frameworks like ICH Q12 (EMA, 2020) and FDA guidance on Continued Process Verification (CPV; FDA, 2011). These tools enhance statistical monitoring with AI-driven insights, though they are still typically overseen by subject matter experts to ensure reliability.
Technology transfer applications are clearly advancing, but competing priorities and dependencies have extended their development times. Their trust levels reflect both technical promise and the cautious pace of validation in pharmaceutical environments.
Cross-functional tools: Promise meets complexity
AI and digital applications that span departments or support enterprise-wide functions are technically ambitious, so they face long development ramp-ups. Earning greater trust from the industry will depend on closing performance gaps that become starkly visible when applications attempt to capture the exploding degrees of freedom of operating across industry functions.
Enterprise-wide digital twins, for example, have proven effective in areas like utility and energy management. However, scaling these models across diverse manufacturing and business functions requires extensive data harmonisation and understanding. Without consistent data standards and connectivity, trust remains moderate. Similar to AI applications for resource optimisation and sustainability, a push in development and trust will grow as financial and environmental impacts become measurable.
Generative AI tools for reporting and knowledge management are progressing rapidly, but concerns around hallucinations and potential leakage of personally identifiable information (PII) limit their deployment in GMP environments. Few implementations have achieved full validation, keeping trust levels low to moderate.
Explainable AI (XAI) is a goal shared across industries. Computational methods like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) have gained attention as catalysts for XAI by providing human-understandable explanations for decisions generated by AI models (Salih, 2024). Regulators emphasise transparency, and pharmaceutical companies are beginning to explore these tools. However, the lack of industry-specific guidance and standardisation means that trust is still in development.
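SHAP and LIME require dedicated libraries, but the core model-agnostic idea behind them, perturbing inputs and observing the effect on predictions, can be illustrated with simple permutation importance. The process model below is a hypothetical stand-in, not a real pharmaceutical model:

```python
import random

def predict_yield(temperature, ph, stir_rate):
    """Hypothetical stand-in for an opaque process model."""
    return 0.6 * temperature + 0.3 * ph + 0.1 * stir_rate

def permutation_importance(model, rows, n_repeats=50, seed=0):
    """Score each input by how much shuffling it changes predictions.

    Inputs the model relies on heavily produce large average changes;
    irrelevant inputs produce changes near zero.
    """
    rng = random.Random(seed)
    names = list(rows[0].keys())
    baseline = [model(**row) for row in rows]
    importance = {}
    for name in names:
        total = 0.0
        for _ in range(n_repeats):
            values = [row[name] for row in rows]
            rng.shuffle(values)
            shuffled = [dict(row, **{name: v}) for row, v in zip(rows, values)]
            total += sum(abs(b - model(**r)) for b, r in zip(baseline, shuffled))
        importance[name] = total / (n_repeats * len(rows))
    return importance

# Hypothetical historical batches: stir_rate is held constant,
# so the model's output cannot depend on it here.
rows = [{"temperature": t, "ph": 7 + 0.1 * t, "stir_rate": 5} for t in range(10)]
scores = permutation_importance(predict_yield, rows)
# temperature should dominate; stir_rate's importance should be ~0.
```

This is far simpler than SHAP's game-theoretic attribution, but it conveys the explanation regulators are asking for: which inputs a model's decisions actually rest on.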
Pilots of augmented intelligence, models of human-AI collaboration, show promising results. However, these systems rely on clear accountability frameworks to ensure that human oversight remains central, so their success depends on how well roles and responsibilities are defined.
In essence, cross-functional tools are advancing in capability, but their trust levels reflect the complexity of integration, the need for harmonised data, and the evolving nature of governance models.
How to start: Building trust while deploying intelligently
Examining these patterns, the landscape that emerges from our scorecard mirrors the inherently cautious nature of the pharmaceutical industry. It also reveals a forward-moving momentum and landmarks to guide an implementation runway:
- Start where trust is already earned. Focus on applications with proven GMP validation and regulatory support to address specific and well-characterised production bottlenecks. That doesn’t mean thinking small; it means using the privilege of trust to explore, test, and learn at scale.
- Adopt incrementally but set your sights on transformation. Pilot, validate, and scale—don’t rush deployment, but aim for impact. Emphasise tractable improvements and use what you learn to target increasingly larger operational units within your organisation.
- Build internal champions. Engage subject matter experts early to validate models and interpret results. Ensure that they have a hand in evolving the models they oversee, as well as reporting on lessons learned and best practices that can inform subsequent deployments.
- Align with regulators. Use existing frameworks (e.g., ICH, GAMP, ISO) to guide implementation. Better yet, engage in shaping guidelines and guardrails for applications where trust is still low.
- Measure impact. Track ROI, downtime reduction, and quality improvements to build confidence. Use that knowledge to inform the design of solutions that can reliably take over tasks that tie down human creativity or exceed human capabilities.
Trust may be a gatekeeper. But it’s not a barrier.
Digital and AI solutions for pharmaceutical manufacturing are not “a possibility”. Our scorecard shows that many applications are ready, and others are close behind.
By strategically adopting applications that have already earned trust and using established frameworks, pharmaceutical enterprises can pave the way for a more efficient, transparent, and resilient manufacturing environment. I see a future human-in-the-loop ecosystem where AI and digital tools augment human manufacturing expertise. That human-machine alliance will unchain the industry’s knowledge and imagination to pursue truly original ideas in an agile and responsive context.
Ladies and gentlemen, start your engines.




