European Pharmaceutical Manufacturer spoke to Sunitha Venkat, VP of data services and insights at Conexus Solutions, about the future of AI in biotech.
Everyone talks about AI adoption in life sciences, but how ready do you think companies are?
Readiness varies widely across the industry. Some organisations have already built centralised data platforms, governance structures, and analytics teams, and they are beginning to see measurable value from AI.
Others are still dealing with fragmented systems, manual processes, and inconsistent data maturity. Although most leaders understand the potential of AI, readiness is not defined by tools alone. It requires aligned business priorities, trusted and well-governed data, clear ownership, and teams that know how to translate insights into compliant, real-world action.
Without that foundation, AI remains experimental rather than impactful.
You mentioned fragmented systems and siloed data. In your experience, what tends to be the biggest blocker: technology, process, culture, or something else?
The biggest blocker is lack of alignment. AI initiatives struggle when business, IT, data science, and vendors operate with different priorities. This pattern is consistent with industry findings that show organisations have difficulty scaling AI when workflows, ownership, and governance are fragmented.
An AI governance council is essential. A cross-functional governance structure creates shared accountability, prioritises high-value use cases, and establishes guardrails around data quality, compliance, and responsible AI. It also supports enterprise-wide AI education so employees understand what AI can and cannot do, how to interpret outputs, and how to use insights responsibly.
When alignment is in place, business teams define the decisions that matter, IT ensures the data and security foundation, data science builds the right models, vendors become true partners, and employees feel confident acting on AI-driven insights.
There is a lot of pressure to do something with AI. How do companies avoid rushing into tools?
The most successful organisations stay focused on outcomes rather than technology. They begin with leadership alignment on what AI enablement means for the business and which decisions they want to improve. This clarity prevents teams from chasing vendor solutions and keeps them focused on solving meaningful problems.
Once leadership alignment is established, organisations can assess whether their data, governance, and processes are ready to support AI. Research shows that companies that redesign workflows and establish clear controls achieve stronger results than those that simply deploy tools.
Phased pilots then allow teams to learn, validate value, and build confidence without disrupting operations. When business, IT, and data science follow a shared, leadership-backed roadmap, AI investments become intentional, scalable, and far less reactive.
What are the most common mistakes you see when life sciences companies try to layer AI on top of existing systems?
A major mistake is underestimating the condition of legacy data. Inconsistent definitions, missing documentation, and process gaps surface quickly and erode trust in AI outputs. This challenge is widely reported across the industry as organisations attempt to scale AI beyond pilots.
Another mistake is treating AI as a standalone capability rather than embedding it into existing workflows. AI only creates value when it is integrated into how decisions are made and how work gets done.
A third mistake is overlooking the human element. AI amplifies expertise; it does not replace it. Successful organisations design AI to complement existing roles, with transparent governance, feedback loops, and ongoing training so employees understand and trust AI-driven insights.
When business, IT, and data science align early, solutions become usable, explainable, and scalable. AI then becomes a true decision enhancement tool rather than a source of confusion.
How important are data governance and transparency before AI can deliver real value, especially in regulated environments like pharma and biotech?
They are absolutely essential. In pharma and biotech, AI outputs must be traceable, auditable, and explainable. Without standardised, high-quality data and strong governance, even technically sound models can produce insights that are operationally unusable or noncompliant. Regulatory-grade AI requires clear data lineage, documented decision logic, and robust guardrails.
Transparency builds trust with regulators and internal stakeholders. When teams understand how data is sourced and how insights are generated, AI shifts from a black-box experiment to a reliable decision support capability. Governance also reinforces training and responsible use, ensuring that employees know how to act on AI outputs appropriately.
Do you think some organisations underestimate how much operational change AI actually requires?
Yes, very often. Many companies treat AI as a technology upgrade when it is fundamentally an operational transformation. Industry research highlights that AI only creates value when workflows are redesigned, roles are reskilled, and decision-making processes evolve.
Organisations that fail to redesign workflows or align cross-functional teams often see adoption stall. In contrast, companies that plan for operational shifts through governance councils, leadership alignment, and continuous training achieve more sustainable results. This aligns with findings that only a minority of organisations have successfully scaled AI despite widespread experimentation.
What does an “AI-ready” life sciences organisation look like in practice?
An AI-ready organisation combines clean, integrated data with strong governance and close alignment across business, IT, and data science. Teams trust the data, understand the insights, and have transparent processes to act on them. Leadership encourages curiosity, experimentation, and responsible AI adoption while balancing speed with compliance.
In practice, AI-ready organisations run pilots strategically, learn continuously, and scale what works. AI becomes part of everyday decision making rather than a series of disconnected initiatives. Employee training ensures that insights are interpreted correctly and applied consistently, and governance ensures that everything remains auditable and compliant.
