While generative AI has already captured the imagination as a means to transform pharma R&D processes, agentic AI brings new reasoning and autonomy to the table. ArisGlobal’s Jason Bryant unpacks the potential.
Agentic AI is now firmly established as the next big thing in AI, building on the success and rapid advancement of generative AI. It sees the coordination of specialist AI tools or "agents" to fulfil assigned goals – not by following prescribed rules, but by reasoning about the best way to achieve that objective. It is a significant step forward, given its scope to redefine the way organisations operate – and the value they deliver.
With agentic AI, there is much greater autonomy in what AI does and how. Given a desired outcome, individual agents each harness their own intelligence, experience and reasoning to deliver their part in the most effective way possible – deciding what will be required, where to find it, and so on. All of this is coordinated by an orchestrator agent. As well as optimising delivery of the end goal, the orchestrator uses the agents' collective insights to propose new ways to add value. Put another way, this next phase of AI is about much more than optimised processes or enhanced productivity.
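The orchestrator/agent division of labour described above can be sketched in a few lines. This is an illustrative sketch only: the `Agent` and `Orchestrator` classes, the agent names and the hard-coded plan are assumptions made for the example, whereas a real agentic system would derive its plan by reasoning over the goal.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]  # takes a sub-goal, returns a result

@dataclass
class Orchestrator:
    agents: dict = field(default_factory=dict)

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def run(self, goal: str, plan: list) -> list:
        # The plan maps each sub-goal to the agent best placed to deliver it.
        # Here it is supplied by hand; a real orchestrator would reason it
        # out from the goal and the agents' declared capabilities.
        return [self.agents[name].handle(sub_goal) for name, sub_goal in plan]

orch = Orchestrator()
orch.register(Agent("coding", lambda g: f"coding agent handled: {g}"))
orch.register(Agent("triage", lambda g: f"triage agent handled: {g}"))
results = orch.run(
    goal="assess new adverse event case",
    plan=[("coding", "code reported terms"), ("triage", "prioritise the case")],
)
```

The design point is the separation of concerns: each agent owns one capability, and only the orchestrator knows how the pieces combine.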
What new opportunities does this create?
The ability to reason, anticipate, generate insight and knowledge, and make better decisions is well matched to late-stage pharma R&D activities, which are data-intensive, process-heavy and outcome-critical.
Generative AI is already proving indispensable in functions such as regulatory affairs and drug safety/pharmacovigilance. Use cases to date include marketing authorisation application preparation, product change control/regulatory impact assessment management, adverse event case processing, and safety reporting.
Agentic AI’s ambitions are greater still, offering to transform not only the output but also the value and purpose of Safety, Regulatory and adjacent teams.
Seeing ahead in real-world drug safety
In post-market drug safety, the use of AI to streamline Medical Dictionary for Regulatory Activities (MedDRA) coding of adverse events offers considerable potential to transform the value of pharmacovigilance.
AI technology is already helping to transform efficiency and accuracy around the classification of adverse event data, with the potential to invoke additional reference cross-checks, or expedite next actions. Combining autonomous MedDRA coding with proactive signal triage could help to eliminate manual bottlenecks. If designated agents detect an unusual combination of coded terms, for example, they could raise an automated “probable signal” alert; pre-populate a signal report draft (including proposed case lists, timeline and supporting evidence snippets); and recommend a triage priority for human safety reviewers.
The time to first credible signal would be shortened, and experts freed to focus on ambiguous/novel cases and investigation design. Meanwhile the system could route high-risk clusters to epidemiology/medical affairs automatically and suggest immediate risk-mitigation actions (e.g., targeted communications, batch holds, enhanced monitoring), enhancing human decision-making.
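The detection-and-triage flow described above might look, in much simplified form, like the following sketch. The watched term combination, case-count threshold and priority rules are invented for illustration; real pharmacovigilance signal detection relies on statistical disproportionality methods over far richer case data.

```python
from collections import Counter

# Hypothetical MedDRA preferred-term combination of interest (illustrative only)
WATCHED_COMBINATIONS = {
    frozenset({"Hepatic failure", "Rash"}),
}

def triage_cases(cases: list) -> list:
    """Flag clusters of watched term combinations as 'probable signal' alerts."""
    combo_counts = Counter()
    for case in cases:
        terms = frozenset(case["coded_terms"])
        for combo in WATCHED_COMBINATIONS:
            if combo <= terms:  # all watched terms present in this case
                combo_counts[combo] += 1

    alerts = []
    for combo, n in combo_counts.items():
        if n >= 3:  # illustrative threshold for calling a cluster
            alerts.append({
                "alert": "probable signal",
                "terms": sorted(combo),
                "case_count": n,
                "triage_priority": "high" if n >= 5 else "medium",
            })
    return alerts
```

A human safety reviewer would still own the decision; the agent's job is to surface the cluster, the supporting case list and a suggested priority.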
Removing pain and delay from regulatory rigour
Regulatory opportunities for agentic AI include reinventing the global management of product regulatory compliance. Autonomous, "regulation-aware" dossier assembly and submission orchestration is within reach now. It is possible for orchestrated AI agents to continuously ingest clinical data packages, study reports, CMC documents, eTMF pointers and legacy submission artefacts. Agentic systems can also perform automated regulatory gap-analysis versus target-region requirements, draft region-specific CTD/eCTD modules (with citations and traceability to source documents), and orchestrate the technical packaging (file naming, folder structure, etc.).
Where there is any ambiguity, the agentic system could generate a short “decision rationale” and a list of recommended human checks, and run a rules/validation pass (file integrity, cross-reference checks, local appendices). This could inform autonomous routing of items to subject experts (e.g., CMC, clinical, labelling) with suggested edits and severity scores – providing human reviewers with a near submission-ready dossier.
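A rules/validation pass with severity scoring and expert routing could be sketched along these lines. The item fields, rule names, severity values and routing targets are assumptions made for the example, not any authority's actual eCTD validation criteria.

```python
def validate_dossier(items: list) -> list:
    """Run simple integrity checks over dossier items and route findings.

    Each item is a dict with an 'id', a 'file_present' flag and a list of
    'cross_refs' to other item ids. Findings carry an illustrative severity
    and a suggested expert team to route them to.
    """
    known_ids = {item["id"] for item in items}
    findings = []
    for item in items:
        if not item.get("file_present", True):
            findings.append({"item": item["id"], "issue": "missing file",
                             "severity": 3, "route_to": "regulatory operations"})
        for ref in item.get("cross_refs", []):
            if ref not in known_ids:
                findings.append({"item": item["id"],
                                 "issue": f"broken cross-reference to {ref}",
                                 "severity": 2, "route_to": "labelling"})
    # Highest-severity findings surface first for human reviewers.
    return sorted(findings, key=lambda f: -f["severity"])
```

The output is a prioritised worklist rather than a verdict: human reviewers receive a near submission-ready dossier plus a ranked set of checks to clear.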
Strategically, shorter regulatory cycle times promise to accelerate go/no-go decisions and speed up patient access, while sponsors would be in a position to iterate protocols more swiftly. Meanwhile agents’ gap-analysis outputs could be fed upstream to clinical operations and protocol teams, enabling trials to be designed that need fewer regulatory clarifications further down the line.
Keeping control
When AI systems are given new autonomy across extended workflows, the potential risks go beyond incorrect outputs, to include potential for unintended data movement, loss of operational control, misaligned decision-making and blurred lines of accountability.
Having guardrails – a way of applying "bounded autonomy" – is essential to mitigate unintended behaviour, but being too prescriptive and fixed about controls could hamper future potential. While multi-agent frameworks are emerging, these do not inherently provide for trust, context-sensitive decision-making or risk-aware governance. Such considerations need to be both designed in from the start, and able to adapt to evolving needs so that risk mitigation doesn't stifle future value.
Taking a facilitative, principles-based approach, rather than one that is hard-wired around specifics, is a good way to support process stakeholders in defining scenarios and goals that agentic AI could help address. An interesting contribution here comes from the Council for International Organisations of Medical Sciences (CIOMS) Working Group XIV on AI in Pharmacovigilance. It aims to create a common foundation for regulators, industry and technology providers that can keep pace with the unprecedented rate of technological advancement.
Companies can complement such principles with their own systems-thinking or service-design methods – e.g. developing journey maps to plot how agentic workflows trigger, interact and evolve – to help translate high-level principles into operational governance models, including the degrees of autonomy afforded to individual agents.
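One way to keep "bounded autonomy" adaptable rather than hard-wired is to express each agent's permitted degree of autonomy as declarative policy that governance teams can revise over time. The agent names, policy fields and thresholds below are hypothetical, chosen purely to illustrate the idea.

```python
# Hypothetical bounded-autonomy policy: which agents may act without a human
# in the loop, and the confidence floor below which they must escalate.
AUTONOMY_POLICY = {
    "meddra_coding":  {"may_act_alone": True,  "confidence_floor": 0.95},
    "signal_triage":  {"may_act_alone": False, "confidence_floor": 0.80},
}

def requires_human_review(agent: str, confidence: float) -> bool:
    """Return True when the policy demands that a human checks this action."""
    policy = AUTONOMY_POLICY[agent]
    return (not policy["may_act_alone"]) or confidence < policy["confidence_floor"]
```

Because the policy is data rather than code, widening or tightening an agent's autonomy as confidence grows is a governance decision, not a re-engineering exercise.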
This work can then inform adaptable provisions for human involvement (e.g. as technology continues to advance, and as confidence grows). This will allow companies to move at their own pace towards trusted use of agentic AI and the full range of available benefits.
