Updesh Dosanjh, practice leader, Pharmacovigilance Technology Solutions, IQVIA, explains how artificial intelligence (AI) has become deeply embedded in pharmaceutical operations, especially in pharmacovigilance.
From flagging adverse events to automating data entry and accelerating reporting timelines, AI is reshaping how safety is managed. However, one persistent challenge remains: defining success in a way that reflects both operational performance and human outcomes.
Many organisations still focus on surface-level metrics, such as hours saved or faster processing speeds. While these are useful indicators, they do not capture the broader impact of AI or the critical role of the human behind the machine. In a sector where safety, clinical judgment and patient trust are central, AI should be seen not as a replacement for professionals, but as a tool that enhances their expertise and allows them to focus on higher-value work.
Success requires cultural investment
Though technology continues to evolve at a rapid pace, it still cannot succeed in isolation. Adoption of tools like AI or large language models hinges on trust, and trust is built through involvement. Instead of imposing AI tools from the top down, successful organisations involve pharmacovigilance and clinical stakeholders early in the process. One effective strategy is to create cross-functional working groups that help define requirements, test prototypes and provide structured feedback. This approach not only avoids costly rework later but also helps tailor the technology to real-world workflows.
Training is equally critical. Instead of relying solely on vendor-led sessions, companies should supplement AI onboarding with mentorship from internal “super users” who deeply understand both the technology and the business context. These champions bridge the gap between technical capabilities and real-world applications, showing colleagues how AI can support their roles rather than compete with them.
A 2025 survey found that 58% of employees using AI reported reduced stress, and 82% said it improved the quality of their work. Measuring internal sentiment through regular surveys, or tracking retention rates, can help assess adoption success. The clearest indicator, however, is when teams voluntarily participate in AI pilot programmes. That level of engagement reflects genuine trust in the tools, and a belief that AI is there to empower, not replace.
Looking beyond ROI
As more organisations use generative AI for tasks like drafting adverse event narratives, many report time savings, but the greater value lies in enabling skilled reviewers to focus on complex clinical assessments. The question organisations should be asking is not “How many hours were saved?” but rather, “What higher-value activities did this enable?”
According to Deloitte, the internal rate of return on AI-related pharmaceutical projects increased from 1.2% in 2022 to 4.1% in 2023. That rise was attributed to more targeted use cases and stronger alignment between technology and business goals. Organisations that measured downstream impact, such as regulatory accuracy, faster signal detection and improved staff allocation, saw the most meaningful results.
To replicate that success, companies need to go beyond financial modelling and adopt a broader view of what AI-driven success looks like.
Building a meaningful metrics framework
A more effective approach begins with four dimensions: operational optimisation, resource reallocation, analytical quality and workforce engagement. Each requires its own method of measurement.
Optimising operations: Traditional workflows, especially those without AI support, tend to be fragmented and time-intensive. Integrating AI enables the automation of tasks like data processing and report generation, which simplifies operations and accelerates output. This not only boosts efficiency but also supports timely regulatory compliance. Findings from Artificial Intelligence in Pharmaceutical Technology and Drug Delivery Design explain how AI contributes to optimisation across formulation, delivery and predictive modelling.
Enhancing analytical accuracy: In pharmacovigilance, maintaining high data quality is critical. As more professionals touch the data, the potential for human error increases. AI can reduce this risk by standardising how data is processed and analysed, resulting in more accurate and consistent safety reporting. With improved data integrity, adverse events can be identified and communicated more efficiently, strengthening patient protection.
Refocusing expert capacity: Tasks such as data entry or initial case review, while necessary, often underuse the skills of trained professionals. AI-driven automation allows clinical and safety experts to shift their attention to higher-value work, such as signal evaluation or regulatory analysis. This shift enables teams to take on more meaningful, intellectually demanding responsibilities, improving both productivity and job satisfaction.
A regulatory perspective
From the European Medicines Agency’s perspective, consistency, traceability and timely reporting are non-negotiable, and AI tools must meet these standards. But regulators are also scrutinising how companies govern these systems. Success here means documenting how AI decisions are made, logging user overrides and maintaining clear, auditable records.
In practice, this could involve embedding explanatory features into AI dashboards, allowing users to see which inputs influence a given recommendation. It may also mean involving regulatory affairs teams early in the development process to ensure that AI outputs meet submission requirements under Good Pharmacovigilance Practices.
Final thought: Aligning AI with purpose
Pharmaceutical companies exist to improve lives, and every tool, including AI, should be evaluated by how well it supports that mission. For AI, the real measure of success is how effectively it enhances human performance, safeguards patient safety and enables professionals to focus on the complex judgments only they can make.
The technology is here, and it is ready to be applied. The question now is whether the frameworks for measuring its impact are equally ready to rise to the occasion.