What’s the best way to introduce AI to enhance automation of high-volume pharmacovigilance activities, such as adverse event case intake and processing?
Claudia Lehman, head of global pharmacovigilance operations at Boehringer Ingelheim, and Lucinda Smith, ArisGlobal’s chief safety product officer, reflect on the critical success factors.
Over the last five years or more, Boehringer Ingelheim has been actively harnessing technology-based automation to streamline and boost its pharmacovigilance activities. What inspired the company to implement AI in a patient safety context?
Lehman: We had seen some impressive efficiencies and quality gains from our use of rule-based automation.
We could see the potential to apply AI to aggregate document writing, analytics and other tasks in due course. But the important thing was to get started – to build experience and expertise in a relatively safe space, and then broaden its application over time. We identified case intake and case processing as good use cases for AI. Compared with other projects in patient safety, this type of application is easily controllable through human review.
Smith: The danger of delaying AI uptake until a more optimal moment is that you forfeit the early wins, such as operational efficiencies, as well as the chance to build knowledge. With the pace of technology advancement as it is, the longer you wait, the further you fall behind.
How did you progress? Did you establish a formal AI project?
Lehman: No, we defined this as a technical initiative because we were starting small. We wanted to control the scope and keep monitoring what we were doing, and we wanted to proceed quickly and flexibly, with decision-making kept close to the topic so that continuous interaction was possible.
Our computer system validation group guided us too – in how to factor the use of AI into our validation and testing plans, and in the risk assessment we would need to do (what we would have to document, how we would assess mitigation, and how we would outline quality control processes). You can’t just jump into AI without understanding and planning for all of this.
We had to factor in our case processing vendor here too, because they would be performing the quality control – so they would need to understand where the information was coming from, and how AI-enabled activity would differ from existing, validated, rule-based automation. Taking the time to work through this also ensured the vendor’s team didn’t see the technology as a threat.
Did you encounter any technical issues?
Lehman: During early testing, we experienced some issues arising from differences between the interfaces, because the AI functionality was integrated into existing workflow automation. It was a useful reminder of the need to consider and re-validate not just individual elements but also the overall process when introducing or enhancing automation. Process qualification means testing that everything that goes into the AI engine comes back, for instance; that the fields are extracted into the right data points in the system; and that the whole process still works within the system. Initially that wasn’t the case but, with adjustments to the system, we made it work.
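To make that round-trip idea concrete, here is a minimal sketch of the kind of qualification check described above. It is purely illustrative: the field names, the flat source record, the nested extracted record and the qualify_extraction function are all hypothetical, not a description of Boehringer Ingelheim’s actual system.

```python
# A minimal, illustrative process-qualification check: verify that every
# field sent to the AI extraction engine comes back and lands in the
# expected data point. All field names and structures are hypothetical.

REQUIRED_FIELDS = {
    # source field -> dotted path in the extracted case record
    "patient_age": "patient.age",
    "suspect_drug": "drug.name",
    "adverse_event": "event.term",
    "onset_date": "event.onset_date",
}

def qualify_extraction(source_case: dict, extracted_case: dict) -> list:
    """Return a list of qualification failures for one test case."""
    failures = []
    for source_key, target_path in REQUIRED_FIELDS.items():
        expected = source_case.get(source_key)
        # Walk the dotted path down into the nested extracted record.
        node = extracted_case
        for part in target_path.split("."):
            node = node.get(part) if isinstance(node, dict) else None
        if expected is not None and node is None:
            failures.append(f"{source_key}: value lost in extraction")
        elif node != expected:
            failures.append(f"{source_key}: expected {expected!r}, got {node!r}")
    return failures

# Example run over one known test case from a validation set.
source = {"patient_age": 54, "suspect_drug": "DrugX",
          "adverse_event": "nausea", "onset_date": "2023-04-01"}
extracted = {"patient": {"age": 54},
             "drug": {"name": "DrugX"},
             "event": {"term": "nausea", "onset_date": "2023-04-01"}}
print(qualify_extraction(source, extracted) or "All fields round-tripped.")
```

In practice such checks would run over a full validation set of known cases, with any failure routed to human review – the point is simply that qualification covers the end-to-end process, not just the AI component.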
Smith: Even at this scale of initiative, change management is critical – due to changes to the way people operate, to their mindsets, to processes, and to culture. There is a need to build trust, as well as skills.
Lehman: We had an advantage here, having started our automation journey at least five years earlier. We had experience of where and when reviews of data are needed, for instance. We have adopted the “Four Eyes” principle (a second set of eyes), which has been helpful in building confidence: it ensures an optimum level of control over data as we defer increasingly to AI. We also need to be mindful of the risks, so we’re providing a wealth of guidance for users on everything from “What is AI?”, “What is an algorithm?” and “What is inference?” to “What are hallucinations?” and “What is the risk?”.
There is a middle ground between blindly trusting AI and being so risk-averse that you reject the technology, but teams do need to understand safe use and the personal accountability that sits with each user.
How do you see people’s roles evolving, as AI becomes more embedded in PV?
Lehman: Introducing AI presents a chance to challenge the way things are done, and review whether there is scope to reinvent a process in an electronic/digital context. The end goal is always good-quality data and a robust PV system, but there probably does need to be an evolution of PV roles.
If we look across a whole process, for instance, where can AI truly help and add value, and where do experts need to step in? Where will targeted training help people make a bigger difference? That could be in analysing exceptions, for instance, as technology takes over more of the manual work. The same scrutiny can be applied on the case processing vendor’s side. The more that these companies can harness automation options, including AI, to streamline transactional work, the greater the scope for their own teams to add new value.
What recommendations would you give to other pharma companies considering deploying AI in their own PV activities, based on what you’ve achieved and learnt?
Lehman: It would be to get started, so you can build experience. If you start small, you have a chance to iron out issues before extending AI-based automation to larger work volumes or new use cases.
The broader opportunity is to capture rich information, from free-text patient narratives, that could otherwise be missed. Every individual who gets in touch and recounts their story adds to our understanding of the safety profile of our drugs. Even a non-serious case might include something in the free text that points to serious event information, and we owe it to the safety of our patients to capture and harness more of those critical insights.
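As a purely illustrative sketch of the kind of free-text screening this points to (not a description of Boehringer Ingelheim’s or ArisGlobal’s actual methods), a first-pass check might flag cases coded as non-serious whose narrative nevertheless mentions one of the ICH E2A seriousness criteria. The cue list and function below are hypothetical; a production system would rely on a validated model and MedDRA coding rather than keywords.

```python
import re

# Hypothetical cue list loosely based on the ICH E2A seriousness criteria
# (death, life-threatening, hospitalisation, disability, congenital anomaly).
# A validated model plus MedDRA coding would replace keywords in practice.
SERIOUSNESS_CUES = [
    r"\bhospitali[sz]ed\b", r"\blife[- ]threatening\b", r"\bdied\b",
    r"\bdeath\b", r"\bfatal\b", r"\bdisabilit(?:y|ies)\b", r"\bcongenital\b",
]

def flag_for_review(narrative: str, coded_serious: bool) -> bool:
    """Flag a case coded as non-serious whose free text hints otherwise."""
    if coded_serious:
        return False  # already routed through the serious-case workflow
    text = narrative.lower()
    return any(re.search(cue, text) for cue in SERIOUSNESS_CUES)

# Example: a non-serious intake whose narrative mentions hospitalisation.
narrative = "Patient reported nausea and was briefly hospitalised overnight."
print(flag_for_review(narrative, coded_serious=False))  # True -> human review
```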