Jerry Temko, Managing Director, In-House Counsel Group, Major, Lindsey & Africa.
In today’s world of business, artificial intelligence seems ubiquitous. Applications of the technology are springing up and revolutionising industries across the board, and the pharmaceuticals sector is no exception. AI tools that can aid the discovery, analysis, and testing of new drugs are a particular area of interest for businesses looking to accelerate and streamline the development and manufacture of new medicines.
However, for all the promise that AI holds for the industry, there are several challenges and risks associated with the development and integration of emerging AI tools. Life science companies are therefore becoming more alive to the role that in-house legal teams will play in safely and successfully taking on the AI challenge.
There are a number of advantages to having tech-savvy legal minds in the room when strategising AI integration. First, lawyers have already made headway in adopting and using AI within their own departments, and so have valuable experience in how to trial, select, and deploy the technology. Second, while the regulatory landscape around AI is still evolving, in-house counsel have the knowledge and insight needed to ensure that AI is adopted in accordance with existing and upcoming regulatory safeguards and legislation.
Learning from Experience
The legal sector as a whole has been quick to seize the opportunities presented by AI, and in-house legal departments have been eager to explore how the legal operations function can be correspondingly automated and streamlined. For example, teams are installing contract management systems to reduce the volume of work flowing in from other departments and automating compliance approvals – a highly valuable capability in tightly regulated industries such as pharmaceuticals. Other areas that fall under a legal department’s jurisdiction, such as GDPR and data privacy compliance, have also been given the AI treatment.
This process has not been straightforward; every step has required careful thought and creative solutions to build a viable strategy for adopting AI. The same will be required for the automation of drug discovery, testing, and clinical trials – practically all conceivable uses of AI-driven automation in the development and manufacturing of new medicines.
Insofar as pharmaceutical manufacturing is concerned, AI offers numerous opportunities for process improvement: it can support process design and scale-up, advanced process control, process monitoring and fault detection, trend monitoring, reduction of material waste, improved production reuse, and predictive maintenance. However, companies will need to understand how AI applies to manufacturing operations that are subject to regulatory oversight (e.g., CGMP compliance, new drug or biologics applications). This is where legal support will be indispensable.
In-house lawyers will be able to draw on their own experiences of taking on an AI transformation to help guide the wider business strategy of implementing AI tools. Their familiarity and deep knowledge of wider business operations will also help to identify where the technology will be of most benefit, what areas of the business are the most AI-ready, and what AI tools are likely to be the right fit. Overall, having a commercially-minded lawyer advising on the AI strategy can be hugely beneficial.
Putting up the Guardrails
As important as it is for in-house lawyers to play the role of experienced strategic adviser on implementing AI, in strictly regulated industries like pharmaceuticals it is equally important to have a keen legal mind on the case to ensure that an AI transformation remains compliant. Regulators across jurisdictions are in the process of building a regulatory framework for the use of AI in pharmaceuticals, with the European Medicines Agency having recently released a guidance paper outlining recommendations for the use of AI and machine learning during each phase of a medicine’s lifecycle.
In light of this, we are increasingly seeing businesses recognise the need to have legal expertise with a deep knowledge and understanding of the complex and often overlapping laws around data privacy, transparency of AI algorithms, and the risks of AI failures or mistakes.
Our conversations with technology, legal, and privacy experts have highlighted several major risk areas that pharma companies must prioritise safeguarding against – through policies, procedures, and monitoring – to ensure the safe implementation of AI.
Quality control: AI tools can be prone to ‘hallucinations’, where the tool produces plausible-sounding but false or erroneous output, often because of input errors or poor-quality data. Human quality control is therefore essential, which in turn requires specialised training in how the AI is programmed so that system faults can be detected.
Potential for litigation: The risk of copyright disputes can be high when using AI tools, as many of them generate ‘new’ content by sampling prior work products. That material is generally sourced from the internet without attribution, and the unauthorised use of text or images could well trigger litigation – another reason human oversight is required.
Privacy: AI-powered tools will collect and analyse large swathes of data, in some cases including personal information. This carries with it the responsibility to collect, store, and share data securely, in line with legal frameworks such as the EU GDPR.
Confidentiality: It is commonly overlooked that any information entered into an AI tool may be disclosed to third-party users, or stored in an AI provider’s database beyond the inputting company’s control, even if it is never reproduced verbatim. Businesses must be conscious of this, and should consider prohibiting the input of confidential information into AI tools.
AI undoubtedly has the potential to revolutionise the pharmaceutical sector. However, as businesses look to integrate AI into their business model, decision makers should keep in mind the instrumental value that a tech-savvy legal mind with experience in working with AI has when setting out a strategy for adopting and integrating this emerging technology.