David Garratt, Principal Consultant for Automated IT Governance, Risk and Compliance at Verista, explores how shifting to “governance as code”, starting with practical automation of core controls, adopting risk-based AI classification, and cultivating a new generation of governance specialists can help regulated organisations close the gap between AI adoption and oversight and scale AI responsibly.
Artificial intelligence is rapidly moving from experimental tools to embedded operational technology across regulated industries. From process optimisation and quality trend analysis to document review and decision support, AI is beginning to influence how work is done, not just how fast it gets done. Yet while adoption accelerates, governance programs have struggled to keep pace. Many organisations are still relying on static policies, manual audits, and documentation-heavy oversight models that were never designed for systems that learn, adapt, and operate at scale.
The result is a widening gap between how AI systems actually behave in production and how they are governed on paper. Closing that gap requires more than new policies or additional review committees. It requires rethinking governance itself, not as a set of written instructions, but as an operational capability built into systems, workflows, and infrastructure from the start.
As AI capabilities advance, a basic “responsible use of AI” policy also becomes part of that foundation. Similar to annual cybersecurity awareness programs, these policies set organisation-wide expectations for when and how AI tools may be used, which use cases are prohibited or restricted, and when additional review is required before AI output can influence regulated activities.
Why traditional governance models break down under AI
In regulated environments, most governance frameworks evolved around predictable systems. A traditional application behaves the same way every time it runs. Controls can be documented, validated once, and periodically re-verified through scheduled audits. Deviations are exceptions, not the norm. AI changes that equation.
AI systems are probabilistic rather than deterministic, producing outputs that vary even when inputs appear similar or are identical. They evolve over time as data changes, models are retrained, and usage expands. When governance relies too heavily on rigid standard operating procedures, this variability creates friction. Teams end up documenting exceptions rather than managing risk, writing deviations instead of improving controls.
In highly regulated sectors such as life sciences, this rigidity can become counterproductive. Overly prescriptive SOPs may force teams into constant remediation cycles as soon as real-world
use diverges from documented intent. Instead of supporting responsible innovation, governance becomes a bottleneck, slowing adoption without meaningfully improving safety or compliance.
Governance as code: Shifting from documentation to control
A more resilient approach treats governance as something systems do, not just something people write about. Often referred to as “governance as code,” this mindset shifts oversight away from manual verification and toward automated enforcement embedded directly into workflows and platforms. In practice, governance as code means using tools to define and enforce rules continuously. Infrastructure configurations, access controls, encryption standards, identity management, and change workflows can all be governed through automated checks rather than periodic reviews. Instead of validating compliance once and hoping nothing drifts, organisations can verify controls every time a system is deployed, modified, or accessed.
This approach does not eliminate documentation, but it changes its role. Written policies move up a level, articulating principles, intent, and accountability, while the operational details are enforced by the systems themselves. For auditors and regulators, this creates a clearer picture of actual control effectiveness, supported by real-time evidence rather than static artefacts.
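The shift from periodic review to continuous enforcement can be sketched in code. The following is a minimal, illustrative policy-as-code check in Python; the resource fields and control identifiers (`ENC-01`, `IAM-02`, `NET-03`) are assumptions for demonstration, not tied to any specific platform or framework:

```python
# Minimal policy-as-code sketch: controls expressed as executable checks
# that run on every deployment or change, rather than at scheduled audits.
# Resource attributes and control IDs here are illustrative only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Resource:
    name: str
    encryption_at_rest: bool
    mfa_required: bool
    public_access: bool

# Each control pairs a human-readable identifier with a predicate.
CONTROLS: list[tuple[str, Callable[[Resource], bool]]] = [
    ("ENC-01: data encrypted at rest", lambda r: r.encryption_at_rest),
    ("IAM-02: MFA required for access", lambda r: r.mfa_required),
    ("NET-03: no public network exposure", lambda r: not r.public_access),
]

def evaluate(resource: Resource) -> list[str]:
    """Return the list of controls a resource fails."""
    return [name for name, check in CONTROLS if not check(resource)]

if __name__ == "__main__":
    db = Resource("lims-db", encryption_at_rest=True,
                  mfa_required=False, public_access=False)
    for failure in evaluate(db):
        print("FAIL:", failure)
```

In a real deployment the same idea is usually implemented through cloud-native policy engines or dedicated policy-as-code tooling, with failed checks surfaced as real-time evidence rather than audit findings.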
Laying the groundwork: Automating what matters first
Organisations beginning this transition often see the fastest progress by starting at the framework level rather than attempting to automate every control at once. Governance, risk, and compliance platforms provide a structured way to map regulatory standards to defined controls, track accountability, and assess maturity across domains such as change management, incident response, and access control.
Infrastructure is another natural starting point. Configuration settings, such as encryption, multi-factor authentication, and network exposure, are well suited to automated enforcement. Cloud platforms and third-party tools already provide mechanisms to define guardrails that apply regardless of how systems are provisioned. When infrastructure controls are consistently enforced, they form a stable foundation for the applications and AI systems built on top of them.
The human dimension is equally important. Quality and compliance professionals must become fluent in the tools that implement these controls. Governance as code does not replace governance roles; it elevates them. Oversight shifts from writing procedures to designing, monitoring, and refining the systems that enforce them. Over time, this evolution will give rise to a new kind of
governance specialist who blends traditional quality and risk expertise with fluency in cloud platforms, GRC tooling, and automation pipelines.
AI governance requires flexibility by design
While governance as code provides the structural backbone, AI governance introduces additional complexity. AI systems often operate across organisational boundaries, touching IT, business units, legal, ethics, and quality functions simultaneously. Effective oversight depends on shared understanding, not only of regulatory expectations, but also of what AI can and cannot reliably do.
One of the most common early mistakes organisations make is applying traditional validation expectations too rigidly to AI. Attempting to lock down AI behaviour through overly detailed procedures can backfire, generating constant deviations as models evolve or use cases expand. A more effective approach establishes high-level principles and baseline controls while allowing flexibility at the implementation level.
Smaller, narrowly scoped AI projects often provide the best learning environment. When subject matter experts are familiar with the underlying data and expected outcomes, they can assess whether AI outputs align with reality and adjust accordingly. This approach reinforces accountability while acknowledging AI’s probabilistic nature.
Ethics and responsible use should be embedded into these early pilots as well. Narrowly scoped projects with well-understood data sets give teams a controlled environment to test for issues such as bias, inappropriate use of personal information, or over-reliance on AI outputs, while keeping subject matter experts directly in the loop to validate results.
Risk classification as the cornerstone of AI governance
Rather than attempting to govern all AI systems equally, leading organisations are adopting risk-based classification models. These frameworks evaluate AI use cases along dimensions such as intended purpose, level of autonomy, impact on regulated activities, proximity to patient or product safety, and the ability for humans to verify outputs.
Systems that support low-risk, informational tasks, such as summarising internal documents or assisting with research, can tolerate occasional inaccuracies if the consequences are minimal. In contrast, AI systems that influence regulated decisions, manufacturing processes, or patient outcomes require far greater oversight, validation rigour, and transparency.
Human verifiability is critical to this assessment. When experts can reasonably evaluate AI outputs against known expectations, risk is reduced. When outputs cannot be independently verified, governance must account for that uncertainty through stricter controls or limited deployment. Classification frameworks transform abstract ethical concerns into actionable governance decisions, guiding how AI systems are approved, monitored, and evolved over time.
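A classification model along these lines can be made concrete as a simple scoring function. The sketch below is illustrative only; the dimensions follow the text (autonomy, impact on regulated activities, human verifiability), but the scoring weights, thresholds, and example use cases are assumptions for demonstration:

```python
# Illustrative risk-classification sketch. Weights and thresholds are
# placeholder assumptions; a real framework would be calibrated by the
# organisation's quality and risk functions.

from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    autonomy: int            # 0 = human-in-the-loop, 2 = fully autonomous
    regulated_impact: int    # 0 = none, 2 = direct regulated/patient impact
    human_verifiable: bool   # can an expert check outputs against expectations?

def classify(uc: AIUseCase) -> str:
    score = uc.autonomy + uc.regulated_impact
    if not uc.human_verifiable:
        score += 2           # unverifiable outputs raise the risk tier
    if score >= 4:
        return "high"        # formal validation, monitoring, restricted rollout
    if score >= 2:
        return "medium"      # baseline controls plus periodic review
    return "low"             # responsible-use policy applies

summariser = AIUseCase("document summariser", autonomy=0,
                       regulated_impact=0, human_verifiable=True)
batch_release = AIUseCase("batch-release decision support", autonomy=1,
                          regulated_impact=2, human_verifiable=False)

print(classify(summariser))     # low
print(classify(batch_release))  # high
```

The useful property is not the particular numbers but the mechanism: classification becomes a repeatable, auditable decision rather than an ad hoc judgement, and the tier directly determines which controls apply.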
Governance that evolves with the technology: From principles to practice
Within regulated environments, translating AI governance theory into operational reality requires both domain expertise and technical fluency. Verista, a provider of automation, compliance, and quality solutions for the life sciences industry, operates at this intersection, supporting organisations as they modernise governance frameworks without losing sight of regulatory obligations.
In many organisations, IT and engineering teams are already automating checks and guardrails as part of their normal work, but those controls are not always visible to quality or compliance stakeholders. Making those existing safeguards explicit, mapping them into formal control frameworks, and evidencing them through GRC platforms allows companies to “take credit” for the automation they already have in place while identifying true gaps that require new investment.
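This “taking credit” step amounts to mapping checks that engineering teams already run onto formal control IDs so their status can be evidenced in a GRC platform. A minimal sketch, with hypothetical check names and control IDs:

```python
# Sketch of mapping existing automation to a formal control framework.
# Check names, control IDs, and statuses are illustrative assumptions.

existing_checks = {
    "ci_secret_scan": "passing",
    "terraform_encryption_policy": "passing",
    "nightly_access_review": "failing",
}

control_map = {
    "AC-01 Access control review": ["nightly_access_review"],
    "SC-02 Encryption at rest": ["terraform_encryption_policy"],
    "CM-03 Change management approvals": [],  # true gap: no automation yet
}

def control_status(checks: list[str]) -> str:
    """Roll a control's mapped checks up into a single status."""
    if not checks:
        return "GAP"
    if all(existing_checks[c] == "passing" for c in checks):
        return "EVIDENCED"
    return "FAILING"

for control, checks in control_map.items():
    print(f"{control}: {control_status(checks)}")
```

Controls with no mapped check surface as genuine gaps needing investment, while controls backed by passing automation are evidenced continuously, without new work.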
Verista’s work reflects the core principles outlined above: embedding governance into systems rather than relying solely on documentation, aligning quality and IT stakeholders around shared controls, and applying risk-based thinking to emerging technologies. By helping organisations map regulatory expectations to automated controls and governance frameworks, Verista enables continuous compliance while preserving flexibility for innovation.
Rather than treating AI as an isolated compliance challenge, Verista positions it within the broader governance ecosystem, linking AI oversight to data governance, infrastructure controls, and lifecycle management. This integrated approach allows organisations to scale AI responsibly, adapting controls as systems evolve rather than rewriting governance from scratch.
AI governance is not a one-time implementation. As systems become more autonomous and complex, governance frameworks must evolve alongside them. Foundational elements, such as data governance, risk classification, and automated controls, create a stable baseline, but continuous learning is essential. That continuous learning must apply not only to models and tools, but also to the governance workforce, so that long-standing quality and validation knowledge is transferred to a new generation of practitioners who are comfortable working directly with automated controls and AI-enabled systems.
Organisations that succeed view governance as a living system. They start with high-level principles, implement
controls appropriate to current risk levels, and refine oversight as understanding deepens. By resisting the urge to over-engineer governance early and instead focusing on adaptability, they position themselves to respond thoughtfully as AI capabilities mature. In this way, effective AI governance becomes less about predicting every possible outcome and more about building resilient structures that support responsible decision-making over time.