GLOSSARY OF KEY AI GOVERNANCE TERMS

Demystifying AI Governance, Safety, Security & Compliance

AI governance is crowded with overlapping terms, vague jargon, and regulatory shorthand.

This glossary exists to bring you clarity.

It explains key concepts in plain language, helping leaders, risk & compliance teams, and AI practitioners understand how AI actually behaves in organisations, and how it should be governed in practice.

A

Audit Trail (AI Systems)
An audit trail is a structured record capturing how an AI system was designed, trained, configured, deployed, and used over time. It enables traceability of decisions, supports regulatory scrutiny, and allows organisations to investigate incidents, demonstrate accountability, and improve systems based on evidence rather than assumptions.

Adversarial Attacks
Adversarial attacks involve intentionally crafted inputs designed to exploit weaknesses in AI models, causing them to behave incorrectly while appearing to function normally. These attacks expose the fragility of many AI systems and highlight why robustness and security testing are essential components of responsible deployment.

Automated Compliance
Automated compliance refers to the use of technology to continuously enforce governance rules, monitor AI behaviour, and generate evidence for audits without relying solely on manual processes. It helps organisations scale oversight, reduce human error, and ensure controls are applied consistently across AI systems.

AI Compliance
AI compliance is the practice of ensuring that AI systems meet applicable legal, regulatory, and standards-based requirements throughout their lifecycle. This includes obligations related to transparency, accountability, privacy, documentation, and human oversight under frameworks such as the EU AI Act, GDPR, and ISO standards.

AI Security
AI security focuses on protecting AI systems from misuse, manipulation, and compromise. This includes defending against threats such as data poisoning, model theft, adversarial inputs, supply-chain vulnerabilities, and prompt-level attacks, all of which can undermine system integrity and trust.

AI Safety
AI safety addresses whether AI systems behave as intended and avoid causing harm, even in edge cases or unexpected conditions. It is concerned with failure modes such as hallucinations, uncontrolled autonomy, and unreliable outputs, particularly where AI systems influence high-impact decisions.

AI Guardrails
AI guardrails are constraints embedded into systems to limit unsafe, non-compliant, or undesired behaviour at runtime. They may restrict outputs, enforce policy boundaries, require human approval, or trigger intervention when predefined thresholds are crossed, helping translate governance intent into operational control.

AI Risk Management
AI risk management is the ongoing process of identifying, evaluating, mitigating, and monitoring risks introduced by AI systems. These risks may be technical, legal, ethical, or operational, and must be managed continuously as models evolve, data changes, and usage expands.

AI Governance
AI governance is the organisational framework that determines how AI decisions are made, supervised, and challenged. It combines policy, accountability, risk management, and operational controls to ensure AI systems are aligned with regulatory obligations, organisational values, and real-world responsibility.


B

Bias in AI
Bias in AI occurs when systems produce systematically unfair or distorted outcomes due to skewed data, design choices, or contextual misuse. Left unmanaged, bias can create legal exposure, reputational damage, and ethical failures, particularly in systems affecting people’s rights or opportunities.


C

Continuous Validation
Continuous validation is the practice of regularly testing AI systems in live environments to ensure performance, safety, and compliance remain intact over time. It addresses risks such as model drift, emerging bias, and changing data conditions that static testing cannot capture.


D

Data Privacy (in AI)
Data privacy in AI concerns how personal and sensitive data is collected, processed, stored, and used by AI systems. It requires safeguards such as minimisation, access control, anonymisation, and lawful processing to ensure compliance with data protection regulations and maintain user trust.


E

Explainability (XAI)
Explainability refers to the ability to understand and articulate how an AI system arrives at a particular output or recommendation. It is essential for regulatory compliance, internal accountability, and human oversight, especially where decisions have legal or ethical consequences.

EU AI Act
The EU AI Act is the European Union’s comprehensive regulatory framework for artificial intelligence. It classifies AI systems by risk level and imposes escalating obligations on high-risk uses, including governance, documentation, oversight, and post-deployment monitoring.


H

Hallucinations
Hallucinations occur when AI systems generate outputs that are confident and fluent but factually incorrect or fabricated. They represent a fundamental reliability risk, particularly in decision-support, legal, medical, or customer-facing contexts where accuracy matters.

Human Oversight
Human oversight ensures that AI systems remain subject to meaningful supervision and intervention. It defines when humans must review, approve, override, or halt AI-driven actions, preventing blind reliance on automated decisions in critical scenarios.

High-Risk AI Systems
High-risk AI systems are those whose failure or misuse could materially affect safety, fundamental rights, or access to essential services. Regulations treat these systems differently due to their potential impact, requiring stricter governance, validation, and accountability mechanisms.


I

ISO 42001
ISO 42001 is an international management system standard that sets requirements for governing AI responsibly. It provides a structured approach to roles, controls, policies, and continual improvement across the AI lifecycle.


M

Model Drift
Model drift describes the gradual degradation of AI performance caused by changes in data, behaviour, or operating conditions. Without monitoring and intervention, drift can lead to silent failures, compliance breaches, and unreliable decision-making.

Model Lifecycle Management (MLM)
Model lifecycle management governs how AI models are developed, tested, deployed, monitored, updated, and eventually retired. It ensures that models remain fit for purpose and compliant as conditions change.

Model Risk Management (MRM)
Model risk management is a discipline focused on identifying and controlling risks arising from model design, assumptions, and usage. Originally rooted in financial services, it is increasingly applied to AI systems to ensure robustness, transparency, and accountability.


N

NIST AI Risk Management Framework (AI RMF)
The NIST AI Risk Management Framework is a voluntary framework developed in the United States to help organisations manage AI risks systematically. It emphasises governance, mapping, measurement, and risk response across the AI lifecycle.

No-Code Policy Engine
A no-code policy engine allows governance and risk teams to define and enforce AI controls without writing software code. It lowers the barrier to operationalising AI governance by enabling policy changes to be implemented quickly and consistently.


P

Prompt Injection
Prompt injection is a security vulnerability in language-based AI systems where crafted inputs override intended instructions or safeguards. It can lead to data leakage, policy violations, or unsafe outputs if not properly mitigated.


R

Real-Time Monitoring (AI Systems)
Real-time monitoring involves continuously observing AI inputs, outputs, and behaviour in production. It enables early detection of anomalies, policy breaches, and emerging risks, allowing organisations to respond before issues escalate.

Responsible AI
Responsible AI is an approach to designing and deploying AI systems that prioritises fairness, safety, transparency, and accountability. It recognises that technical performance alone is insufficient without social trust and ethical alignment.


T

Transparency
Transparency in AI refers to the visibility of how systems are built, trained, governed, and used. It enables scrutiny by regulators, auditors, and stakeholders, and is foundational to accountability and informed oversight.


U

Use-Case Governance
Use-case governance focuses on controlling how and where AI is applied, rather than governing AI as an abstract capability. It evaluates specific applications based on context, risk, impact, and regulatory exposure, recognising that the same model can be low-risk in one use case and high-risk in another.

Unintended Consequences
Unintended consequences are outcomes produced by AI systems that were not anticipated during design or deployment. These can emerge from feedback loops, scale effects, or interactions with real-world behaviour, and are a key reason why ongoing oversight and monitoring are essential.


V

Validation (AI Systems)
Validation is the process of confirming that an AI system performs as intended, within defined limits, and for its approved purpose. This includes testing for accuracy, robustness, bias, and compliance, both before deployment and throughout the system’s operational life.

Vendor AI Risk
Vendor AI risk arises when organisations rely on third-party AI systems, models, or platforms without sufficient visibility into how they are built or governed. Managing this risk requires due diligence, contractual controls, and ongoing oversight of external AI dependencies.


W

Workflow Integration
Workflow integration refers to embedding AI systems into existing business processes in a controlled and intentional way. Poor integration often creates hidden risk, while well-governed integration ensures accountability, traceability, and alignment with operational realities.

Weak Signals
Weak signals are early indicators that an AI system may be drifting, misbehaving, or creating risk before a failure becomes obvious. Identifying and responding to weak signals is a core capability in mature AI governance and operational resilience.


X

XAI (Explainable Artificial Intelligence)
Explainable Artificial Intelligence refers to techniques and system designs that make AI behaviour interpretable to humans. XAI supports oversight, accountability, and trust by enabling stakeholders to understand not just what an AI system outputs, but why.


Y

Yield Risk (AI Systems)
Yield risk describes the gap between expected value from an AI system and the actual outcomes it produces in practice. Governance helps manage yield risk by ensuring systems are deployed appropriately, monitored continuously, and adjusted when reality diverges from assumptions.


Z

Zero-Trust AI
Zero-trust AI applies the principle of “never assume correctness” to AI systems. It treats model outputs as probabilistic and fallible, requiring verification, controls, and oversight rather than blind reliance, especially in high-impact or regulated environments.

A

Audit Trail (AI Systems)
An audit trail is a structured record capturing how an AI system was designed, trained, configured, deployed, and used over time. It enables traceability of decisions, supports regulatory scrutiny, and allows organisations to investigate incidents, demonstrate accountability, and improve systems based on evidence rather than assumptions.

Adversarial Attacks
Adversarial attacks involve intentionally crafted inputs designed to exploit weaknesses in AI models, causing them to behave incorrectly while appearing to function normally. These attacks expose the fragility of many AI systems and highlight why robustness and security testing are essential components of responsible deployment.

Automated Compliance
Automated compliance refers to the use of technology to continuously enforce governance rules, monitor AI behaviour, and generate evidence for audits without relying solely on manual processes. It helps organisations scale oversight, reduce human error, and ensure controls are applied consistently across AI systems.

AI Compliance
AI compliance is the practice of ensuring that AI systems meet applicable legal, regulatory, and standards-based requirements throughout their lifecycle. This includes obligations related to transparency, accountability, privacy, documentation, and human oversight under frameworks such as the EU AI Act, GDPR, and ISO standards.

AI Security
AI security focuses on protecting AI systems from misuse, manipulation, and compromise. This includes defending against threats such as data poisoning, model theft, adversarial inputs, supply-chain vulnerabilities, and prompt-level attacks, all of which can undermine system integrity and trust.

AI Safety
AI safety addresses whether AI systems behave as intended and avoid causing harm, even in edge cases or unexpected conditions. It is concerned with failure modes such as hallucinations, uncontrolled autonomy, and unreliable outputs, particularly where AI systems influence high-impact decisions.

AI Guardrails
AI guardrails are constraints embedded into systems to limit unsafe, non-compliant, or undesired behaviour at runtime. They may restrict outputs, enforce policy boundaries, require human approval, or trigger intervention when predefined thresholds are crossed, helping translate governance intent into operational control.

AI Risk Management
AI risk management is the ongoing process of identifying, evaluating, mitigating, and monitoring risks introduced by AI systems. These risks may be technical, legal, ethical, or operational, and must be managed continuously as models evolve, data changes, and usage expands.

AI Governance
AI governance is the organisational framework that determines how AI decisions are made, supervised, and challenged. It combines policy, accountability, risk management, and operational controls to ensure AI systems are aligned with regulatory obligations, organisational values, and real-world responsibility.


B

Bias in AI
Bias in AI occurs when systems produce systematically unfair or distorted outcomes due to skewed data, design choices, or contextual misuse. Left unmanaged, bias can create legal exposure, reputational damage, and ethical failures, particularly in systems affecting people’s rights or opportunities.


C

Continuous Validation
Continuous validation is the practice of regularly testing AI systems in live environments to ensure performance, safety, and compliance remain intact over time. It addresses risks such as model drift, emerging bias, and changing data conditions that static testing cannot capture.


D

Data Privacy (in AI)
Data privacy in AI concerns how personal and sensitive data is collected, processed, stored, and used by AI systems. It requires safeguards such as minimisation, access control, anonymisation, and lawful processing to ensure compliance with data protection regulations and maintain user trust.


E

Explainability (XAI)
Explainability refers to the ability to understand and articulate how an AI system arrives at a particular output or recommendation. It is essential for regulatory compliance, internal accountability, and human oversight, especially where decisions have legal or ethical consequences.

EU AI Act
The EU AI Act is the European Union’s comprehensive regulatory framework for artificial intelligence. It classifies AI systems by risk level and imposes escalating obligations on high-risk uses, including governance, documentation, oversight, and post-deployment monitoring.


H

Hallucinations
Hallucinations occur when AI systems generate outputs that are confident and fluent but factually incorrect or fabricated. They represent a fundamental reliability risk, particularly in decision-support, legal, medical, or customer-facing contexts where accuracy matters.

Human Oversight
Human oversight ensures that AI systems remain subject to meaningful supervision and intervention. It defines when humans must review, approve, override, or halt AI-driven actions, preventing blind reliance on automated decisions in critical scenarios.

High-Risk AI Systems
High-risk AI systems are those whose failure or misuse could materially affect safety, fundamental rights, or access to essential services. Regulations treat these systems differently due to their potential impact, requiring stricter governance, validation, and accountability mechanisms.


I

ISO 42001
ISO 42001 is an international management system standard that sets requirements for governing AI responsibly. It provides a structured approach to roles, controls, policies, and continual improvement across the AI lifecycle.


M

Model Drift
Model drift describes the gradual degradation of AI performance caused by changes in data, behaviour, or operating conditions. Without monitoring and intervention, drift can lead to silent failures, compliance breaches, and unreliable decision-making.

Model Lifecycle Management (MLM)
Model lifecycle management governs how AI models are developed, tested, deployed, monitored, updated, and eventually retired. It ensures that models remain fit for purpose and compliant as conditions change.

Model Risk Management (MRM)
Model risk management is a discipline focused on identifying and controlling risks arising from model design, assumptions, and usage. Originally rooted in financial services, it is increasingly applied to AI systems to ensure robustness, transparency, and accountability.


N

NIST AI Risk Management Framework (AI RMF)
The NIST AI Risk Management Framework is a voluntary framework developed in the United States to help organisations manage AI risks systematically. It emphasises governance, mapping, measurement, and risk response across the AI lifecycle.

No-Code Policy Engine
A no-code policy engine allows governance and risk teams to define and enforce AI controls without writing software code. It lowers the barrier to operationalising AI governance by enabling policy changes to be implemented quickly and consistently.


P

Prompt Injection
Prompt injection is a security vulnerability in language-based AI systems where crafted inputs override intended instructions or safeguards. It can lead to data leakage, policy violations, or unsafe outputs if not properly mitigated.


R

Real-Time Monitoring (AI Systems)
Real-time monitoring involves continuously observing AI inputs, outputs, and behaviour in production. It enables early detection of anomalies, policy breaches, and emerging risks, allowing organisations to respond before issues escalate.

Responsible AI
Responsible AI is an approach to designing and deploying AI systems that prioritises fairness, safety, transparency, and accountability. It recognises that technical performance alone is insufficient without social trust and ethical alignment.


T

Transparency
Transparency in AI refers to the visibility of how systems are built, trained, governed, and used. It enables scrutiny by regulators, auditors, and stakeholders, and is foundational to accountability and informed oversight.


U

Use-Case Governance
Use-case governance focuses on controlling how and where AI is applied, rather than governing AI as an abstract capability. It evaluates specific applications based on context, risk, impact, and regulatory exposure, recognising that the same model can be low-risk in one use case and high-risk in another.

Unintended Consequences
Unintended consequences are outcomes produced by AI systems that were not anticipated during design or deployment. These can emerge from feedback loops, scale effects, or interactions with real-world behaviour, and are a key reason why ongoing oversight and monitoring are essential.


V

Validation (AI Systems)
Validation is the process of confirming that an AI system performs as intended, within defined limits, and for its approved purpose. This includes testing for accuracy, robustness, bias, and compliance, both before deployment and throughout the system’s operational life.

Vendor AI Risk
Vendor AI risk arises when organisations rely on third-party AI systems, models, or platforms without sufficient visibility into how they are built or governed. Managing this risk requires due diligence, contractual controls, and ongoing oversight of external AI dependencies.


W

Workflow Integration
Workflow integration refers to embedding AI systems into existing business processes in a controlled and intentional way. Poor integration often creates hidden risk, while well-governed integration ensures accountability, traceability, and alignment with operational realities.

Weak Signals
Weak signals are early indicators that an AI system may be drifting, misbehaving, or creating risk before a failure becomes obvious. Identifying and responding to weak signals is a core capability in mature AI governance and operational resilience.


X

XAI (Explainable Artificial Intelligence)
Explainable Artificial Intelligence refers to techniques and system designs that make AI behaviour interpretable to humans. XAI supports oversight, accountability, and trust by enabling stakeholders to understand not just what an AI system outputs, but why.


Y

Yield Risk (AI Systems)
Yield risk describes the gap between expected value from an AI system and the actual outcomes it produces in practice. Governance helps manage yield risk by ensuring systems are deployed appropriately, monitored continuously, and adjusted when reality diverges from assumptions.


Z

Zero-Trust AI
Zero-trust AI applies the principle of “never assume correctness” to AI systems. It treats model outputs as probabilistic and fallible, requiring verification, controls, and oversight rather than blind reliance, especially in high-impact or regulated environments.

Why VitruvianCo. exists

Why VitruvianCo. exists

At VitruvianCo., we help organisations and senior leaders adopt AI responsibly by focusing on governance, oversight, and decision-making.

Our Offerings

1/ Designing practical AI governance systems that stand up to scrutiny.

2/ Training the humans who must approve, challenge, and defend AI decisions.





Marble bust of a man behind wooden bars.
Marble bust of a man behind wooden bars.

Our Founders

Our Founders

  • Co-Founder

    João Alves

    João brings 7+ years of experience at the intersection of law, philosophy, technology, and applied AI. He trained in Law & Jurisprudence in London, before working inside an early AI startup and later in investment banking, where he learned how professional services and regulated organisations actually make decisions under scrutiny. Most recently, he worked as a GenAI product specialist at AlphaSense, seeing first-hand how companies deploy AI, where governance breaks down, and why leaders struggle to approve AI with confidence. SydeDoor brings those perspectives together into a single, practical approach.

  • Co-Founder

    Alberto Alves

    Alberto brings 30+ years of experience across law, governance, and regulated industries. He has advised founders, executives, investors, and families through complex legal and regulatory environments, and understands where accountability ultimately lands when things go wrong. His strength lies in connecting patterns across disciplines, cycles, and generations, and in knowing precisely where pressure and risk concentrate. SydeDoor reflects that long-view perspective on governance and responsibility.

  • Co-Founder

    João Alves

    João brings 7+ years of experience at the intersection of law, philosophy, technology, and applied AI. He trained in Law & Jurisprudence in London, before working inside an early AI startup and later in investment banking, where he learned how professional services and regulated organisations actually make decisions under scrutiny. Most recently, he worked as a GenAI product specialist at AlphaSense, seeing first-hand how companies deploy AI, where governance breaks down, and why leaders struggle to approve AI with confidence. SydeDoor brings those perspectives together into a single, practical approach.

  • Co-Founder

    Alberto Alves

    Alberto brings 30+ years of experience across law, governance, and regulated industries. He has advised founders, executives, investors, and families through complex legal and regulatory environments, and understands where accountability ultimately lands when things go wrong. His strength lies in connecting patterns across disciplines, cycles, and generations, and in knowing precisely where pressure and risk concentrate. SydeDoor reflects that long-view perspective on governance and responsibility.