AI Governance by Design: Integration Throughout the Full Lifecycle

1. Introduction

Much has already been said and written about AI Governance, and there is no single, universally accepted framework. Organizations do not necessarily struggle with the absence of AI governance frameworks, but rather with translating these frameworks into their own organizational context. In addition, artificial intelligence is developing rapidly, which means that new risks continue to emerge and must be identified.

The AI Act (AIA) contains various obligations and rules that can be categorized into three dimensions that are of great importance for AI governance. These three dimensions form the substantive pillars of Trustworthy AI. This concept refers to ensuring the ethical, legal, and technical reliability of AI systems so that they operate safely, transparently, and in accordance with fundamental rights. The pillars are:

  1. Ethics – The Ethics Guidelines for Trustworthy AI and the Assessment List for Trustworthy AI developed by the High-Level Expert Group (HLEG) form the guiding principles for the ethical framework.
  2. Legal frameworks – Within Europe, a legal framework has been established that consists not only of the AIA but also of additional AI-related legislation, such as liability rules for AI and products (EU Product Liability Directive). In addition, other relevant legislation applies in areas such as:
    • Data (e.g., Data Governance Act)
    • Operational resilience and cybersecurity (Digital Operational Resilience Act, NIS II, Cyber Resilience Act)
    • Platforms (Digital Markets Act, Digital Services Act)
    • Content (e.g., European Media Freedom Act)
  3. Robustness and safety – Trustworthy AI is largely ensured by improving the reliability, predictability, and integrity of the technology.

Numerous frameworks, standards, and principles exist that provide further interpretation of these dimensions, each with their own advantages and disadvantages. Regulatory and ethical approaches also differ by geographical region. Examples include India’s Digital Personal Data Protection Act, the U.S. Blueprint for an AI Bill of Rights, and China’s Interim Measures for the Management of Generative Artificial Intelligence Services. Moreover, not every standard fits every organization: ISO/IEC 42001, for instance, is not always a suitable solution for Small and Medium Enterprises (SMEs).[1]

In this blog, I provide several tips and insights for establishing an AI governance framework. I also outline several key principles for designing such a framework. In doing so, I take various aspects of the AI lifecycle as the starting point.

 

2. AI Lifecycle & Modules

The AIA is naturally of great importance for an organization’s compliance obligations and the AI-related risks that accompany them. However, I argue that it is also important to derive governance from the AI lifecycle itself.

Based on the AI lifecycle, governance can be divided into several modules, which can in turn be subdivided into sub-modules. Broadly speaking, the AI lifecycle can be divided into four categories:

  1. Core Governance
  2. Risk Identification, Data Governance & Feature Engineering
  3. Training
  4. Deployment, Escalation & Ending

Based on these four categories, I describe several modules that are essential for effective AI governance.

 

2.1 Core Governance

By Core Governance, I refer to the fundamental governance structures that apply across the entire organization. Key questions include: What should governance regulate? Who is responsible for what? How is governance implemented? And at what stages does governance apply within the AI lifecycle? [2]

At the strategic level, this may include:

  • Aligning the organization’s mission, vision, risk appetite and ethical values with AI objectives
  • Establishing or redesigning a policy framework
  • AI literacy
  • Allocation of roles and responsibilities
  • Compliance and risk management
  • Cybersecurity and AI security
  • AI supply chain governance

At the operational level, these topics can be further elaborated through procedures and processes. For example, AI literacy can be developed by tailoring awareness and training programs to specific roles within the organization. Another practical approach is the use of Use Case Cards to document the objectives of specific AI systems for particular departments or processes.
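As a minimal sketch of how such a Use Case Card could be kept as structured data rather than free text, the following Python dataclass is illustrative only; the field names and example values are assumptions, not a prescribed template:

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseCard:
    """Minimal record documenting one AI use case (fields are illustrative)."""
    system_name: str
    department: str
    objective: str
    risk_classification: str   # e.g. "high-risk" under the AIA
    human_oversight: str       # how a human can intervene
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical card for an HR screening system
card = UseCaseCard(
    system_name="CV screening assistant",
    department="HR",
    objective="Pre-rank incoming applications for recruiter review",
    risk_classification="high-risk",
    human_oversight="A recruiter reviews every ranking before any rejection",
)
```

Keeping cards machine-readable in this way makes it easier to query, for example, all high-risk use cases across departments.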

 

2.2 Risk Identification, Data Governance & Feature Engineering

This category includes all activities that take place prior to training an AI model. These may include:

  • Data Governance — the strategic and normative framework for managing data within the organization
  • Data Management — cleaning data, labeling data, assembling datasets, securing data, etc.
  • Risk assessments and risk identification related to:
    • Ethical assessments (Fundamental Rights Impact Assessment, ALTAI, or regionally applicable ethical frameworks)
    • Privacy and data protection (Data Protection Impact Assessment)
    • Intellectual property
    • Database ownership (Database Directive)
    • Cybersecurity and AI security
  • Feature engineering — transforming raw data into usable input variables for machine learning.
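To make the last bullet concrete, the sketch below shows one such transformation of a raw record into model inputs. The field names and derivation rules (log-scaling income, a derived age flag) are assumptions chosen purely for illustration:

```python
import math

def engineer_features(record: dict) -> dict:
    """Transform a raw record into numeric input variables for a model."""
    age = int(record["age"])
    income = float(record["income"])
    return {
        "age": age,
        "log_income": math.log(income),  # compress a skewed monetary scale
        "is_senior": int(age >= 65),     # derived binary feature
    }

features = engineer_features({"age": "42", "income": "54000"})
```

From a governance perspective, the point is that each derived feature is an explicit, reviewable decision that can be documented and assessed for risk.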

 

2.3 Training

This category consists of the phase in which an AI model is trained and evaluated. Training and the associated obligations can be incorporated into a separate governance module that aligns with the AIA requirements for robustness and consistency. This may include defining thresholds for:

  • Accuracy
  • Bias metrics
  • Explainability requirements
  • Logging and traceability

It is also important to document why specific thresholds were chosen.
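Such thresholds can be encoded as a simple release gate that a trained model must clear before leaving the training phase. The sketch below is a minimal illustration; the metric names, threshold values, and rationale strings are assumptions, not figures mandated by the AIA:

```python
# Each threshold carries its documented rationale alongside the value.
THRESHOLDS = {
    "accuracy":               (0.90, "baseline set in model risk policy v1"),
    "disparate_impact_ratio": (0.80, "four-fifths rule used as internal floor"),
}

def failing_metrics(metrics: dict) -> list[str]:
    """Return the names of metrics that fall below their documented threshold."""
    return [
        name for name, (floor, _rationale) in THRESHOLDS.items()
        if metrics.get(name, 0.0) < floor
    ]

failures = failing_metrics({"accuracy": 0.93, "disparate_impact_ratio": 0.76})
```

Storing the rationale next to each value means the "why was this threshold chosen" question is answered in the same place the gate is enforced.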

 

2.4 Deployment, Escalation and Ending

The final category in the AI lifecycle can be divided into the sub-modules deployment, monitoring, and maintenance. Once an AI model or system has been deployed, organizations must implement control mechanisms to monitor its actions and behavior and intervene where necessary. This may include detection and oversight mechanisms, as well as reporting and transparency obligations such as post-market monitoring or incident reporting.
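A minimal sketch of such a control mechanism is a monitor that escalates when live performance drifts past a documented tolerance. The tolerance value and escalation wording below are assumptions for illustration:

```python
ERROR_TOLERANCE = 0.05  # assumed tolerance from a hypothetical monitoring policy

def check_deployment(predictions: list[int], actuals: list[int]) -> str:
    """Compare live predictions against observed outcomes and decide on escalation."""
    errors = sum(p != a for p, a in zip(predictions, actuals))
    error_rate = errors / len(actuals)
    if error_rate > ERROR_TOLERANCE:
        return "escalate: open incident, notify AI risk owner"
    return "ok: continue routine monitoring"
```

In practice the escalation branch would feed the incident-reporting and post-market monitoring obligations mentioned above.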

In exceptional cases where an AI system does not function properly or becomes harmful, an organization must have a plan to terminate the AI model. An extreme high-risk scenario in this context is the phenomenon of “loss of control” in highly autonomous AI systems.[3] This situation arises when an AI system no longer follows human instructions and there is no immediate way to regain control. Such a scenario may occur if the AI uses tactics such as deception, manipulation, or self-preservation, or when human oversight fails due to excessive reliance on automated systems.

 

3. Principles of AI Governance

Finally, when establishing AI governance, it is important to consider several guiding principles.

 

3.1 Risk-Based Governance

First, it is important to align with the characterization of the AIA, specifically its risk-based approach. The higher the risk classification of an AI system, the more extensive the control measures that must be implemented. Similarly, the more complex an AI model is, the more robust and appropriate the governance measures should be. Establishing AI governance largely revolves around managing the various risks associated with developing, implementing, and using AI systems. The themes of the AIA already define much of the scope for risk analysis, which can therefore be described as a form of risk governance.[4]

 

3.2 Control-Oriented and Process-Oriented Governance

Some standards, frameworks, and regulations focus strongly on Governance-by-Audit, while others focus on Governance-by-Design.[5]

Governance-by-Audit emphasizes governance ensured through periodic control, review, and ex-post evaluation. Governance is embedded in internal or external control mechanisms (for example, AIA conformity assessments or ISO/IEC 42001 certification).

Governance-by-Design means that governance principles such as compliance, ethics, risk management, and transparency are structurally embedded in systems, processes, and architectures from the outset (for example ISO/IEC 42001 or the NIST AI Risk Management Framework).

For organizations establishing AI governance, it is advisable to combine both Governance-by-Audit and Governance-by-Design so that the framework remains both auditable and adaptable over time.[6]

 

3.3 No One-Size-Fits-All Approach

As noted earlier, there is no single framework for AI governance. It is therefore important to consider the characteristics of the organization itself. What type of organization is it? In which sector does it operate? Is it a highly regulated sector? Is the organization an SME? One of the most important questions is: what capabilities does the organization have to meet compliance obligations and effectively manage AI risks?

 

3.4 Preparing for Future Technological Developments

Technological developments in AI are continuously evolving. One example is AI models with a very high degree of autonomy. Organizations can anticipate and prepare for these developments by defining triggers or milestones. When an AI development reaches such a trigger or milestone, a specific governance sub-module can automatically be activated.[7] If the trigger or milestone is not reached, the sub-module can be deactivated.
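The trigger mechanism described above can be sketched as a simple mapping from capability milestones to governance sub-modules. The milestone names, the autonomy scale, and the sub-module labels are all assumptions made for illustration:

```python
# Hypothetical milestones mapped to the sub-modules they activate.
TRIGGERS = {
    "high_autonomy":     "autonomous-agent oversight sub-module",
    "self_modification": "frontier-safety review sub-module",
}

def active_submodules(system: dict) -> set[str]:
    """Activate sub-modules for each milestone the system has reached;
    sub-modules whose trigger is not met stay deactivated."""
    active = set()
    if system.get("autonomy_level", 0) >= 3:       # assumed milestone: level 3+
        active.add(TRIGGERS["high_autonomy"])
    if system.get("uses_self_modification", False):
        active.add(TRIGGERS["self_modification"])
    return active
```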

For high-risk AI systems, but also for future innovations, organizations may define fairness metrics such as equal opportunity or disparate impact ratio, including predefined thresholds and documentation of potential trade-offs. For organizations, it is important to clearly define what “fairness” means, why a particular definition was chosen, and in which context this definition applies.
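The two fairness metrics named above can be computed directly from selection outcomes. The sketch below uses hypothetical numbers for two groups; all data, group labels, and the 0.8 floor (the "four-fifths rule") are assumptions for illustration:

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of a group that received a positive decision."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 70% selected
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 40% selected

# Disparate impact ratio: disadvantaged group's selection rate divided by the
# advantaged group's; the four-fifths rule flags values below 0.8.
di_ratio = selection_rate(group_b) / selection_rate(group_a)  # 0.4 / 0.7

# Equal opportunity compares positive-decision rates among qualified candidates
# only (hypothetical decisions for qualified members of each group).
qualified_a = [1, 1, 1, 0, 1]
qualified_b = [1, 0, 1, 0, 1]
tpr_gap = selection_rate(qualified_a) - selection_rate(qualified_b)
```

Documenting which metric applies, and why, matters because the two definitions can disagree: a system can pass one while failing the other.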

This approach is particularly useful for AI experimentation within organizations, as it allows temporary governance mechanisms to be implemented that proactively identify risks for next-generation AI models and systems. It can also serve as a structured implementation plan and provide insight into an organization’s risk management approach for potential regulators. Finally, such temporary governance sub-modules may serve as a foundation for scaling an AI experiment into a fully operational AI application.

 

 

Future-proof AI requires structural governance. Start today by implementing a lifecycle-driven approach that manages risks while building trust. Contact CIRMCA.

 

 

[1] W. Finch & M. Butt, Gaps in AI-compliant complementary governance frameworks’ suitability (for low-capacity actors) and structural asymmetries (in the compliance ecosystem): A systematic review, Journal of Cybersecurity and Privacy 2025, 5, 101, p. 17.

[2] A. Batool, D. Zowghi & M. Bano, AI Governance: a systematic literature review, Springer 14 January 2025, p. 1.

[3] https://lordslibrary.parliament.uk/potential-future-risks-from-autonomous-ai-systems/

https://centerforhumanetechnology.substack.com/p/why-loss-of-control-is-not-science

[4] E. Hohma et al., Investigating accountability for Artificial Intelligence through risk governance: A workshop-based exploratory study, Frontiers in Psychology 25 January 2023, p. 3.

[5] W. Finch & M. Butt, Gaps in AI-compliant complementary governance frameworks’ suitability (for low-capacity actors) and structural asymmetries (in the compliance ecosystem): A systematic review, Journal of Cybersecurity and Privacy 2025, 5, 101, pp. 15–16.

https://www.forbes.com/councils/forbestechcouncil/2026/01/30/governance-by-design-how-to-engineer-trust-in-the-age-of-ai/

[6] W. Finch & M. Butt, Gaps in AI-compliant complementary governance frameworks’ suitability (for low-capacity actors) and structural asymmetries (in the compliance ecosystem): A systematic review, Journal of Cybersecurity and Privacy 2025, 5, 101, pp. 15–16.

[7] Y. Bengio et al., Managing extreme AI risks amid rapid progress, Science 24 May 2024, p. 4.
