
Architecting Trust: Building an AI Governance Program from Scratch - Part 2

Published: June 2025 | Topic: AI Governance Implementation

Part 2: Defining Principles and Policies – The AI Constitution

In Part 1 of this series, we laid the critical groundwork for building a proactive AI Governance program. We established the compelling "why" – mitigating unprecedented risks, navigating a fast-forming landscape of regulation and standards (such as ISO 42001 and the NIST AI RMF), building trust, fostering responsible innovation, and attracting talent – and took the initial steps of securing executive buy-in, assessing the AI landscape (including third-party dependencies), and forming a multi-disciplinary working group. Crucially, we committed to a proactive stance, acting ahead of events rather than reacting to them, and anchored the entire endeavour in the omni-directional truth: "Everyone has something right. No one has everything right."

Now, in Part 2, we embark on the most fundamental phase: translating that initial vision and assessment into a concrete set of ethical principles and actionable policies. This is the stage where you begin to draft your organisation's AI Constitution – a foundational document that will guide every AI-related decision and allow AI to scale responsibly. It is here that synthesising frameworks like ISO 42001:2023 (the AI Management System standard) and the NIST AI Risk Management Framework becomes indispensable, providing a structured approach to identifying and codifying these core principles and policies.

The Need for an AI Constitution: Principles as the Moral Compass

Before drafting a single policy, an organisation must articulate its core ethical principles for AI. These principles serve as the AI program's moral compass, guiding decision-making in ambiguous situations where rigid rules might not apply. They are the high-level statements of commitment that reflect an organisation's values in the context of AI.

This foundational step is where "Everyone has something right. No one has everything right" truly comes alive. No single functional team (e.g., legal, engineering, ethics) or external framework (e.g., ISO, NIST) possesses the entire truth about what constitutes responsible AI for your specific organisation. Each contributes a vital "something right":

  • Legal and compliance teams bring knowledge of regulatory obligations, liability, and contractual exposure.
  • Engineering and data science teams bring an understanding of what is technically feasible, measurable, and enforceable in practice.
  • Ethics and policy specialists bring sensitivity to values, societal impact, and stakeholder expectations.
  • External frameworks such as ISO 42001 and NIST AI RMF bring the distilled experience of global experts, supplying structure and completeness checks no single team could produce alone.

The AI Constitution, therefore, is not dictated from above or copied verbatim from an external source. It is co-created through a deliberate process of synthesis, discussion, and consensus-building among these diverse stakeholders. This proactive co-creation ensures that the principles are not just aspirational but are deeply understood, widely accepted, and practically applicable throughout the organisation, extending to its interactions with its supply chain.

Common ethical AI principles, often seen as cornerstones, include:

  • Fairness and non-discrimination – AI systems should avoid creating or reinforcing unjust bias.
  • Transparency and explainability – decisions should be understandable to those they affect.
  • Accountability – clear human ownership of AI outcomes, with a path for redress.
  • Privacy and data protection – respect for individuals' data throughout the AI lifecycle.
  • Safety, security, and robustness – systems should behave reliably, even under adverse conditions.
  • Human oversight – meaningful human control over consequential decisions.

Synthesising ISO 42001 and NIST AI RMF for Principle Definition

Leveraging established frameworks like ISO 42001 and NIST AI RMF is not about simply "checking boxes"; it's about drawing on the collective "something right" of global experts to inform your organisation's unique AI Constitution. These frameworks provide a structured vocabulary and a comprehensive scope that prevents overlooking critical ethical or risk dimensions.

ISO 42001:2023 - The Management System Perspective

ISO 42001:2023, "Information technology — Artificial intelligence — Management system," offers a management system approach to responsible AI. It provides a certifiable framework for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS). Its requirements inherently guide the definition of principles:

  • Context and leadership: top management must establish an AI policy that reflects the organisation's purpose and strategic direction – forcing values to be made explicit.
  • Planning: AI risk assessments and AI system impact assessments surface the concerns your principles must speak to.
  • Operation: principles must be carried through the AI system lifecycle, not merely stated.
  • Performance evaluation and improvement: adherence to principles must be measured, audited, and refined over time.
  • Annex A controls: concrete topics such as accountability, data for AI systems, and third-party relationships give principles something specific to anticipate.

ISO 42001 encourages a systematic approach to embedding ethics throughout the AI lifecycle, pushing organisations to formalise their commitment to principles into a manageable system. Its emphasis on organisational context acknowledges that each organisation "has something right" about its own operational nuances, while its common structure concedes that no single organisation has everything right – which is precisely why a shared standard helps.

NIST AI RMF - The Risk Management Perspective

The NIST AI Risk Management Framework (AI RMF 1.0) provides a flexible, comprehensive framework for managing AI risks. Its core functions – Govern, Map, Measure, Manage – and their underlying categories naturally prompt the articulation of principles and subsequent policy development:

  • Govern: cultivate a culture of risk management – the policies, accountability structures, and training your Constitution will mandate originate here.
  • Map: establish the context in which AI systems operate and identify their risks, prompting principles about where and how AI should (and should not) be used.
  • Measure: analyse, assess, and track identified risks, prompting principles about testing, metrics, and evidence.
  • Manage: prioritise and respond to risks, prompting principles about ownership of risk responses and continuous improvement.

NIST AI RMF offers a detailed taxonomy of AI risks and trustworthiness characteristics, providing a robust checklist for ensuring your principles cover key areas like explainability, privacy, robustness, and safety. Its emphasis on continuous iteration and learning reinforces that "No one has everything right" at the outset – but collective, ongoing effort steadily closes the gap.

Synthesis in Practice:

When defining principles, your working group can draw upon the high-level ethical values from ISO 42001's management system approach (e.g., top management's commitment to an AI policy) and cross-reference them with the risk categories and trustworthiness characteristics outlined in NIST AI RMF (e.g., its treatment of fairness and the management of harmful bias). This ensures principles are both broadly aspirational and grounded in concrete risk considerations. For instance, an ISO-inspired principle on "AI Accountability" could be elaborated using NIST's "Manage" function, detailing who is responsible for specific risk responses and thereby making the principle more actionable.
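
To make this synthesis concrete, the working group can maintain a traceability map from each principle to its ISO 42001 anchor, its NIST AI RMF function, and an accountable owner. Below is a minimal sketch in Python; the clause labels, risk-focus tags, and role names are illustrative assumptions, not quotations from either framework.

```python
# Minimal principle-traceability map: principle -> framework anchors -> owner.
# Labels are illustrative placeholders, not verbatim citations of
# ISO 42001 or the NIST AI RMF.
PRINCIPLE_MAP = {
    "AI Accountability": {
        "iso_42001_anchor": "leadership commitment to the AI policy",
        "nist_rmf_function": "Manage",
        "risk_focus": ["ownership of risk responses", "escalation paths"],
        "owner": "Head of AI Governance",  # hypothetical role
    },
    "Fairness": {
        "iso_42001_anchor": "planning / AI system impact assessment",
        "nist_rmf_function": "Measure",
        "risk_focus": ["disparate impact", "bias mitigation"],
        "owner": "Responsible AI Working Group",
    },
}

def unowned_principles(mapping: dict) -> list:
    """Flag principles with no accountable owner, so no commitment
    remains purely aspirational."""
    return [name for name, entry in mapping.items() if not entry.get("owner")]

for name, entry in PRINCIPLE_MAP.items():
    print(f"{name}: {entry['nist_rmf_function']} -> {entry['owner']}")
assert unowned_principles(PRINCIPLE_MAP) == []
```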

Translating Principles into Policies: The AI Constitution in Action

Once the core ethical principles are defined, the next crucial step is to translate them into actionable, enforceable policies, procedures, and controls. These are the "laws" of your AI Constitution, providing clear guidance for AI development, deployment, and management across the organisation and throughout its supply chain. These policies embody the "something right" of each functional area and ensure a cohesive approach, reminding us that "No one has everything right" if they act in isolation.

Here are key policy areas to consider proactively:

Organisational-Wide AI Use Policy:

  • Purpose: Establishes the overarching rules for AI adoption.
  • Content: Defines what constitutes AI within the organisation, prohibits malicious or unethical uses (e.g., illegal surveillance, autonomous weapons), mandates ethical review processes for all AI projects, and clarifies accountability structures.
  • Proactive Element: Explicitly states the organisation's commitment to responsible AI as a strategic differentiator, not just a compliance burden. It includes guidelines for reviewing third-party AI solutions before integration.
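
One way to keep such a policy enforceable rather than purely declarative is to encode its prohibited-use list and review gate as a simple intake check. The sketch below is hypothetical: the category names and the rule that third-party solutions trigger vendor diligence are illustrative assumptions, not a prescribed taxonomy.

```python
# Hypothetical intake check for a proposed AI use case against the
# organisation-wide AI use policy. Category names are illustrative.
PROHIBITED_USES = {"illegal_surveillance", "autonomous_weapons"}

def triage_use_case(use_case: dict) -> dict:
    """Return a triage decision for a proposed AI use case."""
    if use_case["category"] in PROHIBITED_USES:
        return {"decision": "rejected", "reason": "prohibited use"}
    # In this sketch, every project passes through ethical review;
    # third-party solutions additionally trigger vendor due diligence.
    steps = ["ethical_review"]
    if use_case.get("third_party", False):
        steps.append("vendor_due_diligence")
    return {"decision": "proceed_to_review", "required_steps": steps}

print(triage_use_case({"category": "customer_support", "third_party": True}))
```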

AI Risk Management and Assessment Policy (Leveraging NIST AI RMF):

  • Purpose: Details the systematic process for identifying, assessing, mitigating, and monitoring AI risks throughout its lifecycle.
  • Content: Mandates the use of a framework like NIST AI RMF's Govern, Map, Measure, Manage functions. Defines thresholds for risk acceptability, outlines the methodology for risk assessment (e.g., impact on human rights, safety, privacy), and requires regular risk reviews. This policy should also include requirements for third-party AI risk assessment, including due diligence on vendors' AI governance and security practices, plus contractual clauses covering liability and data use.
  • Proactive Element: Integrates risk assessment early in the AI project lifecycle (from ideation to procurement), making it a prerequisite for advancement, not an afterthought.
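
The acceptability thresholds this policy calls for can be expressed as a small scoring rubric that gates a project's advancement. A minimal sketch follows, assuming a simple likelihood-times-impact scale; the 1-5 scales and cut-off values are placeholders your working group would calibrate.

```python
# Sketch of a likelihood x impact risk score with policy thresholds.
# The 1-5 scales and cut-off values are illustrative placeholders.
THRESHOLDS = [(6, "acceptable"), (14, "mitigate_before_advancing")]

def risk_rating(likelihood: int, impact: int) -> str:
    """Map a 1-5 likelihood and 1-5 impact to a policy disposition."""
    score = likelihood * impact
    for limit, rating in THRESHOLDS:
        if score <= limit:
            return rating
    return "escalate_to_governance_board"

# Example: a privacy risk that is likely (4) and severe (4) escalates.
print(risk_rating(4, 4))  # -> escalate_to_governance_board
```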

Data Governance Policy for AI (Bias, Privacy, Security):

  • Purpose: Ensures that data used for AI development and deployment is high-quality, relevant, unbiased, privacy-protected, and secure.
  • Content: Specifies rules for data collection, storage, retention, anonymisation, and access control. Requires bias detection and mitigation strategies for training data. Outlines data lineage and provenance requirements. This policy is critical for managing data risks associated with third-party AI models, which may have been trained on vast, potentially unvetted datasets.
  • Proactive Element: Mandates "privacy-by-design" and "ethics-by-design" principles for data pipelines, ensuring that ethical considerations are built in from the ground up rather than bolted on later.
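
The bias-detection requirement can be anchored in concrete, auditable checks. One common example is the disparate impact ratio – the "four-fifths rule" used in fairness auditing – which compares favourable-outcome rates across groups. A minimal sketch follows; the 0.8 threshold is the conventional heuristic, not a legal determination.

```python
# Disparate impact ratio: each group's favourable-outcome rate divided
# by the most favoured group's rate. Ratios below ~0.8 (the
# conventional four-fifths heuristic) warrant review.
def disparate_impact(outcomes: dict) -> dict:
    """outcomes maps group -> (favourable_count, total_count)."""
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = disparate_impact({"group_a": (80, 100), "group_b": (55, 100)})
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, "flagged:", flagged)  # group_b at ~0.69 -> flagged
```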

AI Model Development and Validation Policy:

  • Purpose: Guides the technical development lifecycle of AI models to ensure robustness, reliability, and fairness.
  • Content: Defines requirements for model design documentation, testing methodologies (e.g., adversarial testing, fairness testing across demographic groups), validation processes, performance metrics, and version control. Specifies the need for human-in-the-loop design where appropriate.
  • Proactive Element: Requires rigorous, multi-faceted testing before deployment, anticipating potential failures and biases.
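
Fairness testing across demographic groups, as this policy requires, can be implemented as a per-group metric comparison inside the validation suite. A minimal sketch, assuming binary labels and predictions are already available; the 0.05 tolerance is an illustrative assumption.

```python
# Per-group accuracy comparison for a validation gate: the model fails
# if any group trails the best-performing group by more than a
# configured tolerance.
def group_accuracies(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    totals, correct = {}, {}
    for group, y_true, y_pred in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (y_true == y_pred)
    return {g: correct[g] / totals[g] for g in totals}

def passes_fairness_gate(records, tolerance=0.05):
    accs = group_accuracies(records)
    return max(accs.values()) - min(accs.values()) <= tolerance, accs

ok, accs = passes_fairness_gate([
    ("a", 1, 1), ("a", 0, 0), ("b", 1, 0), ("b", 0, 0),
])
print(ok, accs)  # a 0.5 accuracy gap between groups fails the gate
```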

AI Deployment, Monitoring, and Operations Policy (Aligned with ISO 42001 Operation):

  • Purpose: Ensures AI systems are deployed safely, monitored continuously for performance drift and unintended consequences, and managed responsibly in production.
  • Content: Defines procedures for phased rollouts, continuous monitoring for bias, performance decay, and security vulnerabilities. Establishes clear incident response plans for AI failures or ethical breaches. Requires human oversight mechanisms and clear escalation paths. This extends to monitoring the performance and ethical adherence of third-party AI systems integrated into your operations.
  • Proactive Element: Mandates "AI observability" – designing systems with built-in mechanisms for continuous ethical and performance monitoring.
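
The "AI observability" mandate can lean on standard drift statistics. One example is the Population Stability Index (PSI), which compares a production feature distribution against its training-time baseline; the sketch below uses the common rule-of-thumb alert threshold of 0.2, which is a convention rather than a fixed standard.

```python
import math

# Population Stability Index between a baseline (training-time) and a
# current (production) histogram over shared bins. PSI above ~0.2 is a
# common rule-of-thumb trigger for a drift investigation.
def psi(baseline, current, eps=1e-6):
    total_b, total_c = sum(baseline), sum(current)
    score = 0.0
    for b, c in zip(baseline, current):
        p = max(b / total_b, eps)
        q = max(c / total_c, eps)
        score += (q - p) * math.log(q / p)
    return score

baseline_counts = [50, 30, 20]  # feature histogram at training time
current_counts = [20, 30, 50]   # same bins observed in production
drift = psi(baseline_counts, current_counts)
print(f"PSI = {drift:.3f}", "-> investigate" if drift > 0.2 else "-> ok")
```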

Transparency, Explainability, and Communication Policy:

  • Purpose: Determines how and when AI systems' decisions are explained to users and stakeholders.
  • Content: Differentiates between various levels of explainability (e.g., technical explanation for engineers vs. simplified explanation for end-users). Sets guidelines for communicating AI capabilities, limitations, and potential impacts to affected parties. Defines disclosure requirements for AI-generated content.
  • Proactive Element: Requires that explainability considerations are part of the initial design phase for all relevant AI systems, making it a feature, not a retrofitted explanation.
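
The tiered-explainability requirement can be captured as an audience-to-artifact matrix that each system must populate before release. The sketch below is hypothetical; the audiences and artifact descriptions are illustrative assumptions.

```python
# Hypothetical audience -> explanation-artifact matrix that a system
# must satisfy before release under the transparency policy.
REQUIRED_EXPLANATIONS = {
    "end_user": "plain-language summary of capabilities and limits",
    "engineer": "technical model documentation and feature attributions",
    "auditor": "decision logic, data lineage, and test results",
}

def missing_explanations(provided: dict) -> list:
    """Return the audiences whose artifacts are absent or empty."""
    return [a for a in REQUIRED_EXPLANATIONS if not provided.get(a)]

release_package = {"end_user": "chatbot_disclosure.md",
                   "engineer": "model_card.md"}
print(missing_explanations(release_package))  # -> ['auditor']
```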

Third-Party AI and Supply Chain Policy:

  • Purpose: Specifically addresses the risks and governance requirements for AI solutions acquired from external vendors or integrated into the supply chain.
  • Content: Mandates AI-specific due diligence processes for vendors (assessing their AI governance, risk management, and ethical principles). Requires contractual agreements that specify ethical AI clauses, audit rights, data use limitations, and liability. Establishes ongoing monitoring of third-party AI performance and compliance.
  • Proactive Element: Integrates AI risk assessment into the standard vendor selection and procurement process before contracts are signed, ensuring that your organisation only onboards partners committed to similar responsible AI principles.
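
Vendor due diligence can likewise be made repeatable with a scored questionnaire applied during procurement. A minimal sketch follows; the questions, weights, and 0.8 pass mark are placeholders your procurement and governance teams would refine.

```python
# Sketch of a weighted vendor due-diligence score. The questions and
# weights are illustrative placeholders, not a standardised instrument.
QUESTIONS = {
    "documented_ai_governance_program": 3,
    "risk_management_aligned_to_a_framework": 3,
    "contractual_audit_rights": 2,
    "data_use_and_retention_limits": 2,
    "incident_notification_commitments": 2,
}

def vendor_score(answers: dict) -> float:
    """answers maps question -> True/False; returns a weighted fraction."""
    earned = sum(w for q, w in QUESTIONS.items() if answers.get(q))
    return earned / sum(QUESTIONS.values())

answers = {"documented_ai_governance_program": True,
           "contractual_audit_rights": True,
           "data_use_and_retention_limits": True}
score = vendor_score(answers)
print(f"score = {score:.2f}", "-> proceed" if score >= 0.8 else "-> remediate")
```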

Operationalising the "AI Constitution": Initial Policy Development Steps

The process of drafting these policies, driven by the principles of ISO 42001 and NIST AI RMF, requires a structured approach:

  • Inventory and prioritise: start from the AI landscape assessment in Part 1 and draft first the policies covering your highest-risk systems and third-party dependencies.
  • Map each requirement to its source: trace every policy clause back to a principle and to the relevant ISO 42001 requirement or NIST AI RMF function, so nothing is arbitrary.
  • Consult broadly: circulate drafts through the multi-disciplinary working group and the teams who will live with the policies – each has "something right" to contribute.
  • Pilot before mandating: trial draft policies on one or two live AI projects to expose impractical requirements early.
  • Secure formal approval: obtain executive sign-off so the policies carry organisational authority.
  • Communicate and train: policies only govern behaviour if people know and understand them; plan awareness and role-specific training.
  • Schedule review: set an explicit cadence for revisiting the policies as technology, regulation, and the organisation evolve.

Challenges in Defining the AI Constitution

Even with frameworks, defining the AI Constitution presents challenges:

  • Abstraction versus actionability: principles that are too lofty offer no guidance, while policies that are too prescriptive cannot keep pace with the technology.
  • Competing stakeholder views: legal caution, engineering pragmatism, and commercial urgency pull in different directions, and consensus-building takes deliberate effort.
  • A moving target: regulation and AI capability both evolve quickly, so the Constitution must be written – and maintained – as a living document.
  • Third-party opacity: vendors may not disclose training data, model design, or their own governance maturity, limiting what your policies can verify.
  • Right-sizing: smaller organisations must scale the programme to their resources rather than replicate an enterprise bureaucracy.

Conclusion to Part 2

Defining your organisation's AI Constitution – its core principles and actionable policies – is the bedrock of a truly responsible and scalable AI Governance program. By proactively synthesising insights from global frameworks like ISO 42001 and NIST AI RMF, and by consciously embracing the truth that "Everyone has something right. No one has everything right," organisations can craft a living document that guides ethical AI development, mitigates risks (including those from the supply chain), and builds enduring trust. This phase translates abstract intentions into concrete guidelines, providing the essential "laws" for your AI ecosystem.

In Part 3: Operationalising Governance – Risk Management and Implementation, we will move from policy definition to practical execution, delving into how to embed these principles and policies into daily operations, establish robust risk management processes, and ensure continuous monitoring and improvement of your AI governance framework.

AI Transparency Statement: Content developed through AI-assisted research, editing, and some enhancement. All analysis, frameworks, and insights reflect my professional expertise and judgment.