I am beginning the process of creating an AI governance program at my current place of employment. Since I am on a writing streak, I thought it would serve me well to write out a how-to while the momentum lasts. This will give me something concrete to refer to when my attention is divided. However, I hope you can find this helpful as well. This approach is written for US-based organisations and incorporates NIST guidance. It is a comprehensive approach! Depending on what you're doing with Artificial Intelligence, some of it may not apply to you. Don't stress; just skip the non-applicable parts and move on.
Part 1: Laying the Foundation – The Imperative and Initial Vision
The ascent of Artificial Intelligence from niche technology to pervasive societal force has brought with it an urgent, complex challenge: how to govern its development and deployment responsibly.
For any organisation—whether a technology giant, a public sector entity, a financial institution, a healthcare provider, or a small business—the question is no longer if AI governance is needed, but how to build a robust, effective program that ensures AI innovation proceeds hand-in-hand with safety, ethics, and accountability. This essay is the first in a series that will guide you through the process of building an AI Governance program from scratch, a journey that demands foresight, collaboration, and a profound appreciation for the multifaceted nature of truth.
Crucially, this series is premised on a proactive approach: the conscious choice to establish comprehensive AI governance ahead of time, rather than reactively responding to incidents, regulatory pressures, or unforeseen harms. This forward-thinking stance is not just an ethical imperative; it is a strategic necessity for long-term organisational resilience and trustworthiness.
Our core philosophical anchor throughout this series will be the statement from my philosophical series: "Everyone has something right. No one has everything right." This principle is not a mere platitude; it is the fundamental insight that must permeate every stage of AI governance construction. It compels us to embrace diverse perspectives, acknowledge the limitations of any single viewpoint, and build systems that actively synthesise partial truths into a more complete, resilient whole. In the context of establishing AI governance, this means recognising that legal teams, engineers, ethicists, business leaders, and even end-users—including those along the supply chain who integrate AI into their products—each hold a vital "something right" about AI's risks, opportunities, and societal impacts. Ignoring any one of these perspectives inevitably leads to a governance framework that is incomplete, ineffective, or even detrimental.
The Imperative for AI Governance: Beyond Reactive Compliance
Before embarking on the "how," it's crucial to firmly establish the "why." Why invest significant time and resources in building an AI Governance program now, proactively and ahead of the curve? The reasons extend far beyond merely reacting to emerging regulations; they touch upon strategic imperative, ethical responsibility, and long-term sustainability.
- Mitigating Unprecedented and Evolving Risks: AI introduces novel and complex risks that traditional risk management frameworks may not adequately address. These include:
- Bias and Discrimination: AI systems can inadvertently (or deliberately) perpetuate and amplify societal biases, leading to unfair outcomes.
- Lack of Transparency and Explainability: Many advanced AI models operate as "black boxes," hindering accountability, auditing, and problem diagnosis.
- Privacy Violations: Mismanagement, misuse, or security breaches of sensitive personal data processed by AI systems pose significant privacy risks, especially as AI integrates more deeply into daily operations and across interconnected systems like supply chains.
- Safety and Reliability Failures: In critical applications (e.g., autonomous vehicles, medical diagnostics), AI failures can lead to physical harm, financial loss, or systemic disruption, whether the AI is developed in-house or integrated from a third-party vendor.
- Misinformation and Manipulation: Generative AI capabilities can be weaponised to create persuasive deepfakes, propaganda, and disinformation at scale, threatening democratic processes and social cohesion.
- Ethical Dilemmas: AI forces difficult ethical trade-offs, such as balancing efficiency with human autonomy, or innovation with job displacement. Without governance, these dilemmas are often resolved implicitly, without ethical deliberation.
- Supply Chain and Third-Party AI Risks: As AI is increasingly embedded in commercial software and services, an organisation inherits the risks associated with its vendors' AI governance (or the lack of it). Proactive governance must extend to vetting and managing these external AI dependencies.
- Navigating a Rapidly Evolving, Increasingly Structured Regulatory Landscape: Governments worldwide are quickly moving from abstract ethical guidelines to concrete laws and regulations governing AI. Crucially, leading organisations and regulators are coalescing around structured, auditable frameworks like ISO 42001:2023 (AI Management System) and the NIST AI Risk Management Framework (AI RMF). Proactively building a governance program aligned with these comprehensive standards positions organisations to:
- Achieve compliance efficiently, avoiding costly reactive remediation and fines.
- Gain a competitive advantage by demonstrating a verifiable commitment to responsible AI.
- Participate in shaping the future regulatory environment rather than merely being subjected to it. Proactive adoption anticipates future legal "rights" that societies are attempting to codify, ensuring readiness.
- Building Enduring Trust and Reputation: In an era of increasing public scrutiny, an organisation's demonstrable commitment to responsible AI is a significant differentiator. Proactively establishing clear ethical guardrails and transparent processes builds deep trust with customers, investors, employees, and the broader public. Conversely, high-profile AI failures or ethical missteps, whether from internal AI or integrated third-party AI, can severely damage reputation and erode public confidence, leading to significant financial and brand repercussions. Trust, built through deliberate proactive steps, is paramount for successful adoption and scalable use of AI.
- Fostering Responsible Innovation with Guardrails: Governance is not meant to stifle innovation but to guide it responsibly. By establishing clear guardrails, ethical principles, and risk management processes upfront, organisations empower their developers and product teams to innovate within known, safe boundaries. This proactive approach reduces the likelihood of costly retrospective remediation, public backlash, or having to roll back features, channelling the "something right" of innovation towards beneficial, rather than harmful, outcomes.
- Attracting and Retaining Top Talent: Top AI talent increasingly seeks to work for organisations that prioritise ethical considerations and responsible development. A strong, proactively built AI Governance program signals a genuine commitment to these values, making an organisation more attractive to ethical engineers, researchers, and responsible business leaders.
Defining "AI Governance": What Are We Building?
At its core, an AI Governance program is a comprehensive framework of policies, processes, roles, and technologies designed to guide the responsible and ethical development, deployment, and management of AI systems throughout their lifecycle within an organisation. It is not a one-time project but an ongoing, adaptive discipline. It extends beyond internal AI development to encompass due diligence and oversight for AI integrated from third-party vendors and across the supply chain.
Key characteristics of an effective AI Governance program:
- Holistic: It covers the entire AI lifecycle, from ideation and data acquisition to model deployment, monitoring, and eventual decommissioning, and critically, extends to managing third-party AI dependencies (a small sketch of this lifecycle coverage follows this list).
- Multi-disciplinary: It requires collaboration across legal, ethics, engineering, product, data science, risk, security, procurement, and business units.
- Proactive: It aims to embed responsibility from the design phase and throughout the procurement process, rather than merely reacting to problems after they emerge.
- Adaptive: It must be flexible enough to evolve with technological advancements, new risks, and changing regulatory environments (e.g., updates to ISO 42001 or NIST AI RMF).
- Accountable: It establishes clear lines of responsibility for AI systems and their impacts, whether developed internally or sourced externally.
- Transparent (where appropriate): It aims to provide clarity on how AI systems function and how decisions are made, commensurate with context and risk.
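To make "Holistic" concrete, the sketch below models lifecycle-stage coverage for an AI system in Python. The stage breakdown and the names (LifecycleStage, ungoverned_stages) are my own illustrative choices, not terms defined by ISO 42001 or the NIST AI RMF.

```python
from enum import Enum, auto

class LifecycleStage(Enum):
    """Lifecycle stages a holistic program must cover (mirroring the list above)."""
    IDEATION = auto()
    DATA_ACQUISITION = auto()
    DEVELOPMENT = auto()
    DEPLOYMENT = auto()
    MONITORING = auto()
    DECOMMISSIONING = auto()

def ungoverned_stages(reviewed: set[LifecycleStage]) -> set[LifecycleStage]:
    """Return the lifecycle stages that still lack a governance checkpoint."""
    return set(LifecycleStage) - reviewed

# A system reviewed only at deployment leaves five stages without oversight.
gaps = ungoverned_stages({LifecycleStage.DEPLOYMENT})
print(sorted(stage.name for stage in gaps))
```

Run the same check across the whole inventory and "holistic" becomes a measurable property rather than an aspiration.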
Initial Steps: Laying the Groundwork for Proactive Governance
Building an AI Governance program from scratch is an exercise in organisational design and cultural change. The initial steps are crucial for setting the right tone and direction, embodying the "Everyone has something right. No one has everything right" principle from the very outset, and firmly establishing a proactive foundation.
- Secure Executive Buy-in and Sponsorship:
- Why: AI governance cannot succeed without strong leadership support. It impacts multiple departments, requires resource allocation, and may necessitate shifts in existing processes and vendor relationships. A proactive approach must start from the top.
- How: Develop a compelling business case outlining the risks of inaction (reputational damage, regulatory fines, loss of trust, supply chain vulnerabilities) and the immense benefits of proactive governance (competitive advantage, responsible innovation, enduring trust, market leadership). Identify a senior executive (e.g., CIO, CTO, General Counsel, Chief Risk Officer) to champion the initiative.
- "Everyone has something right": The executive leadership holds the "something right" of strategic vision and resource allocation; without their proactive buy-in, even the best ethical intentions from engineers won't scale or influence the supply chain effectively.
- Conduct a Comprehensive Initial AI Landscape Assessment:
- Why: You can't govern what you don't understand, especially when AI proliferates both internally and externally. A proactive approach requires a full map.
- How: Inventory all AI projects – those in production, pilot, or R&D within your organisation, and crucially, identify all third-party software and services that incorporate AI, particularly those critical to your supply chain or core operations. For each, gather basic information: purpose, data used, key stakeholders (internal and external), potential risks identified, and anticipated impact (a sketch of one such inventory record follows this step).
- "Everyone has something right": Business units, technical teams, and even procurement/vendor management teams hold the "something right" of actual AI use cases and operational realities. This assessment is about gathering their dispersed "rights" into a comprehensive, proactive picture, identifying AI wherever it touches your organisation.
- Identify Key Stakeholders and Form a Multi-Disciplinary Working Group:
- Why: AI governance is inherently interdisciplinary and extends beyond your organisational walls through your supply chain. Excluding key voices at the outset leads to blind spots and resistance later. A proactive stance means getting these diverse perspectives involved early.
- How: Bring together representatives from:
- Legal & Compliance: For regulatory interpretation (including ISO 42001/NIST alignment) and risk mitigation.
- Ethics (if available) / HR: For societal impact, fairness, and human-centric design.
- Data Science / Engineering: For technical feasibility, capabilities, and limitations.
- Product / Business Units: For use cases, customer impact, and strategic alignment.
- Risk Management / Internal Audit: For integrating AI risks into existing enterprise risk frameworks (including third-party risk).
- Security: For data and model security.
- Procurement / Vendor Management: Crucial for assessing and influencing AI governance in the supply chain.
- "Everyone has something right": Each of these groups possesses a critical piece of the puzzle. The working group's initial purpose is to collectively gather these "something rights" and begin to form a shared understanding that is holistic and proactive. The inverse, "No one has everything right," immediately becomes apparent as these diverse perspectives reveal gaps in individual knowledge and highlight the need for collective wisdom.
- Establish a Shared Vision and Scope, Rooted in Proactivity:
- Why: Without a common understanding of what AI governance means for your organisation and what it aims to achieve, efforts will be fragmented. This vision must explicitly state the commitment to proactive governance.
- How: Facilitate workshops with the initial working group to:
- Define the core purpose and goals of the AI Governance program, explicitly stating the commitment to being proactive and a leader in responsible AI.
- Identify key ethical principles that resonate with the organisation's values and mission.
- Determine the initial scope: Will it apply to all AI systems immediately? Only high-risk ones? What geographical regions? Crucially, how will it address AI embedded in third-party products and services? (A sketch of a recorded scope decision follows this step.)
- Agree on a preliminary roadmap and timelines for a proactive, phased implementation.
- "Everyone has something right": This collaborative process begins the journey of synthesising disparate "something rights" into a coherent, shared "right" for the organisation. It's a foundational exercise in collective intelligence, aiming to build a future, not merely react to the present.
Conclusion to Part 1
Laying the foundation for an AI Governance program is less about grand declarations and more about meticulous groundwork, always with a proactive mindset. It's about recognising the urgent imperative for responsible AI, understanding the multifaceted nature of AI risks (including those from the supply chain), and most critically, internalising the principle that "Everyone has something right. No one has everything right." By securing executive buy-in, conducting a comprehensive AI landscape assessment, assembling a diverse working group that includes cross-functional and supply chain perspectives, and establishing a clear shared vision for proactive governance, an organisation sets itself on a path to architecting trust rather than reacting to crisis. This initial phase, by fostering a culture of humility and collaboration, builds the very bedrock upon which robust, scalable, and genuinely responsible AI can be built.
In the next essay in this series, we will delve into Part 2: Defining Principles and Policies – The AI Constitution, where we will explore how to translate these initial insights into concrete ethical principles and actionable policies, drawing heavily from structured frameworks like ISO 42001:2023 and the NIST AI Risk Management Framework, to form the enduring "constitution" of your organisation's AI governance program.