Architecting Trust: Building an AI Governance Program from Scratch

Published: June 2025 | Topic: AI Governance Implementation

I am beginning the process of creating an AI governance program at my current place of employment. Since I am on a writing streak, I thought it would serve me well to write out a how-to while the momentum lasts. This will give me something concrete to refer to when my attention is divided, and I hope you can find it helpful as well. This approach is written for US-based organisations and draws on NIST guidance. It is a comprehensive approach! Depending on what you're doing with Artificial Intelligence, some things stated may not apply to you. Don't stress; just skip the non-applicable parts and move on.

Now, before we begin, breathe.

Part 1: Laying the Foundation – The Imperative and Initial Vision

The ascent of Artificial Intelligence from niche technology to pervasive societal force has brought with it an urgent, complex challenge: how to govern its development and deployment responsibly.

For any organisation—whether a technology giant, a public sector entity, a financial institution, a healthcare provider, or a small business—the question is no longer if AI governance is needed, but how to build a robust, effective program that ensures AI innovation proceeds hand-in-hand with safety, ethics, and accountability. This essay is the first in a series that will guide you through the process of building an AI Governance program from scratch, a journey that demands foresight, collaboration, and a profound appreciation for the multifaceted nature of truth.

Crucially, this series is premised on a proactive approach: the conscious choice to establish comprehensive AI governance ahead of time, rather than reactively responding to incidents, regulatory pressures, or unforeseen harms. This forward-thinking stance is not just an ethical imperative; it is a strategic necessity for long-term organisational resilience and trustworthiness.

Our core philosophical anchor throughout this series will be the statement from my philosophical series: "Everyone has something right. No one has everything right." This principle is not a mere platitude; it is the fundamental insight that must permeate every stage of AI governance construction. It compels us to embrace diverse perspectives, acknowledge the limitations of any single viewpoint, and build systems that actively synthesise partial truths into a more complete, resilient whole. In the context of establishing AI governance, this means recognising that legal teams, engineers, ethicists, business leaders, and even end-users—including those along the supply chain who integrate AI into their products—each hold a vital "something right" about AI's risks, opportunities, and societal impacts. Ignoring any one of these perspectives inevitably leads to a governance framework that is incomplete, ineffective, or even detrimental.

The Imperative for AI Governance: Beyond Reactive Compliance

Before embarking on the "how," it's crucial to firmly establish the "why." Why invest significant time and resources in building an AI Governance program now, proactively and ahead of the curve? The reasons extend far beyond merely reacting to emerging regulations; they touch upon strategic imperative, ethical responsibility, and long-term sustainability.

Defining "AI Governance": What Are We Building?

At its core, an AI Governance program is a comprehensive framework of policies, processes, roles, and technologies designed to guide the responsible and ethical development, deployment, and management of AI systems throughout their lifecycle within an organisation. It is not a one-time project but an ongoing, adaptive discipline. It extends beyond internal AI development to encompass due diligence and oversight for AI integrated from third-party vendors and across the supply chain.

Key characteristics of an effective AI Governance program:

- Comprehensive: it spans policies, processes, roles, and technologies, not a single document or tool.
- Lifecycle-wide: it covers AI systems from design and development through deployment, operation, and retirement (illustrated in the sketch below).
- Ongoing and adaptive: it is a continuing discipline, not a one-time project.
- Supply-chain aware: it extends due diligence and oversight to AI integrated from third-party vendors.
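To make the lifecycle-wide characteristic concrete, here is a minimal sketch of how oversight gates might be expressed as data: a map from lifecycle stages to the reviews each stage requires before a system may advance. The stage names, review names, and the gates_remaining helper are my own illustrative assumptions, not taken from ISO/IEC 42001 or the NIST AI RMF.

    # Illustrative only: map AI lifecycle stages to the governance reviews
    # required before a system can advance. Stage and review names are
    # assumptions, not drawn from any specific framework.
    LIFECYCLE_GATES = {
        "design":  ["use-case risk assessment", "data provenance review"],
        "build":   ["bias and fairness testing", "security review"],
        "deploy":  ["legal/compliance sign-off", "incident response plan"],
        "operate": ["performance monitoring", "periodic re-assessment"],
        "retire":  ["data disposal review"],
    }

    def gates_remaining(stage: str, completed: set[str]) -> list[str]:
        """Return the reviews still outstanding for a lifecycle stage."""
        return [gate for gate in LIFECYCLE_GATES[stage] if gate not in completed]

    # Example: a system entering deployment with only legal sign-off done.
    print(gates_remaining("deploy", {"legal/compliance sign-off"}))
    # -> ['incident response plan']

Expressing the gates as plain data rather than prose keeps the framework auditable and easy to extend as policies mature; the exact stages and reviews will vary by organisation.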

Initial Steps: Laying the Groundwork for Proactive Governance

Building an AI Governance program from scratch is an exercise in organisational design and cultural change. The initial steps are crucial for setting the right tone and direction, embodying the "Everyone has something right. No one has everything right" principle from the very outset, and firmly establishing a proactive foundation. In brief, those steps are:

- Secure executive buy-in, so the program carries a mandate rather than a wish list.
- Conduct a comprehensive AI landscape assessment: inventory where AI is already in use, including systems sourced from vendors and the wider supply chain (see the sketch after this list).
- Assemble a diverse working group that brings cross-functional and supply chain perspectives to the table.
- Establish a clear, shared vision for proactive governance.
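To make the landscape assessment step concrete, here is a minimal sketch of an inventory record for AI systems. The AISystem and RiskTier names, the fields, and the three-level tiering are hypothetical choices of mine for illustration; they loosely echo the inventory and risk-based thinking in the NIST AI RMF but are not drawn from it.

    # Illustrative only: a minimal inventory record for a landscape assessment.
    # Field names and the RiskTier scale are assumptions; adapt to your context.
    from dataclasses import dataclass, field
    from enum import IntEnum

    class RiskTier(IntEnum):
        LOW = 1       # e.g. internal productivity tooling
        MODERATE = 2  # e.g. customer-facing recommendations
        HIGH = 3      # e.g. decisions affecting rights, safety, or livelihoods

    @dataclass
    class AISystem:
        name: str
        business_owner: str   # the accountable person, not just the builder
        use_case: str
        third_party: bool     # True if sourced from a vendor or the supply chain
        risk_tier: RiskTier
        stakeholders: list[str] = field(default_factory=list)

    inventory = [
        AISystem("resume-screener", "Head of Talent",
                 "Rank inbound job applications", third_party=True,
                 risk_tier=RiskTier.HIGH, stakeholders=["HR", "Legal", "Candidates"]),
        AISystem("ticket-triage", "Support Lead",
                 "Route support tickets by topic", third_party=False,
                 risk_tier=RiskTier.LOW, stakeholders=["Support", "Engineering"]),
    ]

    # A landscape assessment usually surfaces the highest-risk and
    # vendor-supplied systems first.
    for system in sorted(inventory, key=lambda s: s.risk_tier, reverse=True):
        print(f"{system.name}: {system.risk_tier.name}, third-party={system.third_party}")

Even a crude inventory like this gives the working group a shared picture of where AI already lives in the organisation, which is the raw material for everything that follows.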

Conclusion to Part 1

Laying the foundation for an AI Governance program is less about grand declarations and more about meticulous groundwork, always with a proactive mindset. It's about recognising the urgent imperative for responsible AI, understanding the multifaceted nature of AI risks (including those from the supply chain), and most critically, internalising the principle that "Everyone has something right. No one has everything right." By securing executive buy-in, conducting a comprehensive AI landscape assessment, assembling a diverse working group that includes cross-functional and supply chain perspectives, and establishing a clear shared vision for proactive governance, an organisation sets itself on a path to architecting trust rather than reacting to crisis. This initial phase, by fostering a culture of humility and collaboration, lays the very bedrock upon which robust, scalable, and genuinely responsible AI can be built.

In the next essay in this series, we will delve into Part 2: Defining Principles and Policies – The AI Constitution, where we will explore how to translate these initial insights into concrete ethical principles and actionable policies, drawing heavily from structured frameworks like ISO/IEC 42001:2023 and the NIST AI Risk Management Framework, to form the enduring "constitution" of your organisation's AI governance program.

AI Transparency Statement: Content developed through AI-assisted research, editing, and some enhancement. All analysis, frameworks, and insights reflect my professional expertise and judgment.