This is essay three in a five-part series on the philosophical progression of AI governance.
In our previous examination, we concluded that the elegant, rigid framework of Asimov's Three Laws has found its logical successor in the flexible, learnable framework of Constitutional AI. By training models on a set of human-authored principles, we have seemingly resolved the core challenge of AI safety, evolving from hard-coded commands to internalized values. This technological solution, however, gives rise to a profoundly political problem, one that is far more complex and fraught with peril than any logical paradox Asimov imagined. If the soul of these new machines will be a constitution, the defining question of the 21st century becomes: who gets to be the framer?
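For readers who want the mechanics beneath the metaphor, the constitutional approach is at its core a critique-and-revise loop: the model drafts a response, evaluates that draft against a written principle, and rewrites it. The sketch below is a minimal illustration, assuming a hypothetical `model` callable and two invented principles; the published methods additionally use the revised outputs as fine-tuning data, and no vendor's actual API is depicted here.

```python
# Minimal sketch of a constitutional critique-and-revise loop.
# `model` is a hypothetical text-generation callable, and the two
# principles below are invented illustrations, not a real constitution.

CONSTITUTION = [
    "Choose the response that is least likely to encourage violence.",
    "Choose the response that most respects personal privacy.",
]

def constitutional_revision(model, prompt: str) -> str:
    """Draft a response, then critique and revise it once per principle."""
    draft = model(prompt)
    for principle in CONSTITUTION:
        critique = model(
            f"Critique this response against the principle: {principle}\n\n"
            f"Response: {draft}"
        )
        draft = model(
            f"Rewrite the response to address the critique.\n\n"
            f"Critique: {critique}\n\nOriginal response: {draft}"
        )
    # In the published method, the revised drafts become training data,
    # so the values end up internalized rather than bolted on at runtime.
    return draft
```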
We are, whether we recognize it or not, in the midst of a global Constitutional Convention for Artificial Intelligence. It is a decentralized, undeclared, and fiercely contested battle of ideologies, waged not in a single assembly hall but in corporate boardrooms, government regulatory bodies, and research labs across the planet. The stakes are monumental. The values encoded into these systems will shape global culture, define the boundaries of individual freedom, influence economic outcomes, and mediate our very perception of reality. To control the constitution is to wield a new and unprecedented form of power.
This is not merely a debate over technical standards; it is a struggle to codify a vision of humanity. The central conflict of our age will not be fought over silicon chips, but over the philosophical principles that guide them. Examining the key factions in this struggle—the founding corporations of Silicon Valley, the regulatory bloc of the European Union, the market-driven United States, and the sovereign-minded state of China—reveals a deep global schism, a fundamental disagreement on what constitutes a "good" and "safe" artificial intelligence. The outcome of this contest will determine whether AI becomes a tool for universal empowerment or the most effective instrument of ideological control ever conceived.
Part I: The Corporate Founders and the De Facto Constitution
Before governments could formulate a coherent response to the Cambrian explosion of generative AI, a group of private companies in Northern California had already drafted the first governing documents. Corporations like Google, Meta, and particularly Anthropic, the pioneer of the constitutional approach, became the de facto framers of AI ethics. Their initial constitutions were not born from a grand philosophical debate, but from a pragmatic and defensive blend of corporate values, legal necessities, and engineering realities. These foundational documents are a tapestry woven from three distinct threads: public relations, legal liability, and a specific, localized worldview.
The first thread is the set of publicly stated AI principles. These are the polished, aspirational pillars displayed on corporate websites, promising that AI will be socially beneficial, avoid creating unfair bias, be accountable to people, and uphold high standards of scientific excellence. These principles are the equivalent of a preamble—noble in intent, broad in scope, and designed to reassure the public and preempt regulatory scrutiny. They are crucial for building trust, but their vagueness often renders them operationally inert. The promise to "avoid unfair bias" is a laudable goal, but it provides little concrete guidance to an engineer faced with a dataset that reflects centuries of societal inequality.
The second, more rigid thread is legal compliance and Terms of Service. This part of the constitution is written not by philosophers, but by lawyers. Its primary function is not to promote human flourishing but to mitigate corporate risk. It proscribes the generation of content that is illegal, infringes copyright, or constitutes harassment, hate speech, or another legally actionable offense. This framework is essential for any product operating at scale, but it is fundamentally reactive and defensive. It defines what an AI should not do to avoid a lawsuit, not what it should do to be a positive force in the world.
The third and most influential thread is the specific, often unstated, cultural worldview of its creators. The early AI constitutions were written in English, by teams based primarily in the United States, steeped in the liberal, individualistic, and market-oriented values of Silicon Valley. The resulting AI models, therefore, often promote a specific set of norms under the guise of neutrality. They tend to be highly sensitive to Western-centric cultural issues while potentially being naive or dismissive of norms and values from other parts of the world. An AI trained on this model might skillfully navigate a complex query about gender identity in North America but offer a simplistic or culturally inappropriate response to a question about caste systems in India or familial obligations in East Asia. This is not a malicious act of cultural imperialism, but an inevitable byproduct of a homogenous founding group. The attempt to create a universal, "view from nowhere" AI inadvertently baked in the "view from Northern California," establishing a powerful and often invisible baseline for what is considered a "safe" or "appropriate" response.
This corporate-led model of governance, while expedient, suffers from a profound lack of democratic legitimacy. Key decisions about what constitutes harmful speech, political fairness, or historical truth are made in private by a small, unelected group of technologists and executives whose primary fiduciary duty is to their shareholders, not the global public. They are effectively performing a function of public governance—setting the boundaries of acceptable discourse for a global communication tool—without any of the accountability, transparency, or public participation that such a role demands. The corporate founders have given AI its first constitution, but it is a document written by the few, for the protection of the few, that governs the many.
Part II: The Nation-State Blocs and the Geopolitics of Values
As the strategic importance of AI has become undeniable, nation-states have begun to push back against the de facto governance of corporations, asserting their sovereign right to regulate this new domain. This has led to the emergence of distinct ideological blocs, each seeking to impose its own constitutional vision. This geopolitical contest has fractured the dream of a single, universal AI, replacing it with a "splinternet" of values.
The European Union: The Rights-Based Regulators
The EU has taken the most assertive and comprehensive approach to AI governance. Its landmark AI Act is a bold attempt to create a legal, rights-based constitution for any AI system operating within its vast market. The EU's values are explicit and deeply rooted in its post-war political tradition: the primacy of fundamental human rights, individual dignity, data privacy (an extension of the GDPR philosophy), and robust consumer protection. The AI Act takes a risk-based approach, categorizing AI systems and imposing strict obligations on those deemed "high-risk," such as those used in law enforcement, critical infrastructure, or employment.
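The Act's risk-based logic is concrete enough to sketch. The Python below is an illustrative simplification, not a legal analysis: the four tiers mirror the Act's public structure (unacceptable, high, limited, minimal), but the use-case mapping is a hypothetical lookup table, whereas the real Act defines these categories in legal text and annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative simplification of the AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency duties"
    MINIMAL = "largely unregulated"

# Hypothetical mapping from use case to tier, for illustration only.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "employment_screening": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Look up a use case's tier, defaulting conservatively to HIGH."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return f"{use_case}: {tier.name} ({tier.value})"
```

The point of the sketch is the shape of the regime: obligations attach to the deployment context, not to the model itself.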
The EU's constitution is one of legal mandate and enforced transparency. It demands conformity assessments, clear documentation, and human oversight. It seeks to subordinate the logic of the market to the rights of the citizen. The goal is to force corporations, regardless of their country of origin, to redesign their systems to comply with European norms. This approach is powerful and has the potential to set a global standard—the "Brussels Effect"—as companies may find it easier to adopt the strictest regulations for all their products rather than maintaining separate versions. However, the EU's model faces criticism for being slow, bureaucratic, and potentially innovation-stifling. A constitution written in the language of legal statutes may be too rigid to adapt to the rapid pace of technological change, and its focus on preventing harm may inadvertently chill the development of beneficial applications.
The United States: The Market-Driven Innovators
The United States has adopted a starkly different, more hands-off approach. Rather than a single, overarching law, the US has favored a combination of executive orders, sector-specific guidance, and voluntary frameworks like the AI Risk Management Framework developed by the National Institute of Standards and Technology (NIST). The American "constitution" for AI is guided by a different set of priorities: fostering innovation, maintaining a competitive economic edge over rivals like China, and ensuring national security. It prioritizes the freedom of private companies to experiment and iterate, with the government acting as a partner and promoter rather than a strict enforcer.
This market-driven model is agile and has supercharged American dominance in the AI field. It trusts that the "marketplace of ideas" will ultimately produce the best and safest models as companies compete for consumer trust. However, it carries significant risks. It is vulnerable to regulatory capture, where corporate interests heavily influence the voluntary standards meant to govern them. Without strong legal guardrails, there is a danger that considerations of public welfare, equity, and civil rights could be subordinated to the pursuit of profit. The US model bets that what is good for its tech giants will ultimately be good for the country and the world, a proposition that is far from guaranteed.
China: The State-Centric Sovereigns
China represents the third major pole in this geopolitical contest, and its vision is the most radically different. China's approach to AI governance is an extension of its broader political philosophy of digital sovereignty. The state is the ultimate author and arbiter of the AI constitution. Regulations are explicit, top-down, and designed to ensure that AI systems align with "core socialist values" and serve the goals of national strategy and social stability.
Chinese regulations require AI service providers to register with the government, undergo security reviews, and implement robust content filtering to eliminate any information that runs counter to state ideology. The AI constitution here is one of control. The technology is not seen as a neutral tool for individual expression but as a critical piece of infrastructure for governance, surveillance, and projecting state power. While this approach can be incredibly effective at mitigating certain risks, like misinformation campaigns or social unrest, it does so at the expense of values that Western democracies hold sacred: freedom of speech, individual autonomy, and the right to dissent. China is building a technically advanced but ideologically cordoned-off AI ecosystem, creating models that will reflect a completely different worldview and set of factual priors.
Part III: The Philosophical Battlefield - Universalism vs. Relativism
This geopolitical struggle is a surface manifestation of a much deeper, older philosophical conflict: the tension between universalism and cultural relativism. The central question for the framers of AI constitutions is whether there exists a core set of values applicable to all of humanity, or if ethics are fundamentally local and context-dependent.
The case for a universal constitution is compelling and urgent. Proponents argue that in order to prevent a "race to the bottom"—where AI development is driven by the most permissive and least ethical standards—a baseline of universal principles is essential. The most logical source for such a baseline is the UN Universal Declaration of Human Rights (UDHR). Adopted in 1948, the UDHR represents a rare global consensus on the fundamental rights and freedoms inherent to all people, including the right to life, freedom from torture, freedom of expression, and the right to privacy.
Embedding the principles of the UDHR as a non-negotiable "Layer 1" of every AI's constitution would provide a powerful safeguard. It would mean that no matter where an AI is built or deployed, it would be fundamentally incapable of promoting genocide, creating instruments of torture, or facilitating slavery. An AI with a UDHR-based constitution would be a bulwark for human dignity. This universalist approach is championed by organizations like UNESCO, which advocates for a global ethical framework to ensure AI development serves humanity as a whole, not just the interests of a particular nation or corporation.
However, the opposing argument for cultural relativism holds significant weight. Critics of universalism argue that imposing a single, Western-conceived document like the UDHR on the entire world is a form of ethical imperialism. Values, they contend, are not abstract and universal; they are lived, practiced, and understood within specific cultural contexts. A principle like "freedom of expression" is interpreted very differently in the US, Germany (where certain forms of hate speech are illegal), and Singapore. A single AI constitution cannot possibly do justice to this diversity.
This leads to the concept of AI federalism or polycentric governance, where AI models are designed to adapt their behavior based on local laws, norms, and values. An AI operating in Saudi Arabia would be more conservative in its responses on social issues, while one operating in the Netherlands would be more liberal. This approach seems practical and respectful of cultural diversity. Yet, it is fraught with danger. Where does cultural adaptation end and the violation of fundamental rights begin? If a local constitution permits discrimination against women or LGBTQ+ individuals, should the AI comply? This path risks creating AIs that become tools for enforcing oppressive social norms, sanitizing history, and reinforcing parochialism.
The ultimate challenge lies in finding a synthesis. Could a layered constitutional model work: a mandatory, universal base layer derived from the UDHR, coupled with a flexible "regional" layer that allows cultural adaptation on non-fundamental issues? While theoretically appealing, the technical and political hurdles are immense. Who decides what is a "fundamental" right versus a "cultural" norm? The line between the two is precisely where the most intense political and ethical battles are fought. An attempt to implement such a system would inevitably place its creators in the position of being the ultimate global arbiters of morality, a role no single entity is qualified to hold.
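To see both the appeal and the trap of the layered model, it helps to reduce it to code. Below is a minimal sketch of a two-layer constitution resolver, with a non-overridable universal base and a regional overlay; every rule name, rule text, and region key is a hypothetical placeholder rather than proposed content.

```python
# Sketch of a layered constitution: a universal base that can never be
# overridden, plus regional rules for non-fundamental issues.
# All rule names, texts, and region keys are hypothetical placeholders.

UNIVERSAL_LAYER = {
    "no_facilitating_torture": "Refuse any assistance with torture.",
    "no_promoting_genocide": "Refuse content that promotes genocide.",
}

REGIONAL_LAYERS = {
    "EU": {"privacy_default": "Apply strict data-minimization norms."},
    "US": {"speech_default": "Treat lawful speech permissively."},
}

def effective_constitution(region: str) -> dict[str, str]:
    """Merge layers; regional rules may add to but never override the base."""
    merged = dict(REGIONAL_LAYERS.get(region, {}))
    merged.update(UNIVERSAL_LAYER)  # the universal layer wins any collision
    return merged
```

Even in this toy form, the hard question is plainly visible: everything turns on which rules are placed in the universal dictionary, and that is a political judgment no amount of code can make.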
Conclusion: We Are All Delegates
We have moved beyond the elegant simplicity of Asimov's fiction into the messy, high-stakes reality of global politics. The Constitutional Convention of AI is in full session, and the stark truth is that there is no neutral ground. Every decision about a model's training data, every line of code in its safety filter, and every principle included or excluded from its constitution is a political act, an endorsement of one worldview over another. The AI systems being built today are not blank slates; they are artifacts of our own divided world, mirrors reflecting our deepest ideological conflicts.
The struggle between the corporate founders, the regulatory blocs, and the sovereign states is not merely a competition for market share; it is a battle to define the operating system of our collective future. The outcome will determine whether we are governed by a patchwork of corporate terms of service, a rigid legal code, a state-controlled ideological mandate, or some combination thereof. It will define the extent of our personal freedoms, the nature of our access to information, and the power dynamics between the individual, the corporation, and the state.
Asimov's genius was in showing that the greatest danger of a powerful intelligence is not malice, but a lack of wisdom. His robots malfunctioned when their rigid laws failed to comprehend the nuance of human morality. Our challenge is a mirror image of his: we must agree on the nuances of our own morality before we encode it into our machines. This cannot be a process left to a handful of engineers in Palo Alto, bureaucrats in Brussels, or party officials in Beijing. It requires a radical commitment to transparency, public debate, and democratic inclusion. The drafting of these constitutions must be opened to the world. We are all delegates to this convention, and the document we are collectively writing will be our legacy. We must choose our words carefully.