Rewiring Multilateralism: Governing AI in a Divided World
A new form of diplomacy is emerging. It is not forged in the fires of war, nor negotiated behind closed doors like traditional treaties. It is not just about summits or sanctions, tariffs or truces. It is shaped in codebases, compute clusters, and cascading regulatory frameworks. It is AI diplomacy—and it is quietly rewriting the rules of global influence and transforming how nations project power, negotiate norms, and engage with each other.
Yet amid this fast-moving global realignment, one thing is conspicuously absent: the institutions designed to keep power accountable and innovation inclusive. What’s missing is not just a rules-based order—but a shared infrastructure for responsible innovation, ethical alignment, and collaborative foresight. The scaffolding of international cooperation. The muscle memory of multilateralism—rewired for the age of AI.
And without it, we risk not just a race for technological dominance—but an AI arms race, unbound by shared norms, ethical guardrails, or global oversight.
The Great AI Powers and the Governance Gap
The United States is treating AI as a cornerstone of national strategy—backing massive public and private investments, shaping regulatory conversations, and forging bilateral tech partnerships with key allies, including GCC countries. China is embedding AI deeply into its foreign policy playbook, leveraging its global supply chain dominance and advancing state-led innovation across sectors from surveillance to semiconductors. The European Union is pioneering the world’s first comprehensive AI legal framework through the AI Act, defining how algorithms interact with rights, risks, and regulation.

The United Kingdom is positioning itself as a global AI hub, encouraging entrepreneurship and attracting top talent through flexible visa schemes and pro-innovation policies. India is advancing its “AI for All” initiative as part of its National AI Strategy, targeting the training of 1 million skilled AI workers by 2026 to democratize access and foster inclusive growth across sectors like agriculture and healthcare. The United Arab Emirates is investing heavily in AI infrastructure—including the Stargate UAE data center campus, developed with G42 and international partners—to position itself as a Middle East hub for advanced computing and innovation. Meanwhile, the United Nations, though slower to act, has taken symbolic steps—establishing high-level advisory bodies and adopting landmark resolutions to foster multilateral dialogue and ethical norms for AI.
But beyond these national and regional moves—where is the collective framework?
Where is the Convention for AI systems used in warfare?
Where is the Agreement for algorithmic responsibility?
Where is the peacekeeping mandate for AI-driven cyber conflict?
Where is the global framework for safeguarding democratic processes in the age of synthetic media?
And where is the strategic vision that moves beyond managing risks and seizing opportunities—to ensure long-term equity, stability, and accountability?
None of it yet exists at a global, binding level.
Instead, the international community faces a chaotic proliferation of governance-by-guideline: G7 communiqués, OECD principles, industry codes of conduct, and a ballooning number of voluntary AI ethics declarations—none of which are enforceable. These are not enough.
In 2023, global military investment in AI reached approximately $9.2 billion—yet no binding international law regulates its use in combat zones. AI is projected to contribute $15.7 trillion to global GDP by 2030, but 70% of that value (roughly $11 trillion) is expected to concentrate in the U.S. and China, deepening global inequities. At the same time, tightened U.S. export controls on advanced semiconductors, designed to slow China’s progress, have restricted access well beyond their target, effectively locking countries like India and Nigeria out of critical compute infrastructure.
Emergency responders are increasingly forced to contend with AI-generated misinformation during crises. During Hurricanes Helene and Milton in 2024, false AI-created images and rumors about government aid spread widely, undermining public trust and delaying response efforts.
Meanwhile, cybersecurity threats are escalating. Generative AI has enabled highly sophisticated phishing campaigns, contributing to a reported 1,000% increase in phishing attacks between 2022 and 2024. In the second half of 2024 alone, phishing emails surged by 202%, and the average cost of a breach now exceeds $4.8 million.
Addressing Resistance to Multilateralism
Efforts to establish binding global frameworks for AI governance often meet resistance—and not without reason. Major powers such as the United States and China continue to prioritize national security, economic competitiveness, and technological sovereignty. In this context, multilateral agreements are often viewed not as safeguards but as potential constraints—limiting the freedom to innovate, regulate, and compete on their own terms.
This resistance is not merely strategic; it is structural. The global AI landscape is increasingly defined by asymmetric capabilities and trust deficits. The United States, for example, has built deep alliances across innovation ecosystems and maintains control over key components of the AI pipeline—from cloud infrastructure to foundational models. China, in turn, is advancing a self-reliant AI agenda anchored in industrial policy and Global South partnerships. In both cases, AI governance is seen as an extension of geopolitical positioning, not a neutral policy domain.
Skeptics also point to past failures of multilateralism as cautionary tales. The World Trade Organization’s stalled Doha Development Round—launched in 2001 and paralyzed for over two decades—remains a stark example of how divergent economic priorities and power imbalances can derail consensus. The collapse of the Multilateral Agreement on Investment (MAI) in the late 1990s, and the widespread rejection of the Anti-Counterfeiting Trade Agreement (ACTA) in 2012, further illustrate how global frameworks can falter when perceived to favor corporate or dominant-state interests at the expense of sovereignty, equity, or digital rights. These precedents fuel concern that AI multilateralism could similarly devolve into gridlock or exclusion, undermining legitimacy and trust.
Yet even in times of competition, history offers pragmatic examples of coordination. During the Cold War, the U.S. and the Soviet Union—despite profound mistrust—negotiated arms control treaties not out of goodwill, but because the stakes became unmanageable. These agreements preserved core national interests while preventing catastrophic escalation.
AI presents a similar challenge: a rapidly evolving technology whose misuse could cross borders invisibly and instantly. While the dangers differ from nuclear risk, the logic of mutual restraint remains relevant. From algorithmic escalation in cyber operations to synthetic media manipulating public narratives, the threats are diffuse—but the fallout, potentially global.
To break this impasse, we need governance models that do not demand ideological alignment or full integration. Instead, they must offer flexible, incentive-based coordination: frameworks that uphold sovereignty while building accountability through transparency, shared safety standards, and practical benefits—such as access to compute, security cooperation, or investment in global public goods.
Multilateralism in the age of AI must be designed for complexity. It must recognize that trust can be built without unanimity, that risk-sharing can coexist with rivalry, and that cooperation—however limited—is better than fragmentation by default.
Governance Isn’t Too Slow—We’ve Just Treated It Like an Afterthought
A common argument in policy circles is that AI is advancing too rapidly to be governed. With AI performance on key benchmarks doubling every 6 to 12 months, and global spending projected to surpass $640 billion in 2025, many fear that regulation can only trail behind—always reactive, never ready.
But this isn’t a speed problem. It’s a political one.
We’ve successfully governed high-stakes, fast-moving domains before. Nuclear proliferation was curbed through Cold War-era treaties, even as delivery systems rapidly evolved. Chemical weapons were banned outright in the 1990s despite a long history of battlefield use. Global finance—volatile, digitized, and systemically risky—was reshaped through iterative frameworks like Basel III. And climate governance, while imperfect, yielded the Paris Agreement by accepting the need for adaptive, non-linear commitments.
So why should AI be the exception?
The real issue is that governance hasn’t kept pace—not because it can’t, but because it hasn’t been prioritized. Every delay normalizes a future in which powerful AI systems are released without scrutiny, oversight is patched in later, and global norms are written not by consensus, but by default—often by the first-movers.
This is already happening. In 2024, the UN General Assembly adopted its first resolution on “safe, secure, and trustworthy AI”—a milestone, but a non-binding one. Meanwhile, models like Grok 4, with real-time multi-modal reasoning, and Midjourney V7, capable of hyper-realistic image generation, are redefining public interfaces and information integrity—faster than institutions can respond.
We don’t need to slow innovation. We need to speed up governance—and redesign it to be adaptive. Governance must be built into innovation from the start, not bolted on afterward: if innovation runs on a timeline, governance must too.
Innovation Outpaces Regulation: A Timeline
In 2016, AlphaGo defeated world Go champion Lee Sedol, sparking early ethical debates—yet no global guardrails were in place.
In 2020, GPT-3 launched, showcasing unprecedented language capabilities, while governance remained limited to soft guidance through OECD principles, with no enforcement mechanisms.
In 2023, ChatGPT scaled rapidly across the public domain, leading to a surge in voluntary ethics pledges—but regulation still lagged far behind.
2024 marked a symbolic milestone as the UN adopted its first global AI resolution, yet it remained non-binding, offering vision without teeth.
In 2025, Grok 4 introduced real-time, multi-modal reasoning, triggering urgent concerns about safety and privacy—still largely unregulated. That same year, Midjourney V7 normalized photo-realistic generative visuals, fueling a wave of deepfakes while policies remained debated and enforcement limited.
This timeline isn’t a crisis alert. It’s a blueprint. We’ve governed complexity before—and we can again. But only if we stop treating governance as a lagging response and start seeing it as a co-evolving force—critical to global trust, safety, and innovation alike.
Multilateralism doesn’t need to slow down AI. It needs to catch up—on purpose, and by design.
Rewiring Multilateralism for the AI Era
We don’t need to reinvent diplomacy—we need to rewire it for an AI-driven world. The foundations exist: the international community has long managed complex, high-stakes challenges—from nuclear arms control to climate governance. But AI poses a distinct mix of speed, dual-use complexity, and deep global asymmetry.
Without updated multilateral infrastructure, governance efforts will remain fragmented, aspirational, and skewed toward the interests of a few powerful actors. The stakes are clear: as of 2024, African nations access less than 2% of global GPU capacity, and the Global South holds just 15% of all AI patents. Meanwhile, emerging AI regulation across the world is outpacing global alignment, with interoperability and enforcement still unresolved.
What’s needed now is not just vision—but implementation. The following proposals build on proven multilateral mechanisms while adapting to AI’s disruptive potential:
1. A Global AI Framework Convention
Modeled after the UN Framework Convention on Climate Change (UNFCCC), this treaty would establish baseline global principles for ethical AI development, safety protocols, algorithmic transparency, and limits on military applications. Drawing from models like the EU AI Act and inspired by the Paris Agreement’s Nationally Determined Contributions, it could include peer reviews and iterative national strategies. To gain traction, the framework must balance innovation with accountability and allow flexible pathways for lower-income states to participate. Enforcement remains a key challenge—especially as 2025 marks the sharpest divergence yet in national AI laws—but such a convention would provide the institutional spine to harmonize ethical standards globally.
2. An AI Peace and Security Council
This proposed multilateral body—either housed within the UN or linked to regional blocs—would monitor risks related to autonomous weapons, disinformation warfare, and cyber escalation fueled by AI. Like the UN Security Council, it would issue early warnings, coordinate crisis responses, and recommend moratoriums when necessary. With military AI investments projected to more than triple by 2030, from $9.2 billion to $29 billion, this body is not a luxury—it’s a necessity. However, to avoid the political paralysis that sometimes stalls existing security mechanisms, it must be structured with inclusive representation and decision-making processes that prioritize urgency over rivalry.
3. A Global Data and Compute Compact
To address digital inequality, this compact would create norms for equitable access to chips, compute power, cloud infrastructure, and cross-border data flows. Rather than mandate tech transfers that risk political pushback, it would enable voluntary coordination: regional compute hubs, shared AI accelerators, and frameworks for data sovereignty tailored to local contexts. This compact responds directly to the growing concentration of compute resources in the U.S. and China—nations dominating a projected $4.8 trillion AI market. By fostering access without coercion, it would ensure that no region is permanently locked out of the AI age.
4. A Multilateral Fund for AI and Development
Jointly governed by countries from the Global South, this fund would finance the use of AI for public goods—climate adaptation, health equity, disaster response, agriculture, and inclusive education. Inspired by multilateral development banks and modeled after proposals like the Multilateral AI Research Institute, it would prioritize ethical innovation aligned with local needs. Public-private partnerships could strengthen the fund’s reach, but governance safeguards are essential to prevent imbalances and ensure that resources reach the communities that power AI systems yet are often left behind in policy and investment flows.
This is not about idealism—it’s about institutional evolution.
Multilateralism must catch up with AI’s pace, scale, and asymmetry. These proposals build from existing diplomatic structures while addressing the new terrain AI presents. If implemented, they would not just manage risk—they would unlock the inclusive potential of this transformative technology.
AI Without Guardrails Becomes AI Brinkmanship
The future doesn’t require rules rewritten from scratch—it demands smarter ones, co-designed and shared across borders.
Without international guardrails, AI development risks devolving into a zero-sum race—a new kind of Cold War, not with missiles, but with models. And the consequences won’t be hypothetical. They’ll be systemic, cascading across every domain of public and private life:
Jobs disrupted. Inequities widened. Surveillance normalized. Democracy strained. Crises amplified by misinformation too fast to fact-check.
The promise of AI remains immense. It can accelerate medical breakthroughs, enhance disaster response, drive climate resilience, and expand access to education. But without inclusive frameworks, that promise risks becoming concentrated power—unaccountable, uneven, and unsustainable.
We’ve faced global inflection points before. Arms control. Financial regulation. Pandemic preparedness. Climate action. Each time, meaningful progress came not from unilateral dominance, but from collective architecture—imperfect, evolving, but anchored in cooperation.
AI must be next.
Because in the absence of governance, instability takes root. And in a world connected by code, instability anywhere becomes risk everywhere.
Multilateralism may be slow. But it’s still our best chance at making AI not just powerful—but just.
The time for coordinated action is not someday. It’s now.
Ali Al Mokdad