Global Approaches to AI Regulation: A Comparative Overview
The rapid advancement of artificial intelligence (AI) is transforming society, the economy, and ethical frameworks, creating novel opportunities while raising challenges for security, privacy, and human rights. Regulation of AI is imperative to balance innovation against the safeguarding of public interests, thereby ensuring the responsible deployment of these technologies on a global scale.
International Approaches to AI Regulation
International organizations, such as the OECD and UNESCO, have formulated key recommendations that serve as the foundational framework for national approaches to AI regulation. The OECD updated its AI Principles in 2024, emphasizing values such as inclusive growth, transparency, and accountability, with the objective of providing adaptable guidance to member states. UNESCO, in its 2021 Recommendation on the Ethics of Artificial Intelligence, underscores the importance of global collaboration to prevent regulatory fragmentation, incorporating ethical considerations into educational and technological strategies. These frameworks influence policies in the EU, the USA, and the United Kingdom, fostering the harmonization of standards within the context of international law.
AI Regulation in the European Union
The European Union has adopted the AI Act as the first comprehensive legislative instrument for AI regulation, classifying systems according to risk levels, namely unacceptable risk, high risk, limited risk, and minimal risk.
Systems posing unacceptable risks, such as social scoring or manipulative practices, are prohibited effective from February 2025, whereas high-risk systems (in fields such as medicine, law enforcement, or education) are subject to stringent requirements concerning transparency, safety, and conformity assessment.
Key stakeholders, including providers, developers, and users, bear clearly defined obligations, with an emphasis on the protection of fundamental human rights in accordance with the EU Charter of Fundamental Rights.
In July 2025, the European Commission issued guidelines on General-Purpose AI models (GPAI), which entered into force in August 2025, enhancing transparency and accountability. This relatively stringent legislative approach positions the EU as a global leader in ethical AI regulation, albeit sparking debates regarding its potential to impede innovation.
AI Regulation in the United States
In the United States, AI regulation is predicated on America’s AI Action Plan, issued by the White House in July 2025 under the Trump administration, which prioritizes market freedoms, the elimination of excessive regulations, and the export of technologies.
This plan replaced the prior Executive Order (EO) 14110 of President Biden’s administration from October 2023, which focused on safety, ethics, privacy, and human rights protection, but was rescinded by President Trump in January 2025 due to its perceived role as a barrier to innovation.
The plan encompasses over 90 federal actions across three pillars: accelerating innovation, building AI infrastructure, and asserting global leadership, with an emphasis on collaboration with the private sector and the reduction of barriers to business.
Currently, America’s AI Action Plan constitutes a strategic document that has not been fully implemented, necessitating ongoing monitoring of its execution and potential legislative amendments.
In contrast to the EU, the USA eschews a single comprehensive statute, favoring recommendations and executive orders, such as EO 14277 and 14278 from April 2025.
This approach facilitates rapid AI deployment but is critiqued for potential deficiencies in ethical oversight and human rights protection.
AI Regulation in the United Kingdom
The United Kingdom adheres to a flexible, sector-specific approach to AI regulation, eschewing a single comprehensive statute. This approach is grounded in the white paper "A pro-innovation approach to AI regulation", published by the government on 29 March 2023, which establishes five key principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
Building on these principles, the government introduced the AI Opportunities Action Plan on 13 January 2025, a strategic roadmap for leveraging artificial intelligence as a driver of economic growth while balancing innovation, safety, and public welfare. The plan is structured across three sections: laying the foundations (infrastructure, data, talent), applying AI to public missions, and developing sovereign capabilities.
Furthermore, legislative developments are anticipated, including the proposed Artificial Intelligence (Regulation) Bill, reintroduced in Parliament in 2025, which would establish an AI Authority to oversee and regulate AI risks while concurrently promoting innovation in the United Kingdom.
The plan emphasizes a pro-innovation stance without stringent prohibitions, in contrast to the EU, ensuring adaptability to rapid technological advancements in AI.
Comparison of Key Aspects of Regulation
The approaches to AI regulation in the EU, USA, and United Kingdom exhibit significant divergences in balancing risks and innovation.
The EU employs rigorous legislation with explicit risk classification and stringent requirements, affording a high degree of human rights protection but potentially hindering expeditious technological development.
The USA, conversely, prioritizes market freedoms and minimal governmental intervention, stimulating innovation and global leadership, but risking inadequate ethical controls.
The United Kingdom occupies an intermediate position with a flexible sector-specific approach that integrates principles into existing regulatory frameworks, balancing innovation and safety, albeit requiring refinement to address systemic risks.
Japan adopts a “light-touch” approach to AI regulation, emphasizing the promotion of innovation and utilization of existing laws, as exemplified by the AI Promotion Act 2025, which facilitates rapid technological development with minimal constraints.
China implements a stringent centralized approach and regulates AI through the Interim Measures for the Management of Generative AI Services effective 15 August 2023, focusing on content and data control to ensure national security, but potentially curtailing freedom of innovation through the removal of “non-compliant” products.
Overall, the international recommendations of the OECD and UNESCO serve as a bridge for harmonization, yet national divergences underscore the need for global coordination.
Conclusion
The impact of regulatory divergences across jurisdictions is already evident in the territorial distribution of AI startups and leading platforms, which predominantly select jurisdictions with more flexible rules.
According to the Stanford AI Index Report 2025, the USA led in newly funded AI startups in 2024, followed by the United Kingdom and China. Similarly, the five largest AI platforms by market capitalization in 2025 are predominantly located in the USA.
AI regulation in the international market remains fragmented. This state is attributable to challenges such as geopolitical disparities, rapid technological evolution, and conflicts of interest between rights protection, economic considerations, and security. Achieving full harmonization of global regulatory standards for AI will require time for international negotiations and adaptation to emerging technological challenges.