AI Regulations around the World: What to Watch Now

By Jim Meszaros

Countries and institutions are exploring AI regulation with a focus on addressing safety and privacy issues while encouraging innovation and investments to develop and commercialize AI applications. AI is no longer just an emerging technology evoking curiosity and experimentation, but a critical policy consideration for governments, businesses and society.

Here is a summary of major AI-related initiatives and actions in play in key markets around the world:


United States
Efforts to regulate AI are seeking to balance a need to ensure the safety and security of citizens, address national security concerns and promote innovation, competition and U.S. leadership. Recent developments:

  • For most of the past year, Congress has been debating AI regulation. Numerous hearings have been held and bills introduced, but to date none have secured enough support for bipartisan passage in either chamber. Three Senate committees are expected to consider bills this year, but it is uncertain if any can be passed with a congressional calendar abridged by the November elections. However, the urgency of the need to regulate AI could accelerate passage of a national data privacy law first — efforts to enact such a law have been stalled in Congress for two years.
  • The Biden Administration is implementing a broad executive order that will govern how federal agencies use and manage AI, with agencies set to meet a series of deadlines throughout 2024. Industry leaders are adopting voluntary AI guardrails and standards to build public trust. Several states are considering their own AI regulations. The U.S. Government will likely increase spending on AI research and procurement, especially in defense and intelligence areas, using its purchasing power to shape the market.
  • Key issues to address:
  1. The use of AI-generated misinformation and deepfakes in political campaigns and public debates.
  2. Protection of copyrights and online intellectual property.
  3. Preparing the U.S. workforce for AI’s impact across the economy.
  4. Ensuring fairness in areas such as financial services, healthcare, housing, education and employment.
  5. Promoting trust, transparency and competition across the AI ecosystem.

The most likely outcome for the U.S. in the near term is a bottom-up patchwork of executive branch, congressional and state actions that reinforce current laws governing targeted aspects of AI, rather than a comprehensive national law.


European Union
The EU’s AI Act was passed by the European Parliament by a margin of 523–46 on March 13, 2024, creating the world’s first comprehensive framework for AI regulation. The law is expected to enter into force in May or June after approval by the Council of the European Union. The EU will then roll out the new regulation in a phased approach through 2027, beginning in October 2024, with its major provisions enforceable starting in May 2026.

  • The EU AI Act is complex. It defines artificial intelligence, proposes consumer protections for users of AI applications, creates a tiered-risk framework, sets requirements and prohibitions for high-risk systems and establishes transparency rules for AI systems.
  • The bloc took a risk-based approach, designating four risk levels: it strictly prohibits AI practices that pose an unacceptable risk, imposes stringent requirements on high-risk systems and encourages responsible innovation in lower-risk areas. The prohibited practices include using subliminal techniques or exploiting vulnerabilities to materially distort human behavior in ways that may cause physical or psychological harm, particularly to vulnerable groups such as children or the elderly. The Act also prohibits social scoring systems, which rate individuals or groups based on social behavior and interactions.
  • Authorities that will supervise and enforce the regulations are being established at both the EU level and in member states. Implementation of the new law will be a challenge. Some EU businesses say it will jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the societal risks and challenges.
  • The AI Act’s extraterritorial reach means that U.S. and other foreign companies will be impacted if their AI systems are used by EU customers.


United Kingdom
The UK has been reluctant to push for legal interventions in the development and rollout of AI models for fear that regulation might slow UK-led innovation and industry growth. It has instead relied on voluntary agreements with governments and companies. However, the UK’s Department for Science, Innovation & Technology has started drafting legislation to regulate AI models. It is not clear how any regulation will intersect with the UK’s AI Safety Institute, which was created after the UK hosted the first global AI Safety Summit in November 2023 and already conducts safety tests of the most powerful AI models. There is not yet a timetable for legislation to be introduced and debated. A national election expected in the fall of 2024 may result in a Labour-led government, which may take a different view on regulation. The UK has agreed to conduct joint safety testing of models with the U.S. However, the UK does not officially have a policy preventing companies from releasing AI models that have not been evaluated for safety. Nor does it have the power to remove an existing model from the market if it violates safety standards, or to fine a company for those violations.


Canada
There is no legal or regulatory framework in Canada specific to AI. While some regulations in specific areas, such as health and finance, apply to certain uses of AI, there is no overall approach to ensure AI systems address systemic risks during their design and deployment. In June 2022, the Government tabled the Artificial Intelligence & Data Act (AIDA), a framework for a new regulatory system designed to guide AI innovation and encourage responsible AI adoption by Canadians and Canadian businesses. The bill remains at the committee stage.


India
India has begun a six-week national election, with the expected outcome a third term for Prime Minister Modi and a BJP-led majority in Parliament. The use of AI in health care, education and law enforcement is already underway, with India seeking to use its AI expertise to drive economic growth and inward investment. India is also seeking to align its AI governance with global trends, particularly inspired by the European Union’s AI Act. India took its initial, cautious steps toward regulating AI in March 2024 through two advisories issued by the Ministry of Electronics & Information Technology — the government body responsible for internet-related policymaking. The Ministry advised that untested or unreliable AI models, large language models or generative AI systems would require explicit permission before being deployed to users on the “Indian Internet.” Two weeks later, a second advisory emphasized that platforms and intermediaries should ensure that the use of AI models, large language models or generative AI software or algorithms by end users does not facilitate any unlawful content stipulated under Rule 3(1)(b) of India’s IT Rules, in addition to any other laws — thus applying to almost all stakeholders in India’s AI ecosystem.


China
China has the most comprehensive AI regulatory regime to date, developed through three main regulatory policies that target algorithmic systems, deep synthesis information services and Generative AI. These regulations openly express the government’s intent to uphold Chinese values and prioritize state stability. However, as China considers a more comprehensive AI law, there is a shift underway in how policymakers and tech leaders talk about AI regulation. China’s economy has slowed, and the country fears it is falling behind the U.S. in AI innovation and Europe in driving global standards. This is creating second-guessing about whether regulations are hurting China’s competitive edge — which in turn could deepen the economic downturn and undermine its goal of being seen as a global tech leader. China’s Ministry of Science & Technology is developing a broader AI law, with participation of the Cyberspace Administration of China as the main responsible department for interim regulations on Generative AI. Given the potential societal impact of AI on China’s model of centralized governance, as well as China’s desire to shape the global AI economy, the outlook is for more domestic legislation.


Japan
Japan introduced AI to the G7 agenda when it served as chair and host during 2023. However, the country currently has no regulations that constrain the use of AI. Japan’s Ministry of Economy, Trade & Industry (METI) said in July 2021 that “legally-binding horizontal requirements for AI systems are deemed unnecessary at the moment” because such requirements would struggle to keep pace with the speed and complexity of AI innovation. METI said a prescriptive, static and detailed regulation could stifle innovation, and recommended the government respect voluntary efforts by companies on AI governance while providing non-binding guidance and support. However, Japan’s ruling Liberal Democratic Party is expected to propose that the government introduce a new law regulating Generative AI technologies before the end of 2024.


Australia
In February, the Australian government named a panel of legal and scientific experts to advise on potential guardrails for the research, development and use of AI — its latest step toward mandatory regulation. The government will consider new laws to regulate the use of AI in “high-risk” areas such as law enforcement, employment and self-driving vehicles.



G7 Hiroshima Process
In October 2023, the G7 countries reached an agreement to establish a set of International Guiding Principles on AI and a voluntary Code of Conduct for AI developers, pursuant to the Hiroshima AI Process. The principles are intended to assist organizations developing, deploying and using advanced AI systems in promoting the safety and trustworthiness of the technology. The Code of Conduct provides detailed, voluntary guidance for organizations developing AI, including complex generative AI systems, with measures covering data quality, bias control and technical safeguards. The code will be reviewed periodically through stakeholder consultations to ensure the measures remain relevant and future-proof.

Bletchley Declaration

The Bletchley Declaration on AI Safety was agreed to in November 2023 at an international summit hosted by the UK Government. The declaration marks a significant commitment to ensuring the ethical and responsible development and deployment of AI technologies. It includes voluntary provisions on ethical uses; collaboration across industry, academia, researchers, NGOs and policymakers; a commitment to diversity and inclusion; and public engagement on social and safety issues. France will host the next global AI Summit in November 2024. France plans to use the summit to propose creating (and headquartering) a World AI Organization to define AI standards and audit processes.

United Nations

In March 2024, the UN General Assembly adopted by consensus a global resolution on artificial intelligence, securing the endorsement of all 193 member states. The resolution encourages all nations to prioritize human rights, personal data protection and the assessment of AI-related risks. It was initiated by the U.S. and backed by China and more than 120 other countries. Although the resolution is non-binding and lacks an enforcement mechanism for non-compliance, it represents a move toward establishing a global framework for AI governance.



Washington DC | International consultant to governments, multinational corporations and foundations on global economic, trade, development and climate issues.
