AI Policy Pulse
September Edition

Welcome to the September edition of the AI Policy Pulse newsletter.

AI innovation, along with growing public debate over the technology’s opportunities and dangers, is driving regulatory action worldwide. Regulators are racing to write the definitive rules for AI. The drive to regulate AI tools reflects the complexity of the task and the importance of striking the right balance between encouraging innovation and protecting individual rights. Without global standards, a patchwork of national laws could create compliance burdens for multinationals. Here’s a snapshot of what’s in play around the world.


United States

To date, the U.S. approach has been slow and incremental, seeking simultaneously to preserve civil rights and consumer protections for Americans, ensure U.S. global leadership in AI innovation, and mobilize international collaboration where possible to uphold democratic values and common standards.

  • Washington has so far let the industry self-regulate. But the Biden administration and Congress are both engaged in a broad review of AI to determine which elements need new regulation and which can be covered by existing laws. Numerous proposals are in play to protect data privacy, require disclosure of generative AI in political ads, clarify Section 230 immunity for AI content, watermark AI-generated content, prohibit the use of AI to create child sex-abuse images, ensure copyrights are honored, protect workers from employment displacement, require self-testing and certification, and much more. There is also debate over whether the government should concentrate oversight and enforcement in a “super regulator” or distribute authority across several agencies. Congress is trying to align around a bipartisan AI legislative framework by the end of 2023. National security will also be a major legislative concern, with calls to expand bans on exporting high-end semiconductors – the ones most capable of enabling AI.
  • The White House is expected to publish an executive order governing how federal agencies can use AI. Several agencies are inviting public comments toward regulations in areas where they have jurisdiction. The Department of Homeland Security has announced new policies governing its use of AI, including the testing, use and oversight of facial recognition technologies to prevent the agency from using AI for mass surveillance.


United Nations

A high-level AI advisory body is being formed to review AI governance arrangements and advance recommendations. The UN Security Council has discussed military and non-military applications of AI and their implications for global peace and security. UNESCO recently issued guidelines for governments on regulating generative AI and establishing policy frameworks for its ethical use in education and research, including setting a minimum age of 13 for the use of AI tools in classrooms and advocating for teacher training.


Canada

Canada’s proposed AI and Data Act (AIDA) is intended to protect Canadians from high-risk systems, ensure the development of responsible AI, and position Canadian firms for adoption in global AI development. The AIDA would ensure that high-impact AI systems meet existing safety and human rights expectations, prohibit reckless and malicious uses of AI, and empower the Minister of Innovation, Science & Industry to enforce the act. Canada has also issued a Directive on Automated Decision-Making, which imposes several requirements on the federal government’s use of automated decision-making systems.


Brazil

Brazil is considering a comprehensive AI bill that emphasizes human rights and creates a civil liability regime for AI developers. The bill would prohibit certain “excessive risk” systems, establish a regulatory body to enforce the law, create civil liability for AI providers, require reporting of significant security incidents, and guarantee various individual rights, such as explanation of AI-based decisions, nondiscrimination, rectification of identified biases and due process mechanisms.


United Kingdom

In March, the UK government set out its proposed “pro-innovation approach to AI regulation” in a white paper outlining five principles to frame regulatory activity and guide the future development and use of AI models and tools. However, there are now calls for the UK Parliament to legislate on AI or risk falling behind the European Union and United States in setting standards. The UK government will host an AI summit in London in November to discuss international coordination on regulation to mitigate risk.


European Union

In June, the European Parliament voted to approve the AI Act. The Act still needs to be approved by the European Council, a process that could be concluded by the end of this year. Brussels wants its AI Act to form the basis of measures adopted by other leading democracies.

  • The Act categorizes AI systems into three risk tiers: unacceptable, high and limited. Systems deemed an unacceptable risk are those considered a “threat” to society, and these would be banned. High-risk AI would need approval from European officials before going to market and throughout the product’s life cycle; this category includes AI products related to law enforcement, border management and employment screening, among others. Systems deemed limited risk must be labeled so that users can make informed decisions about their interactions with the AI; otherwise, these products will mostly avoid regulatory scrutiny.
  • Once enacted, there is likely to be a two-year period before the AI Act comes into force and all organizations need to be in compliance. The Act provides for noncompliance fines of up to the greater of €30 million or 6% of total worldwide annual turnover.
  • Some EU member states have national AI strategies, many of which emphasize research, training and labor preparedness, as well as multistakeholder and international collaboration.


France

France’s National Assembly has approved the use of AI-powered video surveillance during the 2024 Paris Olympics despite warnings from civil rights groups.


Germany

Germany plans to double public funding for AI research to nearly €1 billion over the next two years, including 150 new AI research labs, expanded data centers and accessible public datasets. Germany wants to ensure that AI research transitions into practical business applications for its automotive, chemical and industrial base to compete against the U.S. and China.


Israel

Israel is working on AI regulations designed to achieve a balance between innovation and adherence to human rights and civic safeguards. Israel has published a 115-page draft AI policy and is seeking public feedback ahead of a final decision. Priorities are to create a uniform risk management tool, establish a governmental knowledge and coordination center, and maintain involvement in international regulation and standardization.


United Arab Emirates

The UAE’s Council for AI and Blockchain will oversee policies to promote an AI-friendly ecosystem, advance AI research, and accelerate collaboration between public and private sectors. The UAE is seeking to be a hub for AI research and a regional host for start-up firms, with a focus on collaboration, innovation and education in priority sectors such as energy and transportation. The UAE established an AI & Digital Economy ministry in 2017.


India

The Indian government’s stance on AI governance has recently shifted from non-intervention to actively advocating a regulatory framework. Prime Minister Modi has called for a global framework to guarantee the ethical use of AI. The government is promoting a robust, citizen-centric and inclusive “AI for all” environment, weighing the ethical, legal and societal issues AI raises, and considering whether to establish an AI regulatory authority. At the recent G20 Summit in India, world leaders discussed both the economic potential of AI and the importance of safeguarding human rights, with some advocating global oversight of the technology.


China

China issued a set of temporary measures, effective August 15, to manage the generative AI industry. The regulations impose obligations on service providers, technical supporters and users, as well as other entities such as online platforms. They ultimately aim to address risks related to AI-generated content and protect national and social security. The regulations also apply to foreign companies.

  • All AI providers must submit security assessments and receive clearance before releasing mass-market AI products. AI companies must first obtain a license to operate in China. Once a license is secured, providers must perform periodic security assessments of their platforms, register algorithms that can influence public opinion and confirm that user information is secure. They are also mandated to take a strict stance on “illegal” content by halting its generation, improving the algorithm and reporting any illegality to state agencies. AI providers must protect China’s national security by abiding by the country’s “core values of socialism.”
  • A handful of Chinese tech companies, including Baidu and ByteDance, have received approval and launched generative AI products to the public. The broad scope, coverage and expansive compliance obligations of the regulations will require entities engaged in AI-related business, especially service providers and technical supporters, to perform self-evaluations to assess compliance.


Japan

Japan expects to introduce AI regulations by the end of 2023 with the goal of balancing civil protections with promoting economic growth and solidifying the country’s position as a leader in advanced semiconductors. Japan, as the G7 chair in 2023, is leading an effort by the G7 member states to align around a common, risk-based AI regulatory framework before the end of the year.


South Korea

South Korea has numerous policy initiatives regarding AI and technology under its National Strategy for AI and is developing a comprehensive AI Act to ensure accessibility to AI technology for all developers. South Korea is also setting new standards on copyrights of AI-generated content.


Singapore

Singapore has no comprehensive AI regulation, but it has developed voluntary governance frameworks and initiatives for ethical AI deployment and data management. Singapore is seeking to be a global AI hub to generate economic growth and improve lives. It has created the world’s first AI testing toolkit, AI Verify, which allows users to conduct technical tests on their AI models. A key tenet of Singapore’s AI policy is ensuring that citizens understand AI technologies and that the workforce attains the skills necessary to participate in a global AI economy.

For more information, contact our team:

James Meszaros (Washington, D.C.)

Oliver Drewes (Brussels)

Ella Fallows (London)


Subscribe to our newsletter

Let's connect! Sign up for Powell Tate Insights for monthly fresh takes on disruptions, innovations and impact in public affairs.