Welcome to the September edition of the AI Policy Pulse newsletter.
AI innovation and growing public debate over both the opportunities and dangers of this technology are driving regulatory action worldwide, with regulators racing to write the definitive rules for AI. The drive to regulate AI tools reflects the complexity of the task and the importance of striking the right balance between fostering innovation and protecting individual rights. Without global standards, a patchwork of national laws could create compliance burdens for multinationals. Here’s a snapshot of what’s in play now around the world.
To date, the U.S. approach has been slow and incremental, seeking to simultaneously preserve civil rights and consumer protections for Americans, ensure U.S. global leadership in AI innovation, and mobilize international collaboration where possible to uphold democratic values and common standards.
The UN is forming a high-level AI advisory body to review AI governance arrangements and advance recommendations, and the UN Security Council has discussed military and non-military applications of AI and their implications for global peace and security. UNESCO recently issued guidelines for governments on regulating generative AI and establishing policy frameworks for its ethical use in education and research, including setting an age limit of 13 for the use of AI tools in classrooms and advocating for teacher training.
Canada’s proposed Artificial Intelligence and Data Act (AIDA) is intended to protect Canadians from high-risk systems, ensure the development of responsible AI, and position Canadian firms to compete in global AI development. The AIDA would ensure that high-impact AI systems meet existing safety and human rights expectations, prohibit reckless and malicious uses of AI, and empower the Minister of Innovation, Science and Industry to enforce the act. Canada has also issued a Directive on Automated Decision-Making, which imposes several requirements on the federal government’s use of automated decision-making systems.
Brazil is considering a comprehensive AI bill that emphasizes human rights and creates a civil liability regime for AI developers. The bill would prohibit certain “excessive risk” systems, establish a regulatory body to enforce the law, create civil liability for AI providers, require reporting of significant security incidents, and guarantee various individual rights, such as explanation of AI-based decisions, nondiscrimination, rectification of identified biases, and due process mechanisms.
In March, the UK government set out its proposed “pro-innovation approach to AI regulation” in a white paper, outlining five principles to frame regulatory activity and guide the future development and use of AI models and tools. However, there are now calls for the UK Parliament to consider legislation to regulate AI or risk falling behind the European Union and United States in setting standards. The UK government will host an AI summit in November in London to discuss international coordination on regulation to mitigate risk.
In June, the European Parliament voted to approve the AI Act. The Act still needs to be approved by the European Council, a process that could be concluded by the end of this year. Brussels wants its AI Act to form the basis of measures adopted by other leading democracies.
France’s National Assembly has approved the use of AI video surveillance during the 2024 Paris Olympics, despite warnings from civil rights groups.
Germany plans to double public funding for AI research to nearly €1 billion over the next two years, including 150 new AI research labs, expanded data centers and accessible public datasets. Germany wants to ensure that AI research transitions into practical business applications for its automotive, chemical and industrial base to compete against the U.S. and China.
Israel is working on AI regulations designed to achieve a balance between innovation and adherence to human rights and civic safeguards. Israel has published a 115-page draft AI policy and is seeking public feedback ahead of a final decision. Priorities are to create a uniform risk management tool, establish a governmental knowledge and coordination center, and maintain involvement in international regulation and standardization.
The UAE’s Council for AI and Blockchain will oversee policies to promote an AI-friendly ecosystem, advance AI research, and accelerate collaboration between public and private sectors. The UAE is seeking to be a hub for AI research and a regional host for start-up firms, with a focus on collaboration, innovation and education in priority sectors such as energy and transportation. The UAE established an AI & Digital Economy ministry in 2017.
The Indian government’s stance on AI governance has recently shifted from non-intervention to actively advocating a regulatory framework, with Prime Minister Modi calling for a worldwide framework to guarantee the ethical use of AI. The government is advocating a robust, citizen-centric and inclusive “AI for all” environment, weighing the ethical, legal and societal issues AI raises, and considering whether to establish an AI regulatory authority. At the recent G20 Summit in India, world leaders discussed both the economic potential of AI and the importance of safeguarding human rights, with some leaders advocating global oversight of the technology.
China issued a set of temporary measures effective from August 15 to manage the generative AI industry. The regulations impose obligations on service providers, technical supporters and users, as well as other entities such as online platforms. They ultimately aim to address risks related to AI-generated content and protect national and social security. The regulations also apply to foreign companies.
Japan expects to introduce AI regulations by the end of 2023 with the goal of balancing civil protections with promoting economic growth and solidifying the country’s position as a leader in advanced semiconductors. Japan, as the G7 chair in 2023, is leading an effort by the G7 member states to align around a common, risk-based AI regulatory framework before the end of the year.
South Korea has numerous AI and technology policy initiatives under its National Strategy for AI and is developing a comprehensive AI Act intended to ensure access to AI technology for all developers. South Korea is also setting new standards on copyright for AI-generated content.
Singapore has no comprehensive AI regulation, but it has developed voluntary governance frameworks and initiatives for ethical AI deployment and data management. Singapore is seeking to become a global hub for AI to generate economic growth and improve lives, and has created the world’s first AI testing toolkit, AI Verify, which allows users to conduct technical tests on their AI models. A key tenet of Singapore’s AI policy is that citizens understand AI technologies and that its workforce attains the skills needed to participate in a global AI economy.