AI Policy Pulse
January Edition

Welcome to the first 2024 edition of the AI Policy Pulse, with an overview of recent events from Brussels, Washington and Davos.


In December 2023, the EU reached a political agreement on the AI Act. But since the text became public, the regulation has faced serious criticism, casting doubt over how quickly it will be adopted. Among Member States, Germany, France and Italy have all refused to rule out rejecting the text. President Macron of France has threatened to block its adoption, seeking softer rules for foundation models to protect France’s national AI champions.

Despite the opposition, the Council has pushed ahead with technical talks, and a final version of the text is now with the Member States for approval. The European Parliament elections are scheduled for June 6-9; the prospect of a more polarized Parliament in the next term creates urgency to finalize the AI Act quickly. This means a comprehensive reopening of negotiations is unlikely.

Highlights of the law 

  • A tiered approach differentiates between foundation models with systemic risks (such as powerful LLMs like GPT-4) and general-purpose AI systems that are built on top of foundation models. The latter include AI applications able to generate new content in audio, code, images, text and video.
  • Those who deploy general-purpose AI models will face transparency requirements such as technical documentation on their modelling and training, compliance with EU copyright law and dissemination of detailed summaries about training content.
  • Foundation models with systemic risks (also called high-impact foundation models) are placed in the higher tier and face a more rigorous framework to address those risks. Models trained using more than 10^25 floating-point operations of compute fall within this classification.
  • High-risk AI systems – those considered to pose a significant risk to health, safety, fundamental rights, the environment, democracy or the rule of law – face stringent obligations. Systems deployed in sensitive areas such as education, employment, critical infrastructure, the insurance and banking sectors, public services, elections, law enforcement, border control and the administration of justice would receive the "high-risk" classification.
  • Law enforcement’s use of biometric identification systems would be restricted. Exceptions exist for targeted searches related to serious crimes and would be subject to judicial authorization. Real-time use would be limited to specific situations, like preventing terrorism or locating individuals involved in specific crimes. Last-minute changes are being sought in the European Parliament over concerns with potential loopholes in the use of facial recognition by law enforcement.
  • The legislation prohibits certain applications of AI deemed threatening to citizens’ rights and democracy. This includes bans on social scoring, untargeted scraping of facial images for recognition databases, emotion recognition in workplaces and educational institutions and AI systems manipulating human behavior.

What happens next 

  • Talks are ongoing to find a consensus in the European Council, with Member States set to vote on the text on February 2. It will then need to be approved by the Parliament, first in committees on February 13, then by the full chamber – by April at the latest.
  • Once formally adopted, the law will be published in the EU’s Official Journal and enter into force 20 days later.
  • AI developers and deployers will then have six months to ensure compliance with rules on prohibited AI use cases. After 12 months, obligations on foundation models, including transparency reports and risk assessments, will become binding.
  • Non-compliance carries fines ranging from €7.5 million, or 1.5% of turnover, to €35 million, or 7% of global turnover, depending on the infringement and company size.
  • With a regulation nearly in place, the EU's focus will now turn to innovation measures. Speaking at Davos in January 2024, Commission President von der Leyen pushed back against claims that the EU is overly focused on regulation, announcing a plan to open European supercomputer capacity to "ethical and responsible" AI start-ups and provide them with high-quality datasets. The EU's strategy will also look to strengthen Europe's AI talent pool.
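The fine caps above scale with company size. As a minimal illustrative sketch – assuming, as in comparable EU penalty drafting, that the applicable cap for a given tier is the higher of the fixed amount and the percentage of global annual turnover (the newsletter's "or" leaves this open) – the arithmetic looks like this; the tier names are our own labels, not terms from the Act:

```python
# Illustrative sketch of the AI Act's fine caps as described above.
# Assumption (not stated in the text): the applicable cap is the HIGHER of
# the fixed amount and the percentage of global annual turnover.
FINE_TIERS = {
    # tier label (ours)     -> (fixed cap in EUR, share of global turnover)
    "most_serious":            (35_000_000, 0.07),
    "least_serious":           (7_500_000, 0.015),
}

def fine_cap(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a tier and a company's turnover."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * global_turnover_eur)

# A company with EUR 1bn global turnover: 7% (EUR 70m) exceeds the EUR 35m floor.
print(fine_cap("most_serious", 1_000_000_000))   # 70000000.0
# A EUR 100m-turnover company: 1.5% (EUR 1.5m) is below the EUR 7.5m floor.
print(fine_cap("least_serious", 100_000_000))    # 7500000
```

For large companies the percentage-based cap dominates, which is why headline figures are usually quoted as "up to 7% of global turnover."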

The EU has set a clear ambition not only to regulate its own AI market but also to set a global standard.

  • With first mover advantage, the EU will urge other jurisdictions to use the AI Act as a template for their own legislation. This would eventually give European companies a competitive advantage. However, many observers (including in the EU) argue the EU’s approach will instead overburden a nascent domestic industry.


The Biden Administration will focus on implementing its AI Executive Order, with different federal agencies publishing AI-related regulations.

  • Several agencies, including the Departments of Commerce and Labor, the National Science Foundation, the Federal Trade Commission, the Office of Management & Budget and the General Services Administration, are focused on meeting early deadlines in the Order to build up the federal AI workforce, fund government AI projects and set AI rules across the federal government. Other actions relate to AI safety and security; promoting innovation and competition; and advancing equality and civil rights protections.
  • The Order takes a more limited approach to regulation of private industry. However, the Department of Commerce will establish detailed reporting requirements and companies developing certain types of AI models will have to adhere to those requirements.

Congress will take up AI regulation with bills already introduced and more to come.

  • At the end of 2023, Senate Majority Leader Chuck Schumer (D-NY) and fellow members of the “Gang of Four” – Senators Todd Young (R-IN), Martin Heinrich (D-NM) and Mike Rounds (R-SD) – concluded their AI Insight Forums with industry and societal leaders. Majority Leader Schumer says senators are now beginning to work on AI legislation. He has asked Senate committee chairs to introduce legislation on topics within their jurisdiction and begin the process of shaping bills through their respective committees.
  • The House of Representatives is playing an increasingly prominent role in the AI regulatory process. House committees have held numerous hearings, led by House Committee on Energy & Commerce Chair Cathy McMorris Rodgers (R-WA). Chair Rodgers is focused on the intersection of data privacy and AI and is working to update the bicameral, bipartisan consumer data privacy bill from the 117th Congress. The Committee on Financial Services has formed an AI working group to examine how AI may affect the financial services and housing sectors.

States are not waiting on Washington and are considering AI laws of their own.

  • Several state legislatures are considering AI rules and regulations as they begin their 2024 legislative sessions, including California, New York, New Jersey, Pennsylvania, Florida and Colorado.
  • For businesses, the importance of large states – such as California, Florida and New York – acting on AI cannot be overstated. The emergence of multiple, differing bills creates a scenario in which organizations would be required to comply with each of them.
  • Other states are likely to adopt provisions from first-moving bills that become law, creating a patchwork of regulatory mandates and compliance burdens across the country.

AI patent issues take center stage.

  • The administration, Congress and the courts are all struggling with the question of what AI-related ideas, content and applications can be patented, with the issue becoming more urgent as AI advances across many industries.
  • The U.S. Patent & Trademark Office is currently soliciting public comments on AI and inventorship. Congress is also considering legislation on AI patents but will be weighed down by the heightened political environment of an election year. So, it may be left to the courts to set precedents as they hear and rule on the AI patent and copyright cases already being adjudicated.


AI was one of the most discussed topics at this year’s World Economic Forum meeting. Some highlights:

  • Heads of state, tech leaders and CEOs were aligned in their concerns, warning that AI might supercharge misinformation, displace jobs and deepen the economic gap between wealthy and poor nations. There were also discussions focused on the potential of AI to accelerate scientific discovery, leading to innovations in healthcare, education, climate solutions, business productivity and advanced manufacturing.
  • An International Monetary Fund (IMF) report said the technology is likely to worsen inequality and elevate social tensions. The IMF said up to 60% of jobs in advanced economies are at risk due to AI – an alarming statistic that sparked discussions in Davos about the future of work and the urgency of skills training and adaptation, particularly among young workers.
  • There were consistent calls, from both the public and private sectors, for regulation and an equitable distribution of AI’s benefits. Some tech leaders called for a global approach to regulation, but one with a “light touch” to enable innovation and development. AI companies positioned themselves as responsible partners, making the case they have learned lessons from the missteps of social media companies and are working to prevent political extremism, racism and hate speech.
  • Leaders in India, Latin America and other parts of the Global South see the technology as the key to unlocking economic prosperity. Rwanda said it would host an Africa AI Summit this year to harness its economic potential for the continent. Chinese Premier Li Qiang said that AI innovations should not be used to contain other countries – which was viewed as a reference to U.S. efforts to restrict tech exports to his country.

The AI discussions at Davos highlighted its immense potential to reshape economies and societies while also drawing attention to the crucial need for balanced governance, ethical considerations, and skills development.

For more information, contact our team:

James Meszaros (Washington, D.C.)

Oliver Drewes (Brussels)

Ella Fallows (London)


Subscribe to our newsletter

Let's connect! Sign up for Powell Tate Insights for monthly fresh takes on disruptions, innovations and impact in public affairs.