AI Policy Pulse
November Edition

Welcome to the November edition of the AI Policy Pulse newsletter.

Artificial intelligence (AI) regulation, collaboration and risk management actions are accelerating in the United States, United Kingdom and Europe. This edition of AI Policy Pulse provides an overview of recent events in Washington, D.C., London and Brussels.


Washington, D.C.

President Biden signed a long-awaited Executive Order on October 30 designed to reduce the risks that AI poses to consumers, workers, minority groups and national security. The Order is primarily directed toward U.S. federal agencies, but it also includes requirements for private companies developing and testing AI systems.

AI testing requirements are the most significant provisions in the Order

  • Requires developers of AI systems that pose risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the government. As legal justification, the Order invokes the Defense Production Act, a law that allows the President to impose mandates on domestic industry to promote national security.
  • Directs federal agencies to set standards for AI testing and address related chemical, biological, radiological, nuclear and cybersecurity risks.
  • The National Institute of Standards and Technology (NIST) will develop standards for AI “red-teaming” – or stress-testing the defenses and potential problems within systems. 
  • The Order’s definition of AI systems subject to testing excludes every model currently available to the public; it targets only larger, more capable models that may come online in the future.

Several federal agencies are tasked with specific AI actions

  • The Department of Commerce will develop guidance for content authentication and watermarking to label AI-generated items, making clear which government communications are authentic.
  • The Departments of Energy and Homeland Security will study the threat AI poses to critical infrastructure.
  • The Federal Trade Commission will develop antitrust rules to ensure competition and root out anti-competitive behavior in the marketplace.
  • The Labor Department will focus on preventing workplace discrimination and bias.
  • The Treasury Department will provide guidance on AI cybersecurity risks for financial institutions.
  • The Department of Justice and federal civil rights offices will address algorithmic discrimination to prevent civil rights violations and develop best practices for the use of AI across the criminal justice system.
  • Homeland Security will update visa processes to make it easier for highly skilled immigrants and non-immigrants with AI expertise to stay, study and work in the U.S.
  • Health & Human Services will work with other agencies to develop an AI plan in areas such as research and discovery, drug and device safety and public health.
  • The State Department will work with international partners to align and implement AI standards around the world.

What companies and contractors should watch

  • Because the White House cannot write new laws for private companies, it is directing federal agencies to develop, publish and then enforce rules that carry out the provisions in the Order. The first of these rules could be released as soon as next month, with more to follow throughout 2024.
  • The Order sends a strong signal to companies to take employee rights into account as they introduce AI into their workplace because regulatory action and compliance will be coming.
  • The U.S. Government is the largest technology purchaser in the economy. As such, the government’s purchasing requirements often drive industry standards. Federal agencies will be passing on AI provisions in the Order to their contractors, who can expect new contract terms and regulatory clauses.
  • As federal agencies issue regulations and guidance, companies will need to understand their new compliance obligations.

What happens next

  • The deadlines for implementing the Order’s various provisions range from 30 to 365 days.
  • The Office of Management and Budget (OMB) has 150 days to issue final guidance to federal agencies on implementation of the Order. OMB is currently seeking comment on its draft policy for implementing guidance. 
  • Every federal agency will designate a Chief AI Officer within 60 days. An interagency AI Council will coordinate federal action.
  • The Order does not provide any federal funds, nor does it set funding goals or explain how agencies should prioritize AI within their broader missions and current budgets.
  • The Order calls on Congress to act. It specifically asks Congress to pass a national data privacy bill. While Congress will continue its push towards both a privacy law and comprehensive AI legislation, the timing for passage of legislation remains uncertain. Congress may act to codify into law some elements of the Order.
  • The Order’s administrative actions will take time for federal agencies to develop and bring into force; they will be subject to judicial review and legal challenges and can be revoked by a future administration.


London

Prime Minister Rishi Sunak’s two-day AI Safety Summit concluded with 27 countries and the European Union signing the Bletchley Declaration on AI, which focuses on so-called “frontier AI” and aims to ensure “an internationally inclusive network of scientific research on frontier AI safety.”

The signing of a shared declaration indicating an international commitment to work collaboratively to address AI risks was cited as a diplomatic achievement for the Prime Minister, who invested his political capital to convene global leaders, tech executives, academics and civil society groups.

The Prime Minister hailed the fact that the United Kingdom and China both signed the communiqué as a “sign of good progress,” and said it vindicated his decision to invite China. China’s vice minister of science and technology said his country was willing to work with all sides on AI governance. “Countries regardless of their size and scale have equal rights to develop and use AI,” Minister Wu Zhaohui told delegates.

Other highlights

  • A smaller group of eight “like-minded governments” – the United Kingdom, United States, Canada, France, Italy, Germany, Japan and South Korea – plus the EU reached an agreement with technology companies under which they would be able to test leading companies’ AI models before release.

  • The participating countries also agreed to form an international panel in the style of the Intergovernmental Panel on Climate Change (IPCC) to monitor and address AI risks. This was met with some resistance from Canada and France, which are keen to promote existing international groupings.

  • The Prime Minister also announced a soon-to-be-outlined U.S.-U.K. Safety Partnership to “create guidelines, standards and best practices for evaluating and mitigating the full spectrum of risks,” as well as the establishment of a UK AI Safety Institute.

  • One of the key themes to emerge from the Summit is the need to focus on short-term threats rather than long-term existential challenges. Another insight is that both sides – governments and companies – want the same outcomes on AI: clear regulatory guidance, but rules that do not stifle innovation. That approach drew criticism from civil society groups about the Summit’s lack of focus; groups including the Ada Lovelace Institute argue that the absence of an agreement on enforcing laws and regulations risks undermining the impact of any discussion on safety.

  • AI will now have its own event on the global summit calendar. Delegates agreed that South Korea will co-host a virtual mini-summit on AI within the next six months and that France will host the next in-person summit a year from now, securing the legacy of the UK’s inaugural event.

While the UK Summit is a start to international cooperation, a global agreement for overseeing the technology remains a long way off. Disagreements remain over how that should happen – and who will lead such efforts.


Brussels

The AI Act is entering its last mile. EU policymakers met on October 25 for another round of political negotiations and managed to finalize a key point on the classification of high-risk AI by introducing a filtering system. Under it, AI systems are not categorized as high-risk if they perform narrow procedural tasks, detect patterns without influencing critical decisions (e.g., loan approvals or job offers), or aim only to enhance the quality of work.

What happens next

  • The European Commission still needs to determine a list of practical examples of high-risk use cases. Technical sticking points remain on foundation models, general-purpose AI and AI use in law enforcement.

  • Negotiators want to finalize a “package deal” on December 6 that would address compromises to proposed bans of high-risk AI systems, law enforcement exceptions, the fundamental rights impact assessment and sustainability provisions. 

  • A failure to reach a full agreement on these issues could push the negotiations into early 2024, increasing the risk of a delay ahead of the June 2024 election for European Parliament representatives.

  • There will likely be a two-year period before the Act’s enforcement date, by which time all companies and organizations will be expected to be compliant.

The EU also welcomed the G7 International Guiding Principles and voluntary Code of Conduct for AI developers agreed to on October 30, viewing them as complementary to the EU AI Act, and called on AI developers to sign and implement the Code of Conduct as soon as possible. The G7 principles and Code of Conduct contain a set of rules that AI developers are encouraged to follow, on a voluntary basis, to mitigate risks throughout the AI lifecycle.

Commission President Ursula von der Leyen attended the UK’s AI Safety Summit and signed the Bletchley Declaration. In a meeting with Prime Minister Sunak, von der Leyen said the European AI Office – which will be set up under the AI Act – should have a global vocation and cooperate with the AI Safety Institutes announced by the United States and United Kingdom. Looking to build on the UK Summit, Italian Prime Minister Giorgia Meloni unveiled plans to hold an international conference focused on AI and its impact on labor markets during the Italian G7 Presidency next year.

While the Commission has not officially commented on the White House’s AI Executive Order, several Members of the European Parliament welcomed it as a good step forward and noted the convergence on mitigating risks of foundation models. Some Parliamentarians are worried the EU risks falling behind in the global discourse on AI regulation by getting stuck in “regulatory overkill.”

For more information, contact our team:

James Meszaros (Washington, D.C.)

Oliver Drewes (Brussels)

Ella Fallows (London)


Subscribe to our newsletter

Let's connect! Sign up for Powell Tate Insights for monthly fresh takes on disruptions, innovations and impact in public affairs.