AI Policy Pulse

Welcome to the inaugural edition of the AI Policy Pulse.

Artificial intelligence (AI) is accelerating dramatically. So is an intense debate over whether the technology will save the world, destroy the world or both. Never has a new technology ignited such a robust, high-stakes discussion about its potential consequences for entire industries, economies and societies.

The rapid advancement of generative AI tools is capturing urgent attention from regulators and legislators around the world. Political bodies and policymakers are accelerating efforts to manage the potential risks of AI, hold developers and users accountable for their systems and unlock growth and innovation.

In each timely edition of AI Policy Pulse, we’ll provide updates on what’s happening around the globe – including in the United States, European Union and United Kingdom – as AI policymaking races to keep up with these fast-moving capabilities.

UNITED STATES

State of play:

AI regulatory discussions are taking place across both the Biden Administration and Congress, with increasing urgency for the government to align around a comprehensive strategy that protects consumers and citizens while enabling U.S. companies to innovate and adopt AI technologies.

  • President Biden has met with industry representatives, academics and experts to discuss both positive and negative consequences of AI. The White House is meeting multiple times a week to develop ways for the federal government to ensure the safe use of AI. It remains uncertain how the White House might act on its own, such as through executive orders directing federal agencies.
  • Several federal agencies – including the Department of Commerce, Federal Trade Commission, Department of Justice, U.S. Copyright Office and the Consumer Financial Protection Bureau – are considering AI regulations in areas where they have jurisdiction.
  • Among the issues at play: [1] preventing AI systems from exhibiting bias against individuals or groups, [2] promoting competition in the AI sector, [3] protecting consumers and citizens from harmful AI applications, [4] guarding against mis- and disinformation, [5] clarifying patent and copyright protections, [6] ensuring AI aligns with democratic values, [7] assessing the impact of AI on national employment, [8] improving transparency and more. The future of Section 230 – which shields tech companies from liability for user-generated content – is also a major question in AI regulation.

Key influencers:

  • In Congress, Senate Majority Leader Chuck Schumer (D-NY) is leading the effort to enact bipartisan AI legislation. Schumer has unveiled his “SAFE Innovation Framework for AI Policy.” (SAFE is an acronym that stands for “security, accountability, protecting our foundations and explainability.”) However, he has provided no details to date as to what might be included in a bill.
  • While several AI bills have already been introduced in both chambers, none have amassed widespread support. The Senate will hold a series of “AI Insight Forums” this fall to hear from industry executives, academics, civil rights activists and other stakeholders, creating a path for comprehensive bipartisan action on AI policy.
  • Senators to watch in the AI legislative process include five committee leaders: Sens. Maria Cantwell (D-WA), Gary Peters (D-MI), Amy Klobuchar (D-MN), Mark Warner (D-VA) and Richard J. Durbin (D-IL) – the chairs of the Commerce, Homeland Security, Antitrust, Intelligence and Judiciary committees and subcommittees.
  • Tech companies and their trade groups have gone on the offensive in Washington, with some arguing for regulation to prevent the technology from posing threats to society and others pushing for self-regulation and a light government regulatory burden to protect innovation.

What to watch next:

  • As the 2024 presidential and congressional campaigns heat up, political consultants, election advocates and lawmakers are increasingly calling for guardrails to rein in AI-generated advertising and marketing. Campaigns see AI as a way to cut costs by creating instant responses to fast-evolving events or attack ads; others worry the technology could spread disinformation and deepfakes to a wide audience. Democrats in both chambers of Congress have introduced a bill that would require a disclaimer on political ads featuring AI-generated images or video. Republicans have not yet signaled support or opposition.
  • AI’s real impact on politics will likely be behind the scenes: campaigns will use it to sharpen fundraising targeting, generate sophisticated real-time voter data and analysis, and craft more personalized, targeted messages for voters.

EUROPEAN UNION

State of play:

The EU Artificial Intelligence Act – the world’s first comprehensive regulation of AI technology – is on track to be adopted by the end of 2023, with the rules applying from 2025 or 2026. Any business whose AI systems are used within the EU will be subject to the law, regardless of where it is headquartered.

  • The act assigns risk levels to different AI uses, and organizations will need to comply with a range of new transparency and risk management obligations. Certain uses, such as biometric surveillance and emotion recognition, will be banned outright. The EU rules are more burdensome than policy shaping up in the United States or elsewhere to date – particularly for generative AI models like ChatGPT.
  • In May, Internal Market Commissioner Thierry Breton launched a temporary and voluntary “AI Pact” intended for all EU and non-EU AI developers. The pact is meant to act as a stop-gap measure to encourage AI companies operating in the EU to start complying with the AI Act before it is legally in effect. Commissioner Breton is currently on a world tour to seek voluntary commitments from industry leaders.
  • The use of AI in the EU will also be affected by the new AI Liability Directive (AILD) and the revised Product Liability Directive (PLD), both of which are in the process of being adopted. The AILD aims to complement the AI Act by harmonizing national procedural rules on liability and ensuring that victims of harm caused by AI enjoy the same standard of protection as victims of other types of harm. The PLD allows compensation for damage caused by defective products – including those using AI – on a strict liability basis. It applies only to material losses resulting from death, damage to health or property, or data loss, and it is limited to claims by individuals.

Key influencers:

  • The two-year effort to pass the AI Act has been led by European Commissioners Margrethe Vestager and Thierry Breton and their teams. Breton is focused on convincing businesses to comply voluntarily before the regulation comes into force in 2025, while Vestager is driving global alignment through a voluntary code of conduct on AI via the G7. French President Emmanuel Macron has also weighed in, seeking a balance between driving innovation and protecting users. Competing with France for AI leadership in Europe is Spain, which launched the first regulatory sandbox to test compliance with the AI Act.
  • In the European Parliament, the Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) Committees have led the process, making significant changes to the draft rules – notably bringing generative AI within the scope of the act.
  • National data protection authorities across the EU have launched investigations into the data AI companies use to train their models and in some cases have temporarily banned ChatGPT. The European Data Protection Board has created an AI task force to ensure coordinated action and avoid fragmentation.
  • Tech company efforts to influence AI policy in Brussels have been prominent, with high-profile visits from leading U.S. companies and a coalition of European businesses. Consumer and digital rights groups such as the European Consumer Organisation (BEUC), European Digital Rights (EDRi) and Access Now have been influential in the Parliament on issues related to surveillance and privacy.

What to watch next:

  • EU Commissioner Vestager has also announced a separate voluntary global “code of conduct for AI” as an outcome of the EU-US Trade and Technology Council. Vestager’s plan focuses on international cooperation toward an aligned AI rulebook, particularly for generative AI, and builds on the Joint Roadmap for Trustworthy AI and the G7’s Hiroshima AI process. Details of the code are expected in the coming weeks.
  • AI systems have also come under scrutiny from data protection authorities for not complying with the EU’s General Data Protection Regulation (GDPR). In April, the Italian data protection authority unblocked OpenAI’s ChatGPT after the company gave users more information about how the service protects their privacy and data protection rights. Authorities in France, Spain and the Netherlands have opened investigations.

UNITED KINGDOM

State of play:

The UK Government has sought to position itself as a middle-ground regulator in the AI space. London is home to Google DeepMind’s headquarters and OpenAI’s first international office, attracting a wealth of talent and investment that the government is keen to protect. The government has flagged its desire for the UK to be the “intellectual and geographical” home of AI.

  • In April, the government published its AI white paper, “A Pro-Innovation Approach to AI Regulation,” which proposed avoiding “heavy-handed legislation which could stifle innovation.” However, a backlash from UK and global voices concerned about potential AI dangers prompted a shift in rhetoric toward safety and protections.
  • Prime Minister Sunak has committed to hosting a global AI Safety Summit in late 2023 to establish international cooperation on managing the technology’s rapid growth. The government has also set up the Foundation Model Taskforce (FMT), with £100 million in funding and headed by tech entrepreneur Ian Hogarth, to build sovereign capabilities in foundation models and guide AI safety research and development.

Key influencers:

  • The Department for Science, Innovation and Technology (DSIT), newly formed in 2023, takes the government lead, with acting Secretary of State Chloe Smith heading the department while Michelle Donelan is on maternity leave. Within DSIT, the Office for AI leads on these issues, and DSIT recently launched the FMT to develop sovereign capabilities and safety research. Sitting alongside the FMT, the Advanced Research and Invention Agency (ARIA) – chaired by Matt Clifford, a tech founder and entrepreneur – has led AI and emerging tech work to this point.
  • On the opposition side, the Shadow Secretary of State for Digital, Culture, Media and Sport, Lucy Powell MP, will be leading the Labour Party’s work on AI and tech alongside Alex Davies-Jones MP, Shadow Digital Minister. Darren Jones MP, Chair of the Business and Trade Committee, leads the Labour Digital Committee, which has fed into Labour’s tech policy work. 
  • In civil society and industry, techUK is the trade body convening industry voices on AI, alongside influential think tanks such as the Tony Blair Institute. OpenAI’s Sam Altman and DeepMind’s Demis Hassabis both have the ear of the prime minister, whose tech adviser, Henry de Zoete, has brought them in to talk with the PM.

What to watch next:

  • While the summer recess will be a relatively quiet period, there are developments to watch. The Labour Party will release its long-awaited tech policy paper, likely in early September, offering insight into how it would regulate AI should it gain power at the next election. The Digital Markets, Competition and Consumers (DMCC) Bill and the Data Protection and Digital Information (DPDI) Bill are expected to receive royal assent by autumn, ahead of the King’s Speech, which may well introduce an AI bill following on from the white paper.

For more information, contact our team:

James Meszaros (Washington, D.C.)

Oliver Drewes (Brussels)

Ella Fallows (London)


Subscribe to our newsletter

Let's connect! Sign up for Powell Tate Insights for fresh monthly takes on disruptions, innovations and impact in public affairs.