Artificial Intelligence: What’s in Play

Artificial intelligence (AI) technology is accelerating exponentially and dramatically changing how companies operate on a global scale. AI innovation and optimization are driving business processes and operations — and companies are dedicating more resources to developing, procuring or acquiring AI tools and strategies.

By Jim Meszaros

The global AI market was valued at $69 billion in 2021 and is forecast to grow to more than $422 billion by 2028, according to Zion Market Research. The use of AI in end-user sectors including healthcare, manufacturing and automotive in the United States, China, Japan, South Korea, and Australia is fueling this expansion.

Industry-wide AI-driven applications are being introduced in key business sectors such as automotive, insurance and financial services, logistics, healthcare, retail and ecommerce, advanced manufacturing, and media and entertainment.

Here are several examples of how AI technologies are emerging in both the public and private sectors:

  • AI is used throughout the criminal justice system to identify people, manage data and predict crime. There is debate over law enforcement's use of AI for facial recognition, crowd monitoring and crime prevention, with some experts warning that AI can discriminate against marginalized groups.
  • There are significant AI-driven military and intelligence applications, with the U.S. Department of Defense and intelligence agencies investing billions of dollars and making organizational changes to integrate AI into warfighting and intelligence operations, including in surveillance and reconnaissance, logistics, cyber operations and command and control.
  • AI-driven biometric technologies are currently legal for use by U.S. Government agencies such as the Department of Homeland Security (DHS) for certain antiterrorism activities, immigration control and airline passenger screening.
  • AI-driven advances in biotechnology and biomedical research hold the promise of longer and healthier lives, while raising ethical and policy challenges.
  • Companies are increasingly using AI to protect their businesses against cyberattacks. AI tools can track cyber threats and help predict when digital and data security attacks might occur, enabling companies to ramp up defenses.
  • AI can play an important role in advancing and scaling up climate solutions because of its capacity to gather and analyze large, complex datasets around carbon emissions and climate impacts.

Legislators and policymakers are paying attention

There is urgency to set regulations and guidelines before misuse of AI technologies causes societal harm.

The White House issued a set of AI guidelines last October as a blueprint for federal legislation or regulation. The National Institute of Standards and Technology (NIST) has issued a voluntary framework designed to help U.S. organizations manage AI risks that can impact individuals, organizations and society. The framework is focused on a set of trustworthiness considerations, such as explainability and the mitigation of harmful bias in AI products, services and systems.

Congress has held numerous hearings on AI. But there is no comprehensive federal legislation on AI, and a divided Congress is unlikely to align around an AI ethics bill this year.

Instead, the U.S. has a patchwork of current and proposed AI regulatory frameworks, and several states may consider AI regulation — most notably the states that have been first movers in enacting data privacy laws.

Issues for policymakers

In the absence of regulations, are self-policing and voluntary standards sufficient, or does the U.S. need new formal rules and laws designed to protect consumers and society from abuse and harm?

For example:

  • What consumer rights are essential, such as the ability of consumers to opt out of AI-powered algorithms that can make decisions in areas such as lending, housing, health services, employment and education?
  • What level of AI transparency should be required, including the obligations of companies to tell consumers and others how AI-driven technologies are guiding their business operations and decisions?
  • What level of AI governance should be required, such as guidelines and rules for companies to minimize harm to consumers, eliminate biases, and prevent mis- and disinformation? What penalties should be imposed for noncompliance?
  • How should new laws and regulations be written to ensure flexibility to cover over-the-horizon AI applications and technologies?
  • How should governments balance national security interests with civil liberties — and what limits should be codified into law?
  • Because other countries (both allies and adversaries) are developing AI technologies, what diplomatic engagement is necessary to protect U.S. national security and economic interests?

While Congress may consider legislation this year, the Federal Trade Commission (FTC) already has some authority to act. The FTC is focused on these areas:

  • Making sure AI applications are using representative data sets and do not exclude particular populations.
  • Requiring that AI be tested before deployment — and periodically thereafter — to confirm it works as intended and does not create discriminatory or biased outcomes.
  • Ensuring AI tools and outcomes are explainable to consumers and regulators.
  • Creating accountability and governance mechanisms to document the fair and responsible development, deployment and use of AI.

Author

Washington DC | International consultant to governments, multinational corporations and foundations on global economic, trade, development and climate issues.
