Existing and Upcoming AI Regulations in the USA


The rapid pace of technological advancement, especially in artificial intelligence (AI), presents significant challenges for global regulators. In the United States, efforts to regulate AI have been slow, even as the technology evolves quickly across various industries.

Current AI Regulation Landscape

In the U.S., AI is governed by a mix of federal and state regulation, industry self-regulation, and court decisions. Each of these approaches has limitations: industry self-regulation can create conflicts of interest, overlapping state regulations can cause compliance burdens, and courts are constrained by existing laws when resolving AI-related disputes.

The U.S. leverages existing regulatory tools and is developing new regulations to manage AI risks. Companies operating in or doing business with the U.S. need to understand the country’s AI regulatory framework, which includes actions by the executive branch and legislative activity in Congress.

Export Controls on AI

AI has significant military and intelligence applications, such as autonomous vehicles and data analysis. Currently, U.S. export controls do not target AI as a broad category. Instead, they regulate components such as semiconductors, technology for embedding AI, and manufacturing equipment, most of which are controlled under the Export Administration Regulations (EAR); items not specifically enumerated on the Commerce Control List fall into the catch-all, less restrictive EAR99 classification. Despite AI’s commercial potential, concerns about its misuse have prompted discussions about stricter controls, potentially under the International Traffic in Arms Regulations (ITAR). Defining AI precisely, however, remains a challenge for regulators.

Outbound Investment in AI

In August 2023, President Biden issued Executive Order (EO) 14105, directing the Treasury Department to restrict certain U.S. investments in countries of concern, focusing on sensitive technologies such as AI. The implementing regulations, expected by 2025, aim to prevent U.S. capital and expertise from aiding foreign competitors in critical areas.

Separately, the Committee on Foreign Investment in the United States (CFIUS) reviews foreign investments in U.S. businesses, including those involving AI. Investments in critical technology or sensitive personal data may trigger mandatory CFIUS filings, especially after President Biden’s EO 14083, which directed CFIUS to emphasize key technologies, including AI, in its risk assessments.

Executive Orders on AI

Several executive orders have been issued to manage AI development:

  • EO 14110 (October 2023): Promotes safe and trustworthy AI development and requires government agencies to develop AI regulations, engage with the private sector, and support innovation.
  • EO 14117 (February 2024): Restricts access to Americans’ sensitive data by countries of concern, highlighting the national security risks posed by AI.

Congressional Actions on AI

Congress is also active in AI regulation. The CHIPS and Science Act supports high-tech policies, and numerous bipartisan hearings have addressed AI issues such as privacy and national security. Both the House and Senate have formed task forces and committees to investigate and develop AI-related legislation.

Upcoming Presidential Election and AI

AI regulation could change significantly depending on the outcome of the upcoming presidential election. Former President Donald Trump has pledged to overturn current AI executive orders if reelected, creating uncertainty about future national security and industrial policies.

Key Takeaways

  • The U.S. is using existing tools like export controls and CFIUS to manage AI risks.
  • New regulations are being developed to address outbound investments and data protection.
  • A coordinated government approach is being implemented to define roles and responsibilities for AI regulation.
  • Congressional activities are increasing to investigate and legislate on AI issues.

Understanding these regulatory developments is crucial for companies involved in AI to navigate the evolving landscape in the United States.