The United States has issued regulations, recommendations, and guidance on Artificial Intelligence (AI). Companies subject to the laws of the United States should be familiar with all relevant AI-related regulations, recommendations, and guidance, including those listed below.

State Regulations, Recommendations, Guidance, and Other Resources

For laws, regulations, and/or other resources issued by U.S. state government authorities, click on a state in the map to view individual state pages.

Federal Regulations, Recommendations, Guidance, and Other Resources

The President’s Executive Order provides for a coordinated, federal government-wide approach to governing the development and use of AI safely and responsibly.

The Office of Management and Budget (OMB) memorandum requires agencies to follow minimum practices when using safety-impacting and rights-impacting AI, and establishes a series of recommendations for managing AI risks in the context of federal procurement.

The Office of Management and Budget (OMB) memorandum requires agencies to create or update acquisition policies, procedures, and practices to reflect new responsibilities and governance for AI, as established by the OMB.

The voluntary AI RMF is designed to equip AI actors with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time.

The guidance applies to the development and deployment of Generative AI (“GenAI”) technologies, including large language models and cloud-based services. It provides recommendations and obligations for organizations involved in designing, developing, using, or evaluating AI systems to manage risks and ensure trustworthiness.

The Guidelines focus on secure software development practices for GenAI and dual-use foundation models and are applicable to AI model producers, AI system producers, and AI system acquirers, addressing the entire AI model development lifecycle, including data sourcing, design, training, fine-tuning, evaluation, and integration into software systems. The Guidelines build on the Secure Software Development Framework version 1.1 and provide recommendations and considerations, such as securing code storage, managing model versioning and lineage, and clarifying shared responsibilities among organizations.

The Guidance applies to all sectors involved in AI-related activities, including standards development organizations, industry, academia, civil society, and foreign governments. It covers AI standards across all scopes, both horizontal (cross-sectoral) and vertical (sector-specific). The Guidance recommends several actions, including engaging in standards work, encouraging diverse stakeholder participation, and promoting global alignment on AI standards. The Guidance prioritizes scientifically sound, accessible AI standards that reflect the needs of diverse global stakeholders.

Guidance on practitioner use of AI
Inventorship 
Subject matter eligibility
Compliance with 35 U.S.C. 112

Guidance on disclosure requirements for computer-implemented functional claim limitations. 

General guidance for examining means-plus-function (35 U.S.C. 112(f)) limitations. MPEP 2181(II)(B) provides guidance on the description necessary to support a claim limitation that invokes 35 U.S.C. 112(f).

The guidance discusses functional limitations that do not invoke 35 U.S.C. 112(f).

Artificial Intelligence Patent Dataset (AIPD)

United States patents and pre-grant publications that include AI

PTAB and USPTO Petition Decisions Pertaining To AI

This guidance provides background on tenant screening companies, explains how the Fair Housing Act applies to both housing providers and tenant screening companies, describes common fair housing issues, and suggests how to avoid discriminatory screenings. This guidance covers screening practices with varying levels of human involvement and automation, including machine learning and other forms of AI.

The guidance explains how the Fair Housing Act applies to the advertising of housing, credit, and other real estate-related transactions through digital platforms. In particular, it addresses the increasingly common use of automated systems, such as algorithmic processes and AI, to facilitate advertisement targeting and delivery.

This report provides insights into the current state of AI-related cybersecurity and fraud risks in the financial services sector, and best practice recommendations for managing those risks.

This report follows Treasury's issuance of its 2024 Request for Information on the Uses, Opportunities, and Risks of AI in Financial Services. The report highlights increasing AI use throughout the financial sector and underscores the potential for AI to broaden opportunities while amplifying certain risks, including those related to data privacy, bias, and third-party providers.

This Office regularly posts articles examining issues involving technology, consumer protection, and competition.

The rule defines it as an unfair or deceptive act to materially misrepresent that a reviewer exists, which covers AI-generated fake reviews. Among other things, the rule prohibits selling or purchasing fake consumer reviews or testimonials, buying positive or negative consumer reviews, certain insiders creating consumer reviews or testimonials without clearly disclosing their relationships, creating a company-controlled review website that falsely purports to provide independent reviews, certain review suppression practices, and selling or purchasing fake indicators of social media influence.