As critical industry sectors such as energy, financial services, and healthcare continue to use AI in new ways, the U.S. federal government is stepping in with guidance to enhance AI-related safety and security. The Department of Homeland Security (DHS) in November released “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure,” a voluntary framework developed in collaboration with industry leaders that provides tailored recommendations for key players in the AI ecosystem to protect critical infrastructure (the “DHS Framework”).
Below, we provide a summary of the DHS Framework, a breakdown of how it fits into the landscape of existing AI frameworks, and our analysis of its outlook under the second Trump administration.
The DHS Framework provides recommendations for each layer of the AI supply chain to ensure that AI is deployed safely and securely in U.S. critical infrastructure. The DHS Framework was created in consultation with the Artificial Intelligence Safety and Security Board, an advisory committee established by DHS Secretary Alejandro N. Mayorkas in response to President Biden’s 2023 executive order (EO) on the development and use of AI. The board’s members include the CEOs of leading technology and critical infrastructure companies, as well as members of civil society.
Within the U.S.: The DHS Framework is the first AI framework specific to U.S. critical infrastructure. In 2023, the National Institute of Standards and Technology (NIST) published its AI Risk Management Framework, which is a higher-level educational tool, meant to be used by all kinds of organizations to frame the risks involved in their use of AI and to put in place governance mechanisms for AI. The DHS Framework builds on this by setting out specific measures that key entities should take to protect critical infrastructure in relation to AI.
Within the EU: The EU has taken a more aggressive stance on AI governance with the EU AI Act, which entered into force in August 2024 and will be fully applicable within another two years. The EU AI Act assigns AI use cases to risk-based categories (unacceptable, high, limited, and minimal risk), setting specific requirements for each. High-risk applications, which include AI systems deployed in critical infrastructure, are subject to strict obligations before they can be put on the market, such as ensuring adequate risk assessment and mitigation systems, traceability of results, and appropriate human oversight measures. While the DHS Framework places a similarly high emphasis on the risks involved and the need for safety measures in the use of AI in critical infrastructure, it is voluntary. Because the U.S. is home to so many key players in the AI industry, the federal government has generally taken a lighter-touch approach to regulating AI so far, opting in many cases for collaborative industry guidelines.
The DHS Framework identifies three main categories of AI-related safety and security vulnerabilities in critical infrastructure: (1) attacks using AI; (2) attacks targeting AI systems; and (3) AI design and implementation failures. To mitigate these vulnerabilities, the framework assigns voluntary responsibilities for the safe and secure use of AI in U.S. critical infrastructure across five key roles:

- Cloud and compute infrastructure providers
- AI developers
- Critical infrastructure owners and operators
- Civil society
- Public sector entities
The DHS Framework evaluates these roles across five responsibility areas: securing environments, driving responsible model and system design, implementing data governance, ensuring safe and secure deployment, and monitoring performance and impact for critical infrastructure. Finally, the DHS Framework recommends actions to enhance safety and security for each of the key stakeholders involved in supporting the development and deployment of AI in critical infrastructure, and for many of these recommendations it cites technical resources that provide further specifics on implementation.
The DHS Framework references the same 16 sectors of the economy that DHS’s Cybersecurity and Infrastructure Security Agency (CISA) defined as critical infrastructure when promulgating cyber incident reporting rules earlier this year, including the communications sector, energy sector, and financial services sector. By providing recommendations for such a broad group of organizations, DHS is making clear that it considers a large portion of the private sector to be “critical infrastructure,” and is encouraging these entities to address risks accordingly.
Protecting U.S. critical infrastructure by securing supply chains from foreign participation was an area of focus in President-elect Trump’s first term, and securing strategic independence from China and bringing critical supply chains to the U.S. were part of Trump’s 2024 platform. Accordingly, the Trump administration may elect to carry these efforts forward.
More importantly, the DHS Framework provides the private sector with actionable recommendations that organizations can choose to adopt directly. Industry participation in the development of the DHS Framework shows a desire on the private sector’s part to collaborate on standards and to have AI safety measures in place. Whether through continued government directives or with the private sector picking up the baton on its own, the DHS Framework’s recommendations may become industry standard.