Association of Southeast Asian Nations (ASEAN)

Principles, Studies, & Recommendations

General

A practical guide for organizations in the region that wish to design, develop, and deploy traditional AI technologies in commercial and non-military or dual-use applications. The guide focuses on encouraging alignment within ASEAN and fostering the interoperability of AI frameworks across jurisdictions. It also includes recommendations on national-level and regional-level initiatives that governments in the region can consider implementing to design, develop, and deploy AI systems responsibly.

Responsible Government Organizations

Council of Europe (CoE)

Laws and Regulations

A legally binding treaty that sets out a comprehensive legal framework to ensure that AI systems respect human rights, democracy, and the rule of law. The convention establishes transparency and oversight requirements tailored to specific contexts and risks, including identifying content generated by AI systems. The convention opened for signature by CoE Member States on September 5, 2024.

Principles, Studies, & Recommendations

Media/Journalism

The Guidelines outline the responsibilities of AI technology providers in the news media sector.

Responsible Government Organizations

G-7

Principles, Studies, & Recommendations

General
G-7 International Guiding Principles and Code of Conduct for Organizations Developing Advanced Artificial Intelligence (AI) Systems (Adopted October 30, 2023)

The Guiding Principles set forth 11 principles that organizations developing advanced AI systems should follow. The Code of Conduct, directed at academia, civil society, and the public and private sectors, is intended to promote safe, secure, and trustworthy AI worldwide by providing voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems. Organizations are encouraged to apply these actions to all stages of the lifecycle to cover, when and as applicable, the design, development, deployment, and use of advanced AI systems.

Hiroshima AI Process

IEEE Standards Association

Principles, Studies, & Recommendations

This recommended practice specifies governance criteria, such as safety, transparency, accountability, responsibility, and minimization of bias, as well as process steps for effective implementation, performance auditing, training, and compliance in the development or use of artificial intelligence within organizations.

Responsible Organizations

International Organization for Standardization (ISO)

Principles, Studies, & Recommendations

An international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.

Guidance on how organizations that develop, produce, deploy, or use products, systems, and services that utilize AI can manage risk specifically related to AI. The guidance also aims to assist organizations in integrating risk management into their AI-related activities and functions, and it describes processes for the effective implementation and integration of AI risk management.

Responsible Organizations

Organization for Economic Cooperation and Development (OECD)

Principles, Studies, & Recommendations

The Recommendation contains five high-level values-based principles and five recommendations for national policies and international co-operation. It also proposes a common understanding of key terms, such as “AI system” and “AI actors”.

This OECD paper offers an overview of the AI language model and NLP landscape with current and emerging policy responses from around the world. It explores the basic building blocks of language models from a technical perspective using the OECD Framework for the Classification of AI Systems. The paper also presents policy considerations through the lens of the OECD AI Principles.

This OECD tool is designed to help policy makers, regulators, legislators and others characterize AI systems deployed in specific contexts. It can be applied to the widest range of AI systems across the following dimensions: People & Planet; Economic Context; Data & Input; AI Model; and Task & Output.

Responsible Government Organizations

United Nations

Principles, Studies, & Recommendations

The objectives of the Framework are to:

  • provide a universal framework of values, principles and actions to guide Member States in the formulation of their legislation, policies or other instruments regarding AI, consistent with international law;
  • guide the actions of individuals, groups, communities, institutions and private sector companies to ensure the embedding of ethics in all stages of the AI system life cycle;
  • protect, promote and respect human rights and fundamental freedoms, human dignity and equality, including gender equality; safeguard the interests of present and future generations; preserve the environment, biodiversity and ecosystems; and respect cultural diversity in all stages of the AI system life cycle;
  • foster multi-stakeholder, multidisciplinary and pluralistic dialogue and consensus building about ethical issues relating to AI systems;
  • promote equitable access to developments and knowledge in the field of AI and the sharing of benefits, with particular attention to the needs and contributions of low- and middle-income countries (LMICs), including least developed countries (LDCs), landlocked developing countries (LLDCs), and small island developing states (SIDS).

Responsible Government Organizations

Other

Principles, Studies, & Recommendations

Healthcare

Ten guiding principles jointly issued by the U.S. Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) to help promote safe, effective, and high-quality medical devices that use artificial intelligence and machine learning (AI/ML).

General

These Guidelines, published by the UK National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and other international government agencies, are aimed primarily at providers of AI systems, whether those systems are based on models hosted by the organization itself or make use of external application programming interfaces (APIs). The Guidelines suggest considerations and mitigations in four key areas to help reduce the overall risk to an organizational AI system development process: secure design, secure development, secure deployment, and secure operation and maintenance. The Guidelines follow a ‘secure by default’ approach and are aligned closely with practices defined in the UK NCSC’s Secure development and deployment guidance, NIST’s Secure Software Development Framework, and ‘secure by design’ principles published by CISA, the NCSC, and international cyber agencies.

This report expands upon the ‘secure deployment’ and ‘secure operation and maintenance’ sections of the Guidelines for secure AI system development and incorporates mitigation considerations from Engaging with Artificial Intelligence (AI). It is for organizations deploying and operating AI systems designed and developed by another entity.

Responsible Government Organizations