The OECD has published the OECD Due Diligence Guidance for Responsible AI (19 February 2026), which provides guidance for companies on applying the OECD MNE Guidelines on Responsible Business Conduct and the OECD AI Principles when developing or using AI tools.
Human Level’s Take
- AI governance is about applying human rights and environmental due diligence to AI development and use. It is not just about technical risk management. The OECD Guidance makes clear that AI risks should be addressed through the lens of responsible business conduct and risk-based due diligence across the full AI value chain: (1) suppliers of AI inputs, (2) developers of AI systems, and (3) companies using AI systems.
- At a time when there is a proliferation of laws and standards on responsible AI use and risk management, the OECD aims to bring these frameworks under the umbrella of responsible business conduct – helping teams navigate the growing number of parallel expectations, and promoting policy coherence and interoperability across jurisdictions.
- Examples for implementation are listed for each step of the due diligence process, illustrating specific ways to implement due diligence when it comes to AI development or use – and along the value chain. Interestingly, these are presented as examples and not as recommendations – recognising that the sector and risk management practices are evolving rapidly.
- Stakeholder engagement is central to the guidance: throughout the due diligence process there are suggestions for companies to engage with workers, civil society and other potentially affected rightsholders along the AI lifecycle.
- The Guidance also frames companies’ responsibilities within a broader ecosystem: investors, financial institutions, governments and other actors adjacent to the AI value chain also have a role to play in encouraging and reinforcing responsible practices.
- At Human Level, we will publish a short briefing note in the coming weeks focused specifically on how companies can apply due diligence to the AI systems used in their operations, procurement and workforce management!
Some key takeaways:
- Due diligence and responsible business conduct apply to AI: The Guidance builds on the OECD MNE Guidelines to guide companies on how to apply risk-based due diligence to the development and use of AI systems – providing recommendations for how companies in the AI value chain can understand and address the human rights and environmental risks related to the development and use of AI. It also draws on existing international and national AI-specific risk management frameworks, seeking to create policy coherence and interoperability between these different expectations. In it, the OECD makes clear that achieving responsible AI is not limited to AI developers (Group 1 in the Guidance). Data suppliers, cloud and compute providers, hardware manufacturers, financial actors (Group 2) and companies using the AI systems (Group 3) are also responsible for identifying, preventing and mitigating risks to people and planet. Companies across the value chain who want to comply with responsible business conduct expectations will need to identify and address adverse impacts that they cause or contribute to through their own AI use or development, as well as those they are directly linked to through business relationships across the AI value chain. Importantly, the Guidance highlights that engagement with workers, workers’ representatives and trade unions, affected communities and other stakeholders who could be impacted is essential for effective due diligence.
- Why apply due diligence to AI development or use: The OECD frames AI due diligence as both a responsibility and a strategic advantage for companies in the AI value chain. Specifically, applying responsible business conduct (RBC)-based due diligence to AI helps companies by: (1) bridging and closing gaps in existing AI frameworks, particularly related to stakeholder engagement and access to remedy; (2) building trust with investors, customers, regulators and the public; (3) strengthening market access, as AI risk management is increasingly embedded in procurement and investment decisions; (4) reducing long-term costs through early compliance with evolving regulations across jurisdictions; and (5) supporting resilient AI value chains, reducing exposure to reputational impacts, data risks and governance failures.
- How to conduct due diligence for AI-related risks: The Guidance applies the OECD’s six-step due diligence framework to AI systems, with tailored considerations for suppliers, developers and users:
- Step 1 – Embed responsible business conduct into policies and management systems: Integrating AI-related human rights and risk considerations into policy commitments, governance structures, oversight mechanisms, procurement, IT and development processes. Assigning responsibility for AI risk management to cross-functional teams and communicating expectations to business partners are two key examples of how to do this.
- Step 2 – Identify and assess actual and potential adverse impacts: Conducting human rights and environmental risk assessment and prioritisation based on: (1) the interaction of AI systems; (2) the nature of the AI system, use-case or product that it is used in; (3) the type of user or use of the AI system and the business objectives of the user; (4) sources of data inputs, software, physical extension, and human-in-the-loop aspects of the system; (5) the geographic/socio-economic/political context; (6) the competency and scientific validity of the AI system; and (7) foreseeable misuse or unintended consequences. Where resources are limited, companies may use escalation mechanisms to prioritise high-risk systems for deeper assessment.
- Step 3 – Cease, prevent and mitigate adverse impacts: Stopping activities causing harm and implementing mitigation actions. These may include – among others – improving the transparency and traceability of data and development decisions, enabling the explainability of AI outputs (making the reasons behind a decision knowable), engaging with affected stakeholders, updating systems post-deployment and using leverage with business partners.
- Step 4 – Track implementation and results: Monitoring the effectiveness of mitigation measures, reassessing evolving risks regularly, assessing the effectiveness of stakeholder engagement processes, and integrating lessons learned.
- Step 5 – Communicate how impacts are addressed: Disclosing relevant information about AI-related risks and the due diligence measures taken, including significant impacts identified, mitigation measures in place, and system capabilities and limitations.
- Step 6 – Provide for or cooperate in remediation: Where companies cause or contribute to harm, providing or cooperating in remedy, consistent with the MNE Guidelines. The OECD National Contact Points (NCPs) remain the accountability mechanism for alleged breaches of the MNE Guidelines, including in AI contexts.