How global sustainability frameworks can inform AI governance

Anna Triponel

January 9, 2026

Authors from Dunstan Hope Consulting and Google recently published the paper “Ten insights from other domains that inform responsible AI frameworks” (October 2025). The authors argue that responsible AI governance should build on established human rights and responsible business frameworks and practices.

Human Level’s Take
  • As sustainability and human rights professionals begin to dip their toes into AI governance and management of AI-related risks and impacts, this paper is a timely reminder that many of the tools and frameworks already used to manage human rights, social, and environmental risks are directly applicable to AI.
  • The authors emphasise the value of grounding responsible AI development and use in international human rights standards and responsible business conduct frameworks, including the UN Guiding Principles (UNGPs) and the OECD Guidelines for Multinational Enterprises (MNE Guidelines). One key benefit is that these frameworks are already widely understood, robust, and in use across jurisdictions and sectors.
  • The paper warns against relying only on new AI governance frameworks to manage AI risks, since this could create inefficient, duplicative and parallel risk management pathways that are disconnected from existing risk management systems and responsible business conduct frameworks. In short: there is no need to reinvent the wheel. That said, AI risk management does require some distinct approaches to preventing and mitigating risks.
  • The authors make ten key recommendations for how to use existing responsible business frameworks and practices to manage AI-related risks. (1) Human rights instruments can and should be used as a social risk taxonomy – there’s no need for separate risk taxonomies for impacts on people. (2) Frameworks for responsible business conduct can be used as the basis for AI risk management. (3) High-level risk management processes and technical requirements serve different purposes; practitioners should know when to use each. (4) Collaboration across companies and systems is essential, as many AI risks are systemic. (5) Risk assessments must be ongoing and embedded into technology development and deployment, not a one-off exercise. (6) AI reporting and disclosure can evolve in alignment with sustainability and financial reporting. (7) Meaningful stakeholder engagement requires expertise and investment, especially since the scope of impacted stakeholders could be large. (8) Stakeholder engagement should not be overly constrained by regulation, and companies should be allowed flexibility in their approach. (9) AI requires adaptive, real-time risk management approaches, as is the case with cybersecurity. And (10) AI opportunities for social development can be pursued alongside risk management.
  • For sustainability and human rights professionals, this paper is a reassuring signal. Much of what you already know (governance, due diligence, stakeholder engagement, remediation and reporting) is directly applicable to risks to people associated with AI development or use. At the same time, the work still needs to be done.

Some key takeaways:

  • Why integrate AI into existing risk management approaches? As AI rapidly evolves, the ecosystem of AI-related principles, guidelines, standards, and regulations is expanding just as fast. Governments, multilateral institutions, industry groups, individual companies and academics are all producing and publishing frameworks to ensure the responsible development and use of AI systems. These include, for example, the European Union’s AI Act and Digital Services Act, the OECD AI Principles, the U.S. NIST AI Risk Management Framework and the UN Global Digital Compact. Many of these frameworks recognise that AI can lead to new human rights and social risks, and propose risk management principles and requirements. However, the authors argue, AI also shares similarities with other technologies and business activities. Therefore, the compliance, due diligence, risk assessment, remediation and disclosure expectations that apply to other types of human rights and social risks and impacts – captured in human rights and responsible business conduct frameworks – can also be applied to managing AI risks.
  • How can AI learn from other risk management frameworks? The authors highlight that the value of using well-established international human rights and responsible business frameworks is that they already offer a shared terminology that both developers of AI systems (usually tech companies) and deployers (usually non-tech companies) use or can readily adopt. In addition, these frameworks operate across sectors and regions, their methodologies are general and can be tailored to the AI space, and they already integrate meaningful stakeholder engagement expectations – all beneficial to AI development and deployment. The authors also argue that using these established standards can prevent fragmentation of company efforts and reduce the number of “similar-but-slightly-different” regulatory and voluntary approaches to managing human rights risks.
  • Ten insights for responsible AI frameworks. The paper identifies ten practical insights that responsible AI frameworks and companies can integrate as part of their AI governance:
    1. International human rights instruments provide a reliable and durable risk taxonomy. International recognition and decades of application and interpretation make human rights a credible, comprehensive foundation for identifying and assessing AI-related risks to people. In addition, they argue that human rights can be framed as being inclusive of (rather than separate from) other social outcomes, such as democracy, rule of law, and public health.
    2. Foundations for AI risk management can be borrowed from frameworks for responsible business conduct. Frameworks such as the UN Guiding Principles and the OECD Guidelines can increase the efficiency, effectiveness and interoperability of AI risk management systems and support collaboration – allowing companies to focus resources on strategic priorities.
    3. Distinguish between high-level processes and detailed requirements. To manage AI-related risks, both broad risk management principles and detailed requirements for technology-specific risks are needed. Companies will need to know when to apply general frameworks and when to apply specific ones.
    4. Collaboration and comparison across systems are required to address risk successfully. Many AI risks are systemic and cannot be mitigated by any single actor. Shared responsibility and complementary roles – especially between developers and users – are needed to address these risks. The paper also highlights the importance of multi-stakeholder initiatives in this space.
    5. Risk assessment and due diligence should be an ongoing priority, not a moment-in-time activity. Due diligence should be embedded into product development and updated over time, instead of being a one-off exercise. Risk management should also be supported by feedback mechanisms from users and access to grievance mechanisms for remedy.
    6. Reporting and disclosure of AI risks by companies are evolving. While reporting processes and indicators for AI are still maturing, lessons can be drawn from financial, non-financial and sustainability reporting about how to develop meaningful indicators and report on risks in a way that integrates quantitative and qualitative information.
    7. Meaningful and effective stakeholder engagement requires significant investment in scale and specialist skills. To succeed, companies will need to invest in those skills, gather interdisciplinary expertise and rely on public processes and expert dialogue – as well as on closed-door and direct engagement.
    8. Stakeholder engagement strategies should not be underestimated or overly constrained by regulation. Flexible, multi-track approaches to stakeholder engagement are needed, especially given the global reach of AI systems. Regulation should continue to allow that flexibility.
    9. Common assumptions about risk management should be challenged. Although general sustainability frameworks do apply to AI, some adjustments are needed. Real-time risk management, as practised in cybersecurity, is also needed, along with careful management of ethical trade-offs, such as those between the rights to freedom of expression, privacy and safety.
    10. The risks and opportunities of AI systems for people and society can be addressed together. While, as we know from the UNGPs, social benefits cannot offset harms, responsible AI management should not be disconnected from the goal of supporting human rights and sustainable development.
