Summary

Responsible AI in practice

Anna Triponel

April 24, 2026

Thomson Reuters Foundation and UNESCO released Responsible AI in Practice: 2025 Global Insights from the AI Company Data Initiative (March 2026), drawing on the world’s largest dataset on corporate AI adoption (100,000 data points across 2,972 companies, 11 sectors and six regions).

Human Level’s Take:
  • On the ‘must do’ list for any company developing or using AI? Ensuring AI systems are developed and used responsibly, to help prevent and mitigate human rights risks and impacts linked to those systems and products. This makes closing the current gap between commitments to responsible AI and their implementation essential.
  • However, Thomson Reuters Foundation and UNESCO point to a disconnect between companies’ high-level strategies and operational practices, suggesting that companies need to translate principles into concrete governance measures, including clear accountability structures and operational policies.
  • In contexts where binding laws remain limited or uneven, the findings of the report indicate that corporate self-regulation can play a significant role. This reinforces the importance of adopting recognised voluntary AI frameworks and embedding them into internal governance, risk management and decision-making processes.
  • The analysis also shows that AI maturity varies considerably across sectors, with some industries like tech, communications and finance demonstrating more advanced adoption and governance practices. This creates an opportunity for business leaders to learn from established approaches, while also drawing on identified risks and impacts to inform more responsible implementation.
  • At the same time, human rights and environmental considerations are not yet consistently integrated into AI governance and risk management. This points to the need for companies to more explicitly incorporate human rights due diligence and environmental impact assessments alongside technical and operational risk processes.
  • Differences in workforce preparedness are another key finding, with significant variation across sectors. The data suggests that companies can strengthen readiness by investing in targeted re-skilling and up-skilling efforts aligned to their specific context and level of AI adoption.
  • The report lays out a checklist of considerations for investors to assess responsible AI adoption, which can also serve as a roadmap for companies. This includes key steps like implementing formal policies with strong leadership oversight; leveraging human rights impact assessments to detect risks early on; strengthening transparency on how AI systems are used in practice; and improving governance and assurance of data quality.
  • Also essential are workforce protections, like upskilling, reskilling and assessing risks to workers’ rights from AI use. Oversight also needs to be baked into the implementation process throughout the deployment lifecycle.

Some key takeaways:

  • Adoption is widespread but uneven across sectors and regions: The report finds that AI adoption is widespread across the 2,972 assessed companies, but uneven in both scale and maturity. Sectors such as information technology, financial services and telecommunications show higher levels of adoption and more developed governance signals, while sectors including energy, materials and some consumer-facing industries demonstrate lower levels of disclosed AI integration. Regionally, companies in North America and Europe tend to report more advanced AI strategies and governance practices, whereas firms in regions such as Africa and parts of Latin America and Asia-Pacific show lower levels of disclosure and implementation. These differences highlight that while AI is being adopted globally, the depth of deployment and supporting governance structures varies considerably across sectors and regions.
  • Strategy outpaces implementation, leaving a widening information gap: The report identifies a widening information and transparency gap between the rapid deployment of AI and the visibility of how it is governed in practice. While many companies are adopting AI and publicly disclosing high-level strategies (44% report having an AI strategy), there is limited evidence of how these are operationalised, with only 31% able to demonstrate dedicated governance resources and just 2.7% maintaining tools such as AI model registries. At the same time, nearly 90% of companies have not publicly committed to a recognised AI governance framework, only 13% report policies ensuring human oversight, and just 2.3% provide formal mechanisms for complaints or redress. Overall, the findings indicate that while AI adoption and high-level disclosure are increasing, the systems needed to make governance visible, accountable and measurable are not developing at the same pace.
  • What responsible AI use looks like: The report highlights that responsible AI is becoming increasingly urgent as AI systems move rapidly from experimentation to widespread deployment across business functions. As companies scale AI, its impacts are no longer confined to technical performance: they increasingly shape community outcomes and influence access to jobs, services and resources, while raising concerns about inequality and the distribution of AI’s benefits and risks. This dynamic makes responsible AI a current operational priority rather than a future consideration, requiring companies to integrate risk management, transparency and accountability into existing business processes as AI adoption continues to expand. What are companies expected to do? The report offers guidance for investors engaging with companies, which doubles as a roadmap for companies themselves. This includes steps like putting in place a formal AI strategy or guidelines with leadership oversight and dedicated implementation resources, as well as establishing concrete audit and traceability processes, user feedback mechanisms, and human rights impact assessments that can detect risks before rollout. It also includes strengthening transparency around how AI systems are used in practice, clarifying who is accountable, how decisions are made, and how issues are escalated and remediated. Other key action areas include improving governance of data quality and vendors, workforce protections (upskilling, reskilling and ensuring AI tools don’t infringe on workers’ rights), and oversight structures to ensure that AI risks are actively managed throughout the deployment lifecycle.
