Summary

Artificial Intelligence (AI) and Human Rights

Anna Triponel

July 11, 2025

The Working Group on the issue of human rights and transnational corporations and other business enterprises (Working Group) published its report on how the UN Guiding Principles on Business and Human Rights (UNGPs) apply to the procurement and deployment of AI systems by governments and companies.

Human Level’s Take:
  • As AI is increasingly used in business operations, products and services - from customer support and inventory management to targeted advertising and research and development - it brings with it serious human rights implications. Yet many companies still lack the processes needed to identify, prevent, and address the human rights risks and impacts associated with AI use. For example, AI-powered monitoring in workplaces can increase pressure on workers and reduce their ability to challenge opaque labour practices
  • The UN Guiding Principles on Business and Human Rights provide a framework for companies to procure and deploy AI in a rights-respecting manner
  • So what can companies do to ensure they respect human rights in relation to AI procurement and deployment?
  • The Working Group shares a number of key actions: (1) commit at the highest level, including adopting a human rights policy specific to AI; (2) conduct robust human rights due diligence, including stakeholder engagement, to identify and address actual and potential impacts; (3) encourage peer learning and cross-sector collaboration to share best practices and strategies; (4) ensure transparency and accountability across all AI processes, especially in data handling; (5) integrate human rights criteria into procurement processes for AI systems; (6) build internal capacity and awareness, involving experts, civil society and academia; (7) inform affected individuals clearly about AI use and obtain prior, informed consent; (8) provide access to grievance mechanisms for timely and effective remedies; and (9) use leverage in value chains to promote rights-respecting practices throughout the AI lifecycle.

Some key takeaways:

  • The interconnections between AI and human rights: Companies are increasingly embedding AI into their day-to-day operations, products and services, including in customer support, inventory management and value chain optimisation, robotic process automation, targeted advertising, and research and development. AI is also being deployed in human resources, for example in the recruitment and management of employees. More recently, it is being used to analyse risks and regulatory compliance and to conduct human rights due diligence processes. However, given the rapid uptake of new technologies, many companies are procuring and deploying AI without adequate mechanisms to identify, prevent, mitigate and account for how they address adverse human rights impacts. For instance, workers operating in environments with AI-driven monitoring systems may experience constant pressure to meet performance targets, struggle to challenge rigid and opaque labour processes imposed by algorithms, and suffer a loss of autonomy. Furthermore, AI consumes significant amounts of water and energy, as well as, increasingly, critical minerals, causing environmental harm and affecting the right to a clean, healthy and sustainable environment.
  • The corporate responsibility to respect human rights: Companies have a responsibility to respect human rights in relation to the procurement and deployment of AI, in alignment with the UNGPs. This includes enabling or providing access to remedy for those affected by adverse human rights impacts. The Working Group, however, notes that a key challenge to remedy is a lack of transparency regarding the procurement and deployment of AI systems: without awareness that an AI system has been deployed and of its potential human rights impacts, individuals and communities may struggle to understand how their rights are affected. Other challenges include the difficulty of quantifying and documenting harm, and existing access-to-remedy mechanisms lacking adequate resources and enforcement powers. For instance, the AI Act of the European Union provides for the right to lodge complaints with the European AI Office, but it remains to be seen how effective compliance and accountability will be. The Working Group recognises that non-State-based grievance mechanisms are key to ensuring access to remedy. Companies must establish or participate in effective operational-level grievance mechanisms that provide affected individuals and communities with avenues for early and direct resolution of grievances related to human rights impacts. Where companies have existing operational-level grievance mechanisms, these frameworks can be adapted to specifically address the context of AI systems.

  • What can companies do? To meet their responsibility to respect human rights in AI procurement and deployment, the Working Group recommends that companies:
  1. demonstrate their commitment to respect human rights at the highest level, including by adopting a human rights policy.
  2. identify the AI systems being used and establish human rights policies for AI deployment, including commitments to and the implementation of the UNGPs.
  3. conduct thorough human rights due diligence to identify, prevent, mitigate and account for potential and actual human rights impacts of the AI systems that they procure and/or deploy, including through meaningful stakeholder engagement. This also involves carrying out regular assessments and updates to ensure that human rights due diligence is ongoing and responsive to emerging potential adverse impacts throughout the lifecycle of the AI systems.
  4. foster peer-learning spaces and cross-sectoral cooperation where businesses and other stakeholders can share best practices, lessons learned and strategies for integrating human rights considerations into AI procurement and deployment.
  5. ensure transparency and accountability in all AI-related processes, particularly in how data are collected, processed, used and disposed of during deployment.
  6. implement robust data-protection mechanisms, ensure that data are collected in a transparent and non-discriminatory manner, and ensure that prior informed consent is obtained for all data storage and usage.
  7. promote cross-departmental, multi-disciplinary capacity-building and awareness-raising on the human rights implications of AI systems, including in consultation with experts, civil society organisations and academia.
  8. integrate human rights requirements into AI procurement processes and, when issuing tenders, require adherence to international human rights standards, the conduct of thorough human rights due diligence and transparent reporting on potential adverse human rights impacts.
  9. establish board-level oversight bodies on human rights issues related to AI, with a multi-disciplinary governance framework that includes cross-departmental collaboration.
  10. ensure that affected individuals have accessible, explainable and understandable information about the AI systems that they interact with and request prior consent before deploying AI systems.
  11. establish or participate in effective operational-level grievance mechanisms that provide avenues for early and direct provision of remedy; and
  12. leverage influence within value chains and business relationships to encourage rights-respecting business conduct with regard to the life cycle of AI systems.
