Artificial Intelligence governance and human rights

Anna Triponel

January 13, 2023
Our key takeaway: Artificial intelligence (AI) is making its presence felt well beyond the traditional sectors of tech and finance. Increasingly, businesses of all kinds are using AI to perform a wide range of tasks and analyses. In many ways, this proliferation of emerging technology is a boon for human development and the environment: it can help promote better healthcare outcomes, improve agricultural outputs for greater food security, model climate impacts and more. However, the very innovation that drives AI also poses serious risks to human rights if the technology is not properly designed, deployed and monitored. Companies and investors should apply a human rights-based framework to their use of AI and other emerging technologies, including: establishing a corporate-wide understanding that human rights are “a useful tool in the box rather than … a constraint on innovation”; building capacity and bringing human rights expertise into AI teams; setting AI-specific human rights policies and conducting human rights due diligence on AI’s use; and ensuring that the use and outcomes of AI are transparent and remediable if people are harmed.

The Chatham House International Law Programme published AI Governance and Human Rights: Resetting the Relationship (January 2023), authored by Kate Jones:

  • How does Artificial Intelligence (AI) intersect with human rights? While there is no single definition of AI, “it is a general term referring to machines’ evolving capacity to take on tasks requiring some form of intelligence. The tasks that AI performs can include generating predictions, making decisions and providing recommendations. … To learn, AI generally relies on synthesising and making inferences from large quantities of data.” Notably, “AI holds enormous potential to enable human development and flourishing. For example, AI is accelerating the battle against disease and mitigating the impact of disability; it is helping to tackle climate change and optimize efficiency in agriculture; it can assist distribution of humanitarian aid; it has enormous potential for improving access to, and quality of, education globally; and it can transform public and private transport.” AI is also becoming ubiquitous in some industries, such as finance, and its use is increasing in virtually every type of business that benefits from data analysis, which is to say nearly all of them. However, the use of AI can also have serious impacts on human rights. For example, the Chinese government is using AI-based facial recognition and other technologies to track and detain Uyghur populations. Harm can also arise unintentionally: companies using AI for decisions in areas like lending, security and healthcare may “[risk] embedding and exaggerating bias and discrimination, invading privacy, reducing personal autonomy and making society more, rather than less, unequal.” 
  • Human rights-based AI policies, standards and due diligence are urgently needed: The complex web of interconnections and the massive amounts of data used to design AI algorithms mean that human rights governance and due diligence must be applied carefully, and tailored to each system, to understand potential human rights risks and impacts. As the paper points out, “[e]ven an AI tool designed with the intention of implementing scrupulous standards of fairness will fail if it does not replicate the complex range of factors and subtle, context-specific decision-making processes of humans. Unchecked, AI systems tend to exacerbate structural imbalances of power and to disadvantage the most marginalized in society.” The author underscores the important role that a human rights-based framework can play in identifying and mitigating risk, positioning human rights as the “baseline for AI governance” rather than the less specific standards of “ethics” or “responsible use” that currently proliferate in the field. Yet human rights are “often overlooked”: “many sets of AI governance principles produced by companies, governments, civil society and international organizations fail to mention human rights at all. Of those that do, only a small proportion (around 15 per cent) take human rights as a framework.” That said, the paper acknowledges that human rights due diligence (HRDD) for AI faces unique challenges. For one, “AI’s capacity for self-improvement may make it difficult to predict its consequences.” In addition, “AI’s human rights impact will depend not only on the technology itself, but also on the context in which it is deployed. In light of both these factors, due diligence on AI applications that may affect human rights must be extensive and involve as wide a set of stakeholders as may be affected by the AI.” Due diligence must also be conducted on an ongoing basis, given the constantly changing landscape and the evolving data these systems rely on. 
  • Recommendations for the private sector: The report underscores that companies and investors, alongside other stakeholders, “should take steps to establish human rights as the foundation on which AI governance is built, including through inclusive discussion, championing human rights and establishing standards and processes for implementation of human rights law and remedy in case of breach.” Companies should (1) “Continue to promote AI ethics and responsible business agendas, while acknowledging the important complementary role of existing human rights frameworks”; (2) “Champion a holistic commitment to all human rights standards from the top of the organization. Enable a change of corporate mindset, such that human rights are seen as a useful tool in the box rather than as a constraint on innovation”; (3) “Recruit people with human rights expertise to join AI ethics teams to encourage multi-disciplinary thinking and spread awareness of human rights organization-wide. Use human rights as the common language and framework for multi-disciplinary teams addressing aspects of AI governance”; (4) “Conduct human rights due diligence and adopt a human rights-based approach to AI ethics and impact assessment. Create decision-making structures that allow human rights risks to be monitored, flagged and acted upon on an ongoing basis”; (5) “Ensure uses of AI are explainable and transparent, so that people affected can find out how an AI or AI-assisted decision was, or will be, made”; and (6) “Establish a mechanism for individuals to seek remedy if they are dissatisfied with the outcome of a decision made or informed by AI.” For their part, investors should “[i]nclude assessment of the implications of AI for human rights in ESG or equivalent investment metrics.”
