Generative AI and Human Rights

Anna Triponel

July 7, 2023
Our key takeaway: Generative Artificial Intelligence (AI) refers to systems such as OpenAI’s ChatGPT “that are able to generate new content or predictions based on vast amounts of data” and deliver this to people in a timely and comprehensive manner. The speed at which these AI systems are evolving has become a subject of concern among tech leaders and policymakers within and across jurisdictions. The human rights impacts that this development brings, such as privacy breaches, have been well documented. Business Fights Poverty takes a different approach and looks at AI risks and opportunities in the context of vulnerable people and communities. While AI can bring social benefits, for instance by improving the quality of healthcare, it can also present significant risks if left unregulated. These include replacing jobs in certain industries, such as oil and gas, and perpetuating existing racial and gender biases. What can companies do? Companies can conduct human rights due diligence of AI in their operations and value chains, using the UN Guiding Principles as the operational framework. They can also leverage their influence in policy discussions to support increased oversight of AI. And they can take climate action through a just transition lens in the context of new technologies and automation.

Business Fights Poverty released Generative AI and Social Impact: The Role of Business (July 2023):

  • Fast-evolving growth in generative AI and concerns about its impacts on people: The report highlights three trends that have shaped the discussion around Artificial Intelligence (AI). These include the: 1) “Rapid growth and fast-evolving capabilities of generative AI models”, which means that “[n]ew models are no longer limited to text inputs and outputs.” An example is GPT-4 and ChatGPT, which can “autonomously ask itself the series of questions needed to deliver a task on its own”; 2) “Growing concern about the social, economic, political, national security and existential implications of generative AI.” This led tech leaders to publicly express their concerns in May 2023, including Geoffrey Hinton, otherwise known as the “Godfather of AI.” In response to growing concerns, several jurisdictions are looking at regulating the sector, and the UN is developing a Global Digital Compact to standardise principles of responsible AI across the world; 3) “Emerging understanding of the negative and positive social impact of generative AI for vulnerable people and communities.” There is growing recognition of the risks and opportunities that AI can pose to these communities. For instance, AI can improve healthcare outcomes but also lead to major job displacement. AI reflects and exacerbates “deeper systemic inequities, such as income, gender and race”, and much has yet to be done to ensure a just transition in the context of automation and decarbonisation.
  • Generative AI impacts people’s lives, livelihoods and access to learning: The report highlights both the positive and negative impacts of AI on human rights. When it comes to people’s lives in the context of health and safety, AI can positively transform healthcare. This includes “accelerating medical research”, “assisting clinical decision-making to enable early detection and diagnosis” and “[increasing] access to health information and services.” AI also adversely impacts people’s health and safety because of “[e]xisting racial and gender biases in AI systems.” These biases have led to racial profiling in law enforcement and racial bias in clinical decision-making. AI can also affect people’s jobs and incomes. While it can help boost organisations’ productivity, it can lead to significant job losses, with “300 million full-time jobs” at risk of automation. Women and other vulnerable groups will be disproportionately affected, both because the jobs they do can be easily replaced by AI and because of their unequal access to new economic opportunities. Moreover, AI affects people’s access to education. More people will have access to good quality education that is not limited by jurisdiction or the language spoken. At the same time, those who do not have access to technology are excluded from the learning opportunities AI offers. AI also poses the additional risks of driving misinformation and creating deepfake videos.
  • Companies can take action now: The report recommends that companies mitigate the risks and capitalise on the opportunities of AI through their core business operations, philanthropy and policy engagement. Companies should centre people in their core business operations by taking the following actions: “Identify vulnerable stakeholders in the company’s operations, value chain and communities, identify the most salient human rights and economic risks they face and develop plans to address these through enhanced policies, processes, products, services, technologies, financing mechanisms and business models.” An example is conducting an “AI social impact assessment to identify the potential opportunities and challenges of how AI might impact the workforce across the value chain.” Through philanthropy, companies can also “[e]xplore ways to leverage corporate philanthropy, employee engagement and social investment.” This can include funding AI projects that improve healthcare in vulnerable communities. In addition, companies can “[e]ngage in policy dialogue, awareness raising and institution strengthening partnerships to support those who are most vulnerable.” For instance, companies can “[a]dvocate for policies that support the education, training and digital inclusion needed to support a just transition.”
