The AI threats to climate action and human rights

Anna Triponel

March 15, 2024
Our key takeaway: Artificial intelligence (AI) systems threaten climate action and human rights in several ways, say Climate Action Against Disinformation, Check My Ads, Friends of the Earth, Global Action Plan, Greenpeace, and Kairos. First, the data centres that power AI systems use vast amounts of energy and water, and many are situated in already water-stressed areas. The International Energy Agency estimates that energy use from data centres powering AI will increase from 1% to 13% of global electricity demand in the next two to ten years. In the U.S. alone, the Department of Energy found that data centres consumed 1.7 billion liters of water per day in 2014 to cool on-site computing systems. Second, AI-generated mis- and disinformation has spread climate denial on social media platforms, stalling progress on climate action as the topic becomes more polarising and divisive. All of these impacts adversely affect people and their human rights. For instance, the environmental impacts of developing and using AI systems tend to be concentrated in fossil fuel regions, where a disproportionate number of marginalised communities live. At least one-fifth of data centres operate in watersheds that are moderately to highly water-stressed, which can exacerbate existing vulnerabilities. AI systems themselves can also be discriminatory: marginalised communities, including women of colour, are directly affected by facial recognition discrimination. The report calls for safety, transparency and accountability from AI companies and their products, and for integrating climate and environmental justice into AI policy, including by incorporating input from frontline communities. If this does not happen, “AI will only exacerbate environmental injustice.”

Climate Action Against Disinformation, Check My Ads, Friends of the Earth, Global Action Plan, Greenpeace, and Kairos published The AI threats to climate change (March 2024):

  • AI exacerbates climate change and environmental impacts, and threatens human rights: The report highlights “two significant and immediate dangers” posed by artificial intelligence (AI). These are: 1) the vast amount of energy and water used in powering AI systems and 2) “the threat of AI turbocharging disinformation” on climate change. In relation to energy and water use, the International Energy Agency predicts that energy use from AI data centres will double in the next two years, consuming as much energy as Japan. In the next two to ten years, energy use from data centres may go from 1% to 13% of global electricity demand. In addition, vast amounts of water are needed to cool the computing systems that power AI and to train large language models like GPT-3. The U.S. Department of Energy found that U.S. data centres “consumed 1.7 billion liters per day in 2014, or 0.14% of daily U.S. water use.” At least one-fifth of data centres operated in areas with moderately to highly water-stressed watersheds, which can exacerbate existing vulnerabilities: “This thirsty industry therefore contributes to local water scarcity in areas that are already vulnerable, and could exacerbate risk and intensity of water stress and drought.” This will have disproportionate impacts on marginalised communities: “Marginalized communities continue to bear the brunt of climate change and fossil fuel production, and studies are already finding that AI’s carbon footprint and local resource use tend to be heavier in regions reliant on fossil fuel.”
  • AI spreads climate denial and threatens human rights: In relation to disinformation, AI is already accelerating the creation and spread of false claims: “Fossil fuel companies and their paid networks have spread climate denial for decades through politicians, paid influencers and radical extremists who amplify these messages online.” And this trend is intensifying: “In 2022, this climate disinformation tripled on platforms like X.” This will stall action on climate change, as the spread of climate disinformation exacerbates polarisation and political division on the issue. International organisations like the World Economic Forum (WEF) have recognised the role AI can play in spreading false information; in 2024, the WEF identified AI-generated disinformation as the world’s greatest threat, followed by climate change. Moreover, marginalised communities and their human rights are at particular risk from AI systems: “researchers and technologists—especially women of color—have been calling attention to the discriminatory harms AI is already causing today. This includes direct attacks like facial recognition discrimination.”
  • What can companies do? The report recommends that AI systems be developed in accordance with three core principles: transparency, safety and accountability. In particular, companies must 1) “demonstrate that their products are safe for people and the environment, show how that determination is made, and explain how their algorithms are safeguarded against discrimination, bias and disinformation”; 2) “enforce their community guidelines, disinformation and monetization policies”; and 3) collaborate with governments, academia and civil society “to determine how to create ‘green AI’ systems that reduce overall emissions and climate disinformation.” Specifically in relation to technology companies, the report recommends that they “commit to strong labor policies including: fair pay, clear contracts, sensible management, sustainable working conditions and union representation.” This would ensure that technology companies tackle the poor working conditions faced by content moderators. These companies should also adopt energy usage and CO2 emissions as key metrics in evaluating the carbon footprint of AI systems.
