Summary

Emerging tech benefits people, but can pose risks to human rights

Anna Triponel

August 24, 2020

In a Strategic Update for the London School of Economics’ IDEAS think tank, Jennifer Easterday, Co-Founder and Executive Director of ethical technology NGO JustPeace Labs, “explores how a human security approach to COVID-19 tech tools would prompt tech companies, governments, and other actors to work with communities in ways that enhance their agency in the face of the pandemic to both reduce the risk of exacerbating conflict while maximizing the benefits of technology.”

Below are some of the top findings of the paper, Technology in Conflict: How COVID-19 Contact Tracing Apps Can Exacerbate Violent Conflict, as well as some key takeaways and high-level recommendations for the tech sector:

  • Technology has the potential to advance human rights and to strengthen accountability and transparency around human rights abuses. However, “it is widely accepted by companies and civil society alike that the promise of technology to support human rights and human security has a dark inverse—it has become a powerful weapon for fomenting violence, conflict, and abuse.” This is a particular risk in geographies “that have a history of conflict or mass human rights abuses. In those contexts, the scope and scale of potential harms are significant.”
  • The COVID-19 pandemic has accelerated the use of new technologies to halt the spread of the disease, including contact-tracing apps that track COVID-19 outbreaks, seek to understand how the disease is spreading and identify potentially infected people. These apps “use location tracking or proximity tracking to identify when a user has been near someone who has been diagnosed with COVID-19.”
  • The paper points out that the scale, severity and rapid spread of the pandemic also pose harms to people, and that there are “legitimate tradeoffs that governments can make. Some privacy violations might be necessary, as long as they are proportionate and strictly limited to COVID-19 efforts. Moreover, there are positive applications of technology which can deliver effective responses to COVID-19, not only in mitigating the health consequences of the pandemic, but also in protecting livelihoods and generating alternative economic opportunities despite the virus.”
  • However, there is a growing risk of governments using these technologies to violate other human rights, including the right to privacy, freedom of movement and freedom of expression. Recent examples in China, Guatemala, Israel and Ethiopia have demonstrated that governments can use contact tracing to inhibit free movement, to track and surveil dissidents and human rights defenders, and to share personal data with security forces. For example, the Chinese government has developed a “mandatory smartphone app to track the movements of huge numbers of people” which can “impose restrictions on movement” and which “appears to send personal data to police. This is especially risky in a country that has used facial recognition surveillance to target ethnic minorities” such as the Uighur population.
  • Further, the paper underscores that these technologies can “cause or intensify violent conflict,” whether intentionally or unintentionally: “These tools can easily be weaponized to further repression, surveillance, discrimination, and violence in those areas. Where government intentions are good, poorly designed and implemented COVID-19 tech tools can nonetheless undermine fragile trust in governments and public health authorities.”

Key takeaways for technology companies

  • Technology companies have a responsibility to ensure that their products and services do not contribute to human rights abuses, whether committed by governments or by private actors.
  • There are several ways in which tech companies “are inadvertently contributing to conflict dynamics through product design and release decisions”: “Sometimes technology products are used by third parties in order to foment conflict and abuse. Content moderation on social media platforms can also exacerbate a conflict. So can following government orders to shut down internet services, or collect and process sensitive data. Sometimes just releasing a product or service in a conflict-affected market can have adverse impacts on the conflict.”
  • The paper also argues that existing regulatory frameworks are “fragmented, reactionary, and ill-equipped to respond in effective and systematic ways,” and therefore are inadequate to protect against the potential human rights impacts of contact tracing technology.
  • In the absence of legislation, the paper recommends that the tech industry form specialized multi-stakeholder initiatives “to promote accountability, but also safeguard and underpin the agency of tech users.” Given the many complexities and the rapid pace of the tech sector, as well as the high potential for unforeseeable human rights impacts to arise, these initiatives should focus on the specific contributions of tech to furthering conflict and human rights abuses.
  • Further, given the unique business structures of tech companies, individual companies should also develop their own “bespoke, carefully crafted policies and practices” to address these human rights impacts.

“Advances in AI technology are being exploited as tactics in asymmetric warfare, and facial recognition is being used to repress and surveil on a mass scale. Contact tracing tools developed to stop the spread of COVID-19 are no exception—they pose significant risks to human security.”

Jennifer Easterday, Technology in Conflict: How COVID-19 Contact Tracing Apps Can Exacerbate Violent Conflict, London School of Economics IDEAS Strategic Update (August 2020)
