Summary

Robust assessment of emerging tech's risks to people and planet

Anna Triponel

December 6, 2021
Our key takeaway: Emerging technologies can have unintended and unforeseen consequences, including negative impacts on people and planet. This is not just a topic for tech companies—all companies developing or using new technologies need to think systematically about potential ethics concerns and work proactively to avoid or mitigate harms.

Beena Ammanath, Executive Director of the global Deloitte AI Institute and Founder of the non-profit Humans For AI, published an article in the Harvard Business Review outlining the steps companies should take to manage ethics risks when rolling out new technologies:

  • Identify and assess risks at the outset: Ammanath points out that teams have a natural inclination to focus on “the promise of a technology and the opportunity to create value.” At the same time, “there also has to be attention paid to understanding what can go wrong. It’s vital to pause and brainstorm potential risks, consider negative outcomes, and imagine unintended consequences.” The risk assessment stage should be embedded in the product development or deployment lifecycle to ensure there is a mandate to brainstorm, analyse and document potential risks. This also helps teams look beyond “issues that have already surfaced, such as discriminatory bias in social media marketing or talent acquisition systems” to emerging risks that have not yet manifested.
  • Involve specialists with expertise beyond tech: Ammanath also recommends that companies involve specialists from outside the tech sector in product development: “Engineers and software developers do not necessarily have all the expertise they need to understand and address the ethical risks their work might raise. … [T]here could be a role for specialists from other disciplines here. We need to change our priorities to help technology development teams think with more foresight and nuance about these issues — guided by those with the most relevant knowledge,” such as experts in labour and other human rights, mental health, cybersecurity, and even philosophy.
  • Create accountability mechanisms and governance structures: Ammanath urges companies to consider, “early on, who would be accountable if an organization has to answer for the unintended or negative consequences of its new technology. Accountability should be considered when documenting the approach to potential impacts during the strategic planning process.” Accountability systems can take many forms depending on the type of product and the ethics risks involved, but “most organizations would likely benefit from placing a single individual in charge of these processes. This is why organizations should consider a chief ethics officer — or a chief technology ethics officer — who would have both the responsibility and the authority to marshal necessary resources.” At the same time, Ammanath cautions that a chief tech ethics officer should not replace the insights that experts from other disciplines (mentioned above) can bring to the assessment and mitigation of potential risks and impacts.

For more, see Beena Ammanath, “Thinking Through the Ethics of New Tech…Before There’s a Problem,” Harvard Business Review (November 2021).
