Strengthening the S in ESG through better metrics and indicators

Anna Triponel

May 31, 2024
Our key takeaway: Measurable indicators and metrics are the linchpin of companies' ability to monitor their progress on certain topics. More often than not, however, these metrics fail to meaningfully measure companies' progress and efforts in addressing their impacts on people, i.e., the S in ESG. Why? Shift points to indicators that inadvertently incentivise poor corporate behaviour and those that reduce complex topics into oversimplified metrics, encouraging unjustified conclusions (e.g., an indicator stating that the company follows the UN Guiding Principles). Social audit indicators (e.g., the percentage of suppliers who have been audited) seldom show whether a company is taking meaningful steps to address its human rights risks and impacts. The time is ripe for companies and data providers to assess how effective their social indicators are at measuring progress on addressing adverse human rights impacts, especially now that laws (e.g., the EU Corporate Sustainability Reporting Directive) require it.

Shift published Part 1 and Part 2 of its Strengthening the S in ESG series (May 2024). The series focuses on developing stronger social indicators and metrics to measure the S in ESG:

  • Avoid indicators that create perverse behavioural consequences: To strengthen the S in ESG reporting and metrics, Part 1 of the series recommends that data providers and standard-setters “avoid indicators that create perverse behavioral consequences.” For instance, a company sets aggressive sales targets that incentivise staff to engage in unethical sales practices to meet them, which in turn could undermine profitability. Based on its research, Shift reached three findings: 1) “[t]here is a prevalence of indicators focused on the number of complaints, grievances or incidents as recorded by a company, potentially incentivizing issues to be hidden or under-reported.” Examples include indicators that focus on supply chain audit non-compliances; 2) “[a] substantial number of indicators reward practices that have, at best, been shown to lead to limited positive results.” Examples include the number of people who have received diversity, equity and inclusion (DEI) training or the percentage of suppliers audited. These types of indicators divert “attention and resources from a company’s own purchasing practices or other root-cause issues of human rights harms in global value chains”; and 3) “[t]here is an over-use of indicators that reflect outdated understandings of corporate responsibility, so orienting companies away from identifying and addressing the most significant risks to people within their operations and value chains.” An example is the money a company spends on community-building activities. While these programmes can deliver positive social impacts, they fail to show that the company is addressing its adverse impacts on people.
  • Avoid indicators that encourage unjustified conclusions: Part 2 of the series recommends that data providers and standard-setters “avoid indicators that encourage unjustified conclusions.” Based on its research, Shift reached three findings: 1) “[m]ultiple data providers are using composite indicators that make outsized claims about companies’ social performance.” An example is an indicator stating that the company follows the UN Guiding Principles on Business and Human Rights. This is unrealistic in practice because “the expectations in these international standards are extensive and complex, while the underlying data points used by an analyst to reach an opinion to score a company tend to be vastly simplified”; 2) “[t]oo many indicators measure phenomena with a weak causal link to practices being implied.” This includes indicators that focus on the existence of a policy or training programme without offering any insight into the quality of its content or implementation. An example is an indicator stating that the company has a labor-related policy/code of conduct on anti-discrimination for its own workforce; and 3) “[d]ata providers use indicators that are impossible to interpret without context.” For instance, indicators focused on media reports or allegations of companies’ involvement in adverse human rights impacts, or those measuring legally required performance on certain issues.
  • Recommendations: While the reports are geared towards data providers and standard-setters, companies can draw on the recommendations to tailor their own metrics and KPIs and ensure they are effectively monitoring their efforts to address adverse impacts on people. The recommendations include three design principles: 1) “[i]ndicators should focus on strong predictors of business decision-making and behaviour”; 2) “[i]ndicators should focus on the quality of due diligence processes”; and 3) “[i]ndicators should offer insight into outcomes for people that the company can reasonably be judged to have contributed to.”
