Cambridge University Press’s Business and Human Rights Journal published ‘On the Right to Work in the Age of Artificial Intelligence: Ethical Safeguards in Algorithmic Human Resource Management’ (January 2025). The article explores how algorithmic human resource management (AHRM), the automation or augmentation of HR decision-making using artificial intelligence (AI)-enabled algorithms, can lead to discriminatory results and systematic disadvantages for marginalised groups. It also recommends ethical safeguards to protect fundamental human rights when AI is used in HR decisions and activities.
Human Level’s Take:
- Artificial intelligence (AI) is transforming how organisations manage human resources (HR), with over 250 tools available to streamline processes.
- So, how are companies using AI in HR? AI is increasingly used in the recruitment and selection of new employees, particularly in the sourcing, resume screening and candidate matching phases.
- While AI offers benefits in aiding decision-making, it also poses significant risks to workers’ rights, namely the right to equality, equity and non-discrimination; the right to privacy; and the right to work. For instance, AI systems can perpetuate and amplify biases in training data, reinforcing discrimination. The use of personal data to assess job applicants’ suitability may threaten their right to privacy by exposing sensitive information to unauthorised access, misuse or discrimination in recruitment processes. Companies may also use ‘public’ data, such as social media posts or purchasing history, without explicit consent. Algorithmic HR systems can also make recruitment decisions opaque, leaving workers unable to understand or contest them.
- So what can companies do? Companies can: 1) debias AI systems by conducting regular data quality checks, internal audits and impact assessments to prevent discrimination, and by removing data points that reflect past biases or are predictive of protected characteristics (see the sketch after this list); 2) comply with regulations such as the EU AI Act and the EU General Data Protection Regulation (GDPR) in a way that treats these legal frameworks as meaningful tools to identify and address risks to data subjects rather than as a tick-box exercise; 3) respect and advocate for workers’ right to freedom of association and enable their right to collective bargaining, so that their voices are heard in decisions around AI systems and their governance; and 4) clearly communicate how data is used in algorithmic management to build trust and fairness.
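The article does not prescribe a particular implementation for point 1, but a minimal sketch of what ‘removing data points predictive of protected characteristics’ can look like in practice is shown below. It assumes a pandas DataFrame of applicant features and uses scikit-learn; the feature names, the synthetic data and the 0.65 AUC threshold are illustrative assumptions, not recommendations from the article:

```python
# Minimal sketch: flag candidate features that act as proxies for a
# protected characteristic, assuming applicant data in a pandas DataFrame.
# Feature names, synthetic data and the AUC threshold are illustrative
# assumptions, not prescriptions from the article.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic applicant data: 'zip_code_index' is deliberately constructed
# to correlate with the protected attribute, mimicking how residential
# segregation can turn location data into a proxy for race.
protected = rng.integers(0, 2, n)  # hypothetical protected-group indicator
df = pd.DataFrame({
    "years_experience": rng.normal(8, 3, n),
    "typing_speed": rng.normal(60, 10, n),
    "zip_code_index": protected * 2.0 + rng.normal(0, 1, n),
})

AUC_THRESHOLD = 0.65  # assumed cut-off; in practice set via an audit policy

for feature in df.columns:
    X_train, X_test, y_train, y_test = train_test_split(
        df[[feature]], protected, test_size=0.3, random_state=0
    )
    # If a single feature predicts the protected attribute well above
    # chance (AUC 0.5), treat it as a potential proxy.
    model = LogisticRegression().fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    flag = "POTENTIAL PROXY - review/remove" if auc > AUC_THRESHOLD else "ok"
    print(f"{feature:>18}: AUC={auc:.2f}  {flag}")
```

Any feature flagged this way (here, the zip-code-derived one) would then be reviewed and, per the article’s recommendation, removed or transformed before a recruitment model is trained on it.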
Some key takeaways:
- The disadvantages of using AI in the human resources (HR) sector: There are over 250 AI tools for the HR sector on the market, and a growing push to adopt them across the employment sector. These tools are typically used in the recruitment and selection of new employees, particularly in the sourcing, resume screening and candidate matching phases. While there are benefits to adopting AI in HR, including automating routine tasks and saving costs through increased recruitment efficiency and enhanced productivity in HR development processes, there are many adverse impacts on people to consider. For instance, algorithm-based decision-making can perpetuate existing discriminatory biases and disadvantages for marginalised groups. Amazon’s AI recruiting tool, for example, was trained on biased historical data on white male chief executive officers (CEOs) and software engineers and, as such, recommended men over women for top positions.
- The use of AI in human resources management (HRM) strategies and functions can pose a significant and pervasive risk to workers’ rights: The article highlights how the use of AI in HRM work can impact workers’ rights, namely: (1) the right to equality, equity and non-discrimination; (2) the right to privacy; and (3) the right to work. With regard to the right to equality, equity and non-discrimination, AI-based technologies can lead to discrimination in different ways. For instance, systems can be trained on biased data, reflecting and amplifying at scale biases already entrenched in hiring practices. Even if organisations omit protected characteristics (such as race, gender, ethnicity or religion) from datasets, discrimination can still occur through proxy discrimination, which happens when a seemingly neutral piece of data is correlated with, and acts as a proxy for, a protected characteristic. For example, the distance between a candidate’s home and the office, used to predict employment tenure, seems neutral; however, due to patterns of residential segregation, a system can use the zip code as a proxy for race. With regard to the right to privacy, personal data (data through which an individual can be identified, whether by reference to an identifier such as a name, an identification number or location data, or to factors specific to the person’s physical, physiological, genetic, mental, economic, cultural or social identity) may be scanned and used to infer a given applicant’s suitability for a job. This may threaten an individual’s right to privacy, because privacy in recruitment processes involves protecting job applicants’ personal data and informational identity from unauthorised access, misuse or discrimination. Applicants may also feel compelled to consent to companies processing their personal data for fear of jeopardising their chances of being hired. In addition, companies may build data profiles from ‘public’ data such as social media profiles and other online traces, including purchasing history or location data, without applicants’ consent. With regard to the right to work, algorithmic human resource management (AHRM) can jeopardise workers’ ability to make sense of the data collected and used to recruit them, and it can perpetuate workplace inequalities when models rely on biased data and/or discriminatory proxies to make decisions.
- So what can companies do?: The article offers several recommendations. Debiasing, which aims to make a system’s outputs fair and non-discriminatory, is one way to design and implement systems consistent with workers’ right to equality, equity and non-discrimination. For companies, this means conducting frequent data quality checks, internal audits and impact assessments to identify, monitor and prevent discriminatory risks (a minimal sketch of one such audit check follows this list). It also means removing data points that reflect past biases or may be predictive of protected characteristics. The article also points to legislation developed to address AI-related risks and impacts. For instance, the EU’s AI Act (which came into force in August 2024 and applies to providers, deployers, importers and distributors of AI systems) follows a risk-based approach and aims to ensure that AI systems placed on or used in the EU market respect fundamental rights. It classifies AI systems used in the recruitment of applicants as ‘high-risk’, triggering additional protection requirements. Similarly, the EU’s General Data Protection Regulation (GDPR) requires companies to conduct data protection impact assessments for high-risk systems, such as those used in recruitment. These assessments identify risks to data subjects’ human rights and ways to address them. However, the article stresses that these legal frameworks should not be treated as a “mere tick-box exercise”. Finally, the article emphasises the importance of workers’ freedom of association and worker voice as a way to protect their interests and rights: “collective bargaining and trade union actions can offer an effective way to start negotiations about workers’ interests and a whole range of other conditions in algorithmic management, like transparency requirements from companies about workers’ data use, storage and management.”
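As a concrete illustration of the internal audits the article recommends, the sketch below applies the ‘four-fifths rule’, a common adverse-impact heuristic from US employment guidance, to selection rates across applicant groups. The article does not name this specific test; the rule, the group labels and the counts below are assumptions chosen for illustration:

```python
# Minimal sketch of a selection-rate audit using the 'four-fifths rule',
# a common adverse-impact heuristic. The article calls for internal audits
# but does not prescribe this test; the data below is illustrative.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    applied, selected = Counter(), Counter()
    for group, was_selected in decisions:
        applied[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / applied[g] for g in applied}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the classic four-fifths cut-off)."""
    best = max(rates.values())
    return {g: (r / best >= threshold) for g, r in rates.items()}

# Illustrative audit data: (group, hired?) per applicant.
decisions = [("A", True)] * 48 + [("A", False)] * 52 \
          + [("B", True)] * 30 + [("B", False)] * 70

rates = selection_rates(decisions)
passed = four_fifths_check(rates)
for group in rates:
    print(f"group {group}: selection rate = {rates[group]:.2f} "
          f"{'ok' if passed[group] else 'ADVERSE IMPACT FLAG'}")
```

A check like this only surfaces disparities in outcomes; deciding whether a flagged disparity amounts to discrimination, and how to remedy it, still requires the kind of human-led impact assessment the article describes.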