Summary

Artificial Intelligence (AI) and Worker Well-Being

Anna Triponel

October 25, 2024

The U.S. Department of Labor (DoL) published Artificial Intelligence and Worker Well-Being: Principles and Best Practices for Developers and Employers (October 2024), which outlines principles and best practices for AI developers and employers to centre the well-being of workers in the development and deployment of artificial intelligence (AI) in the workplace. These principles and best practices follow President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. They are not intended to be an exhaustive list but instead a guiding framework for businesses.

Human Level’s Take
  • AI and workers. So many benefits. Also, so many pitfalls.
  • AI can automate repetitive tasks, assist with routine decisions and reduce workloads so workers can focus on other responsibilities - great!
  • But AI can also lead to workers losing autonomy and control over their tasks. AI can undermine workers' rights, introduce bias into decision-making processes and make significant workplace decisions without transparency or human oversight. AI can displace workers from their jobs entirely. Really not great.
  • So what are companies to do? Cue the eight principles and examples of best practices outlined by the U.S. Department of Labor, which are intended to be a guiding framework for companies.
  • Companies can inform and obtain genuine input from workers and their representatives, especially those from underserved communities, on the design, development, testing, training, use and oversight of AI systems in the workplace. They can establish governance structures, accountable to leadership, to guide and coordinate the use of AI across business functions, incorporating input from workers into decision-making processes and reviewing the impacts of AI against their intended uses and potential risks, including impacts on human rights. Companies can also provide training opportunities for workers to learn how to use AI systems in their work, and encourage workers to raise concerns about the use and impact of AI without fear of retaliation.
  • The extent to which AI positively contributes to humanity depends on whether worker welfare and rights are respected and prioritised in this new era of technological transformation. And businesses have a key role to play as gatekeepers.
“We should imagine a world where the human creativity that fuels AI is applied to make life better for working people and where AI is deployed in the workplace to improve the quality of jobs so that they are safer, more fulfilling, and more rewarding … AI’s promise of a better world cannot be fulfilled without making it a better world for workers.”
Julie A. Su, Acting Secretary, United States Department of Labor

For Further Reading

  • AI can impact workers both positively and negatively: The report highlights how AI can positively and negatively impact workers in the workplace in myriad ways. On the positive side, AI can automate repetitive tasks and assist with routine decisions, reducing workloads and allowing workers to focus on other responsibilities. This shift will require workers to gain new skills and training to integrate AI effectively into their daily tasks. AI will also generate new job opportunities in areas like AI development, deployment and oversight. However, AI presents risks as well. Workers may lose autonomy and control over their tasks, and job quality could decline. The risks become more pronounced if AI undermines workers' rights, introduces bias into decision-making processes or makes significant workplace decisions without transparency or human oversight. Additionally, AI could displace some workers from their jobs entirely.
  • Eight principles for deploying AI in the workplace: The report outlines the principles that developers and employers should consider when developing or deploying AI in the workplace. These are:
    • Centring Worker Empowerment (the North Star): Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use and oversight of AI systems for use in the workplace.
    • Ethically Developing AI: AI systems should be designed, developed and trained in a way that protects workers.
    • Establishing AI Governance and Human Oversight: Organisations should have clear governance systems, procedures, human oversight and evaluation processes for AI systems for use in the workplace.
    • Ensuring Transparency in AI Use: Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace.
    • Protecting Labour and Employment Rights: AI systems should not violate or undermine workers’ right to organise, health and safety rights, fair wages and working hours, and anti-discrimination and anti-retaliation protections.
    • Using AI to Enable Workers: AI systems should assist, complement and enable workers, while also improving job quality.
    • Supporting Workers Impacted by AI: Employers should support or upskill workers during job transitions related to AI.
    • Ensuring Responsible Use of Worker Data: Workers’ data collected, used or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly.
  • What are businesses to do: For each principle, the report provides best practice examples for organisations to consider when developing or deploying AI in a worker-centric way. Some examples include:
    • Considering how AI systems would impact specific jobs, skills needed, job opportunities and risks for workers prior to procuring AI technologies. This includes engaging workers and their representatives to determine how AI can support worker productivity, performance and well-being.
    • Providing workers with appropriate training opportunities to learn how to use AI systems.
    • Seeking opportunities to work with state and local workforce systems to support education and training partnerships for upskilling.
    • Establishing governance structures, accountable to leadership, to produce guidance and provide coordination to ensure consistency across business functions when implementing AI systems. These governance structures should incorporate input from workers and their representatives into decision-making processes, as well as review the impacts of the AI systems against their intended uses and potential risks, including impacts on human rights.
    • Providing advance notice and appropriate disclosure to workers and their representatives if the organisation intends to use worker-impacting AI, including what data will be collected and for what purpose the data will be used.
    • Routinely monitoring and analysing whether the use of the AI system is disproportionately impacting individuals with protected characteristics (e.g., race, colour, national origin, religion, sex, disability, age, genetic information), and taking steps to reduce the impact or use a different tool if this is the case.
    • Encouraging workers to raise concerns about the use and impact of AI and ensuring that they are not retaliated against for doing so.
