There are three paths ahead for us (according to a huge meta-analysis of over 70 authoritative sources – courtesy of the Congruence Foundation, see link here).
1️⃣ The positive scenario (we can call it the ‘dream scenario’) ☀️
In this scenario:
New industries grow in areas like care, health, and green jobs
Sounds good, doesn’t it?
2️⃣ The neutral scenario (we can call it the ‘OK but not great scenario’) ☁️
In this scenario:
Hum, OK, not great. ⚠️
3️⃣ The negative scenario (we can call it the ‘get me out of here scenario’) ⚡️
In this scenario:
😬 Please, meta-analysis, tell me this will not happen?
Things don’t look good.
📝 According to the studies:
This is quite chilling. 🥶
Without major policy intervention in the next two years, we are looking at the neutral scenario at best – and quite possibly the negative one.
Companies have a significant role to play here.
Some of it is connected to what they need to do under the soft law – and growing hard law – expectations of them.
Increasingly though, one can credibly argue that this entails shaping the systems within which AI functions, to ensure responsible AI use can happen at scale. This enabling environment in turn shapes the severity and likelihood of impacts to come.
Companies that help define and shape what decent work looks like in the age of AI will both meet their responsibility to respect human rights and ensure that they can keep operating as a business in the years to come.
Because responsible AI at work has become a long-term business continuity issue.
If businesses pursue isolated efficiency gains without strengthening the systems around AI, they risk helping to build the very economic and social conditions that will undermine:
Long story short: you now also have a business reason for putting strong individual and collective action on responsible AI in motion within your company.
So, what conversation can you initiate today to bring human rights into how your company is using AI? 🤖