Editorial: Beyond the hype on AI – early signs of divides in the labour market
The release of ChatGPT on 30 November 2022 was a revelation in the advance of artificial intelligence (AI). Built on large language models, the easy-to-use tool demonstrated a remarkable ability to perform a wide range of prompted tasks automatically, from writing to graphics to computer programming. ChatGPT is just one of many such tools now open to the public, and part of a continuum of AI development that dates back decades.
Yet the past seven months mark a technological watershed, triggering an unprecedented awareness of AI’s potential to change our lives and economies, and leading some even to question the meaning and purpose of our lives. Inevitably, such a breakthrough technology has sparked a mix of wonder and worry among users and experts alike. A recent open letter by prominent technologists called for an immediate pause on giant AI experiments like ChatGPT, citing “profound risks to society and humanity”. Meanwhile, private investment continues to multiply as AI comes to be seen as a general-purpose technology like electricity, the internal combustion engine and the internet.
There is unease over the speed at which AI is developing, far faster than previous technologies, while the implications for the economy and society remain uncertain. Moreover, unlike robots, whose risks were concentrated in certain sectors, AI may have the capacity to affect all industries and occupations. With so much at stake, it is crucial for policy makers to seek clarity on the coming impact of AI and take action.
To this end, the OECD has launched its first-ever study of one area where we know AI will have a significant impact: the labour market. The study focuses on the manufacturing and finance sectors, which have been integrating AI into work processes for several years. This research is the first cross-country empirical look at the effects AI is having on the labour market; these early findings offer important clues about what will become more widespread given the vast new awareness and capabilities spawned in the post-ChatGPT context.
This study was part of our wider research programme on AI in Work, Innovation, Productivity and Skills (AI-WIPS), which offers some of the first robust evidence in a debate that is still often based on anecdotes.
In 2022, we surveyed over 2 000 employers and 5 300 workers in the manufacturing and finance sectors of seven OECD countries. We also spoke directly to stakeholders in these sectors and asked them about their experience as early adopters of tools like computer vision and natural language processing, amongst others.
What emerged was a nuanced picture of the early impact of AI which – even before the more recent wave of generative AI – showed strong opinions and sharp divisions on the benefits and risks.
Despite the renewed worries about a jobless future, the impact of AI on job levels has been limited so far. We are at a very early stage of AI adoption, generally concentrated in the largest firms, which are often still experimenting with these new technologies. Among these early adopters, many appear reluctant to dismiss staff, preferring to adjust the workforce through slowed hiring, voluntary quits and retirement. Some companies even told us that, in the face of an ageing population and labour shortages, AI could help ease some skills shortages.
However, it is also clear that the potential for substitution remains significant, raising fears of decreasing wages and job losses. Taking the effect of AI into account, occupations at highest risk of automation account for about 27% of employment. And a significant share of workers (three in five) are worried about losing their jobs entirely to AI within the next ten years, particularly those who actually work with AI. The advent of the latest generative AI technologies is sure to heighten such concerns across a wide range of job categories.
Although AI is not currently tied to any major changes in wages, positive or negative, across the labour market, the OECD surveys showed that two in five workers in manufacturing and finance expressed worries that wages in their sector would decrease due to AI adoption in the next 10 years.
For the time being, rather than replacing jobs, AI is changing them and the skills required to carry them out. According to employers, AI has increased the importance of specialised AI skills, but it has increased the importance of human skills even more. Two out of five employers consider that the lack of adequate AI-related skills is a barrier to using AI at work.
An equally interesting finding from our research is that, despite some widespread anxiety about the future, many say AI is having a positive impact on job quality. Nearly two-thirds (63%) of workers reported that AI had improved their enjoyment at work: by automating dangerous or tedious tasks, AI is allowing them to focus on more complex and interesting ones. In one of the case studies we conducted, an aerospace firm had introduced an AI-led visual inspection tool to check newly manufactured turbine blades for jet engines. AI technology had a positive impact on the work environment of inspectors who, prior to the introduction of AI, would sit in a darkened room for long periods inspecting blades through a magnifying eyepiece.
Despite the generally positive feedback on AI’s impact on job quality, our study also found some tangible concerns, such as work intensification. And workers who are managed by AI are often less positive about its impact than those who work alongside it. The use of AI also comes with serious ethical challenges around data protection and privacy, transparency and explainability, bias and discrimination, automatic decision making and accountability. There are many real-world examples of AI hiring tools that have baked in human biases against women, people with disabilities, and ethnic or racial minorities. In our survey, many workers expressed concerns about AI collecting data on them as individuals or on how they do their work.
How AI will ultimately impact workers and the workplace, and whether the benefits will outweigh the risks, will also depend on the policy action that we take. The advance of AI in the workplace, in itself, should not be halted because there are many benefits to be reaped. Yet we should also avoid falling into the trap of “technological determinism”, where technology shapes social and cultural changes, rather than the other way around. To paraphrase labour economist David Autor, instead of asking what AI can do, we must ask what we want it to do for us.
Urgent action is required to make sure AI is used responsibly and in a trustworthy way in the workplace.
On the one hand, there is a need to enable workers and employers to reap the benefits of AI while adapting to it, notably through training and social dialogue.
Countries have taken some action to prepare their workforce for AI-induced job changes, but initiatives remain limited to date. Some countries have invested in expanding formal education programmes (e.g. Ireland), or launched initiatives to raise the level of AI skills in the population through vocational training and lifelong learning (e.g. Germany, Finland and Spain). The OECD’s research also shows that outcomes are better where workers have been trained to interact with AI, and where the adoption of these technologies is discussed with them.
On the other hand, there is an urgent need for policy action to address the risks that AI can pose when used in the workplace – in terms of privacy, safety, fairness and labour rights – and to ensure accountability, transparency and explainability for employment-related decisions supported by AI.
Governments, international organisations and regulators must provide a framework for how to work with AI. This includes setting standards, enforcing appropriate regulations or guidelines, and promoting proper oversight of these new technologies. The OECD has played a pioneering role in this area by developing the OECD AI Principles for responsible stewardship of trustworthy AI, adopted in May 2019 by OECD member countries – forming the basis also for the G20 AI Principles – and since then also by Argentina, Brazil, Egypt, Malta, Peru, Romania, Singapore and Ukraine.
Many countries already have regulations relevant to enforcing some of the key principles of trustworthy use of AI in the workplace. Existing legislation, including on data protection, includes provisions relevant to AI. However, a major development in recent years has been the proposal of AI-specific regulatory frameworks that address high-risk AI systems or impacts, albeit with key differences in approach across countries.
Anti-discrimination legislation, occupational safety and health regulation, worker privacy regulation and freedom of association all need to be respected when AI systems are used in the workplace. For instance, all OECD member countries have laws in place that aim to protect data and privacy. Examples include requirements to reach prior agreement with workers’ representatives on the monitoring of workers using digital technologies (e.g. Germany, France and Italy), and regulations requiring employers to notify employees about electronic monitoring policies. In some countries, such as Italy, existing anti-discrimination legislation has been successfully applied in court cases related to AI use in the workplace. But regulations that were not designed specifically for AI will, in all likelihood, need to be adapted.
Using AI to support decisions that affect workers’ opportunities and rights should also come with accessible and understandable information and clearly defined responsibilities. The ambition to achieve accountability, transparency and explainability is prompting AI-specific policy action with direct implications for uses in the workplace.
A notable example is the proposed EU AI Act, which takes a risk-based approach to ensure that AI systems are overseen by people, are safe, transparent, traceable and non-discriminatory. In the United States in October 2022, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights, which laid out a roadmap for the responsible use of AI. In June 2022, Canada introduced in Parliament the Artificial Intelligence and Data Act (AIDA) which requires “plain language explanations” of how AI systems reach their outcomes. Many countries, organisations and businesses are also developing frameworks, guidelines, technical standards, and codes of conduct for trustworthy AI.
When it comes to using AI to make decisions that affect workers’ opportunities and rights, there are some avenues that policy makers are already considering: adapting workplace legislation to the use of AI; encouraging the use of robust auditing and certification tools; using a human-in-the-loop approach; developing mechanisms to explain in understandable ways the logic behind AI-powered decisions.
A general concern among many experts is that the pace of the policy response is not keeping up with the very rapid developments in generative AI, and that the response still lacks specificity and enforceability.
Indeed, there have been many calls to act on generative AI. The European Union announced plans to introduce a voluntary code of conduct on AI to be adopted rapidly. The US-EU Joint Statement of the Trade and Technology Council in May 2023 decided to add special emphasis on generative AI to the work on the Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management, and the UK Prime Minister announced a summit on AI safety to be held in late 2023. AI-related regulation also raises new challenges of international interoperability, which calls for international action to promote alignment of key definitions and of their technical implementation where appropriate.
Many of these calls are addressed by the “Hiroshima AI Process” launched by G7 Leaders in May 2023 with the objective of aligning countries (including the EU) on an agreed approach to generative AI. The OECD has been asked to support this process, which is now underway.
Such action will need to be quickly complemented by concrete, actionable and enforceable implementation plans to ensure AI is trustworthy. International co-operation on these issues will be critical: a common approach would avoid a fragmentation of efforts that could unnecessarily harm innovation, and would prevent a regulatory gap that might lead to a race to the bottom.