On 18 April 2023, the Trades Union Congress (TUC) warned that the UK government was failing to protect workers from being ‘exploited’ by new AI technologies. The warning came as politicians, tech leaders, regulators and unions met for the TUC AI conference in London.

The press release stresses that AI-powered technologies are now making high-risk and life-changing decisions, including those related to hiring, managing and firing staff. ‘AI is being used to analyse facial expressions, tone of voice and accents to assess candidates’ suitability for roles’. Moreover, many workers are being kept in the dark about how AI is being used to make decisions that directly affect them. A survey conducted by the TUC last year found that only 6% of workers had been asked for consent before the use of AI-powered recruitment and management technologies, while only 5% said that they would trust such technologies. Another survey, conducted by BritainThinks and commissioned by the TUC in 2020, delved into specific areas in which AI is used and highlighted strong worker support for more consultation. For instance, 75% of respondents agreed that employers should be legally required to consult and agree with workers on any new form of monitoring they are planning to introduce.

Left unchecked, the TUC warns, AI could lead to widespread discrimination as well as work intensification and unfair treatment. But this is not the first time the TUC has sounded the alarm. The union has repeatedly stressed the need for stronger regulation to protect workers from AI risks. ‘The EU is currently putting in place laws dealing specifically with the use of AI, whereas the UK does not have anything like this’, reads another press release, published last year. It is yet ‘another example of the UK falling behind its EU counterparts on workers’ rights’.

In March 2023, the UK government published an AI white paper describing a ‘proportionate, future-proof and pro-innovation framework’ for AI regulation. The proposal was met with condemnation by trade unions, who called it a ‘dismal failure’ that provides only vague guidance to regulators and no additional capacity or resources to cope with rising demand. ‘Ministers are refusing to introduce the necessary guardrails to safeguard workers’ rights,’ said the union body.

In the US, the Biden administration recently announced a set of actions to tackle safety and privacy issues. ‘AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks,’ said the White House in a statement. This new effort builds on previous attempts by the Biden administration to promote some form of responsible innovation. In October 2022, the administration unveiled a blueprint for an ‘AI Bill of Rights’ as well as an AI Risk Management Framework. More recently, it has pushed for a roadmap on establishing a National AI Research Resource.

However, these measures don’t have any legal teeth and, to date, Congress has not advanced any laws that would rein in AI. Reacting to these initiatives, Avivah Litan, vice president and distinguished analyst at Gartner Research, said: ‘We need meaningful regulations such as we see being developed in the EU with the AI Act. While they are not getting it all perfect at once, at least they are moving forward and are willing to iterate. US regulators need to step up their game and pace’.

In the EU, lawmakers have been finalising the text of the ‘AI Act’ ahead of the vote in the leading parliamentary committees on 11 May 2023. The AI Act is a landmark legislative proposal to regulate Artificial Intelligence based on its potential to cause harm. The original proposal classified AI systems into four categories: prohibited, high risk, low risk and minimal risk. However, the recent success of ChatGPT and other generative models prompted the inclusion of a fifth category: general-purpose AI systems, subject to a stricter regime that includes a mandatory summary of training data and higher fines for breaches of the rules.

Although the AI Act is often held up as a good example in the US and UK, it has nonetheless been heavily criticised by labour law academics for not taking workers’ rights into consideration. Similarly, the European Trade Union Confederation considers the AI Act unsuitable for regulating the use of AI in work settings. ‘An EU directive on algorithmic systems in the workplace, based on Article 153 TFEU, should define European minimum standards for the design and use of algorithmic systems in the employment context’, said the ETUC Executive Committee in a resolution adopted on 6 December 2022.