The European Parliament’s committee exploring AI needs to give the floor to civil society. Big Tech has had enough influence.
When the European Parliament established the Special Committee on Artificial Intelligence in a Digital Age (AIDA), many in the labour movement thought that, finally, the voices of workers, consumers and citizens would be heard.
Indeed, the task given to the AIDA committee was to analyse the impact of artificial intelligence on the European economy, ‘in particular on skills, employment, fin[ancial] tech[nology], education, health, transport, tourism, agriculture, environment, defence, industry, energy and e-government’. The mandate also included investigating the challenge of deploying AI and its contribution to economic growth, analysing the approach of third countries and, finally, working on a roadmap for ‘a Europe fit for the digital age’—a strategic plan defining common objectives and the steps to reach them.
That was in June 2020. The European Commission had taken the lead on AI and data, with a stream of policy and legislative initiatives: the European Digital Strategy, a European Strategy for Data and the White Paper on AI, followed after the summer by the Digital Services Act, the Digital Markets Act and the Data Governance Act.
Very little in these proposals, however, addressed the impact of AI on workers. And we now know that the commission was the subject of intense lobbying by tech companies, aimed at resetting the narrative and influencing the scope and direction of the proposals.
In the first half of 2020, Google, Facebook, Amazon, Apple and Microsoft declared spending a combined €19 million, equal to what they had declared for all of 2019 and up from €6.8 million in 2014. The spending helped deliver access: the companies and their allies reported hundreds of meetings with officials at the commission and the parliament.
There was hope that the AIDA committee would somehow rebalance that equation, by listening to other voices and ensuring that the European Union’s AI and data policy would take citizens’ concerns fully into consideration. Unfortunately, there are reasons for doubt, and some fear the committee will not be able to add any value to the commission’s proposals: an opinion confirmed by several members of the committee in recent conversations.
There is however still time to turn things around, if concerns are dealt with quickly.
First, the committee is composed of 33 MEPs and has a temporary mandate of 12 months (as with all special committees). Given the importance and complexity of the issues it has been tasked to examine, one can question its ability to address them in such a short time. Extending the mandate would be a step in the right direction.
To date, the committee has held two public hearings, one on women and digitalisation, the other on AI and health. Apart from commission officials, ten speakers participated, representing: Microsoft Western Europe, Carnegie Mellon University, European Digital Rights (EDRi), the Greek Ministry of Digital Governance, the European Centre for Disease Prevention and Control, LUMSA University in Rome, Freie Universität Berlin, the Halland Hospital Group in Sweden, Exscientia (a UK company which uses AI for drug discovery) and the European Consumer Organisation (BEUC).
BEUC and EDRi aside, the committee has shown limited interest in hearing what civil society has to say on the 13 topics within its remit. Truly engaging with civil society means listening to many voices and reflecting a diversity of concerns.
Given the lobbying power of Big Tech, a democratic and anticipatory governance of AI and data generally is achievable only if there is systematic engagement by all social actors. Workers’ representatives, as others, should fully participate in the discussions and their inputs should be meaningfully embedded.
Expecting the AIDA committee to analyse 13 complex topics and produce a roadmap for ‘a Europe fit for the digital age’ is not realistic. In addition to extending the term of office, the committee thus needs to narrow the scope of its work, by focusing on a subset of the 13 topics, preferably those with a social dimension.
Transversal issues—such as algorithmic management, workplace surveillance, discrimination at work and fundamental rights—do deserve to be discussed. The number and frequency of meetings should also increase. Sub-committees can be tasked to deal with the most technical issues.
As an emanation of the only elected European institution, the European Parliament, the committee should focus its attention on citizens’ concerns and their rights, and primarily invite civil-society organisations to its meetings and hearings. The interests of Big Tech and the private sector are sufficiently well represented and defended.
European and sectoral trade unions have been proactive, made legal contributions and raised fundamental questions related to AI’s implications for workers and their rights. These are not yet reflected in current strategies or regulatory instruments.
Among the key issues the trade union movement would like the committee to address are: AI risks, and their categorisation, identification and mitigation; clarification of the liability regime when AI applications are deployed in the workplace; algorithmic management; and improving workers’ bargaining power on technology and data issues. This includes the need to rethink workplace surveillance, for example by requesting the European Data Protection Board to issue guidance on the application of the GDPR rules within the employment relationship.
Most of the current AI and data-regulation discussion revolves around bias—how to avoid it technically, improve the quality of the data and deal with unfairness and discrimination. Yet bias is already in the system when trade unions and the wider society are not actively engaged.
If action is taken now to address its shortcomings, the AIDA committee can help partially to bridge that gap and inject a much-needed shot of democracy into the debate.
(This article was originally published on Social Europe)