The Directive has stood the test of time and is, in general, fit for purpose: the approach of setting objectives to be achieved in the legislative text – leaving standardisation to adapt and flesh out the details of how they can be achieved – has worked well. In its current form, the Directive makes a significant contribution to the safety of workers as well as to the revenues of EU manufacturers, and it is questionable whether changes to the Essential Health and Safety Requirements (EHSRs) in Annex I should be made unless both sides of industry and the public authorities consider them indispensable. Since the REFIT programme calls for modifying legislation only on the basis of careful evaluation rooted in facts and substantial evidence, it seems essential to collect and assess facts and concrete case studies before proposing changes, making optimum use of the knowledge and experience available from a wide range of stakeholders.

Understandably, some experts are convinced that the Machinery Directive needs updating because of the challenges arising from progress in digital technologies: their concerns are legitimate, and we should all work to acquire and share the factual arguments that could justify such an update. On the other hand, stakeholders' opinions on issues connected to new technologies remain diverse, which suggests that the state of the (digital) technology is not yet developed to the point where specific proposals could be made and supported by a substantial body of evidence.

Although the use of Artificial Intelligence (AI) is not new, its development and use as part of digital technology are evolving rapidly. This development is still at an early stage in much of the EU machine tool and manufacturing industry, and it is important for the EU that it be encouraged. The Machinery Directive's well-known and widely used methodology for controlling risk does not need to be changed to accommodate this technology, because the principles of Risk Assessment and Risk Reduction (RA&RR) – in which the Machinery Directive is rooted – remain constant. The iterative combination of these principles is technology-neutral: RA&RR can be successfully applied to assess and decide whether any digital technology can be incorporated into machinery design so as to ensure that the machinery complies with the Machinery Directive. Accordingly, no technology (including new digital applications) can be introduced into machinery design if it cannot be verified and validated – for all phases of a machine's life cycle – by the conformity assessment procedures (which always include RA&RR) described in Article 12 of the Machinery Directive.
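The iterative logic of RA&RR can be sketched in a few lines of code. The sketch below is purely illustrative: the severity/probability scoring, the acceptability threshold and the example protective measures are assumptions of the author's, not values taken from the Directive or from any harmonised standard. What it shows is the loop itself: assess the risk, apply the next protective measure (most effective first), reassess, and stop only when the residual risk is acceptable or no measures remain.

```python
# Illustrative sketch of the iterative RA&RR loop. All numbers, the
# 1-25 scoring model and the measures list are hypothetical assumptions.

ACCEPTABLE_RISK = 4  # hypothetical threshold on a 1-25 scale


def risk_score(hazard):
    """Risk = severity x probability (a common simplified scoring model)."""
    return hazard["severity"] * hazard["probability"]


def apply_reduction(hazard):
    """Apply the next available protective measure, most effective first;
    return its name, or None if no measures are left."""
    if not hazard["measures"]:
        return None
    measure = hazard["measures"].pop(0)
    hazard["severity"] = max(1, hazard["severity"] - measure["severity_cut"])
    hazard["probability"] = max(1, hazard["probability"] - measure["prob_cut"])
    return measure["name"]


def ra_rr(hazard):
    """Iterate assess -> reduce -> reassess until the risk is acceptable
    or no further reduction is possible."""
    applied = []
    while risk_score(hazard) > ACCEPTABLE_RISK:
        measure = apply_reduction(hazard)
        if measure is None:
            return False, applied  # residual risk remains unacceptable
        applied.append(measure)
    return True, applied


hazard = {
    "severity": 4,
    "probability": 4,
    "measures": [
        {"name": "guard", "severity_cut": 2, "prob_cut": 0},
        {"name": "interlock", "severity_cut": 0, "prob_cut": 2},
    ],
}
ok, applied = ra_rr(hazard)
print(ok, applied)
```

The point of the sketch is the technology-neutrality of the loop: nothing in it depends on whether the hazard originates in mechanics, electronics or software.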

Should we run the risk of revising the Machinery Directive (articles or annexes) to try to cover technological developments for which nobody can make sufficiently accurate forecasts? The health and safety obligations the Directive places on the manufacturer deal adequately with developing technologies such as self-learning algorithms and robotics designed to work within safe boundaries and predetermined operational envelopes, as these can go through the RA&RR process before being placed on the market. Regrettably, many indicators point to calls to revise the legislation to cover unrealistic projections and untestable applications, such as autonomous robots able to learn without operating boundaries: for these products no risk assessment will be possible at the production phase, as their future operation cannot be predicted, verified and validated. The safe control of such machines would require adaptive or dynamic risk management methods that are incompatible not only with the current safe-design approach of the Machinery Directive, but also with the New Legislative Framework (NLF) and the product safety philosophy of the European Union. In other words, the Machinery Directive is perfectly able to differentiate between compliant and non-compliant digital applications by means of RA&RR.
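The "predetermined operational envelope" idea mentioned above can be made concrete with a minimal sketch. The limits, parameter names and the notion of an ML controller below are hypothetical examples, not drawn from the Directive or any standard: the sketch simply shows how any command proposed by a learned policy can be projected back into a fixed, independently verifiable envelope, so that the safety argument rests on the envelope rather than on the learning algorithm.

```python
# Hypothetical sketch: a learned controller's output is passed through a
# fixed "operational envelope" check before it reaches the actuators.
# All limits and the example command are assumptions for illustration.

SAFE_SPEED = (0.0, 1.5)   # m/s, assumed verified and validated limits
SAFE_REACH = (0.0, 0.8)   # m


def clamp(value, low, high):
    """Constrain a value to the closed interval [low, high]."""
    return max(low, min(high, value))


def enforce_envelope(command):
    """Project any (possibly learned) command into the safe envelope."""
    return {
        "speed": clamp(command["speed"], *SAFE_SPEED),
        "reach": clamp(command["reach"], *SAFE_REACH),
    }


# A learned policy might propose anything; the envelope keeps the
# machine's behaviour within the boundaries assessed during RA&RR.
raw = {"speed": 2.4, "reach": 0.5}   # hypothetical ML controller output
safe = enforce_envelope(raw)
print(safe)  # speed clipped to 1.5, reach unchanged
```

Because the envelope is static, it can be verified and validated once, during conformity assessment; the robot described in the text as learning "without operating boundaries" is precisely the case where no such envelope exists.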

At another level, many stakeholders share concerns about the unexplainable nature of decisions taken by current machine-learning systems, and therefore about the impossibility of tracing an incident or accident involving a machine steered by such a system back to its root causes. This impossibility seriously undermines the safety improvement cycle based on the ex-post analysis of incidents, accidents and near misses.

"We should try to look at the interaction between technology, business interests and social expectations with a critical eye, because the digital transition means different things to different people", says Stefano Boy, senior researcher at the ETUI. "Ideally, it would be important to find a 'reset button' and ride that button constantly in order to filter the good signals from the noise of stereotypes around the digital revolution, and to analyse critically the narrative that puts the internet of things, AI, robotics, big data, automation, autonomous systems, standardisation and ICT security in the same basket. This narrative speaks in terms of emerging unknown risks, unknown challenges and disruptive developments, and calls for a new regulatory environment able to face the new, accelerating industrial trend. Unfortunately, those blindly promoting this disruptive narrative – people alien to software engineering, testing, mathematical optimisation, statistics and probability – are like travel agents who send people to places they have never been themselves. Yes, not everyone is seduced by non-linear equations. But the trouble with some groups of experts is that they haven't read the minutes of the previous meetings", adds the researcher.