AI: A European approach to excellence and trust
- Policy efforts to achieve trust in AI should be designed with the goal of enabling AI excellence in Europe in mind.
- Europe should support its bid for AI excellence by leveraging its digital frontrunners to share best practices, coordinating policy actions holistically, and directing resources to key areas that give businesses a sound basis for AI development and roll-out (e.g. 5G, cybersecurity, data infrastructure, R&I, digital skills and standardisation).
- Any attempt to achieve trust in AI through legal means should recognise that AI is a suite of technologies still in its early stages. A good first step would be to assess existing laws and identify potential legal gaps, so that existing laws can be adjusted before new ones are made.
- Any new requirements should follow a risk-based approach and set market access requirements only for “high-risk” AI, defined to focus on where the highest and most widespread societal damage is likely to arise. This is a pragmatic starting point for achieving trust while enabling AI development to continue to flourish.
- Legal certainty, specific responsibilities for all actors involved, and a clear framework for business compliance in the delivery of AI must be ensured, so that an AI system, or a product using one, is covered by a single set of clearly assigned product safety rules. Accordingly, either the new legislation for “high-risk” AI or existing sector-specific legislation under the New Legislative Framework should apply, but not both.
- A voluntary labelling scheme for AI not covered by this new legislation could be useful for enhancing trust. Each scheme should, however, be defined following a bottom-up approach, identifying minimum criteria to be used by organisations choosing to participate in the same ecosystem.
- The potential legal gap created by new economic actors in “high-risk” AI supply chains that cannot legally be defined as “producers” under the Product Liability Directive (PLD) should be explored.