The European Union (EU) is about to become the first polity to adopt a comprehensive approach to regulating artificial intelligence (AI). With the forthcoming AI Act, the EU will put into force a legal instrument that classifies AI systems and models based on the risks associated with their use and requires AI providers and deployers to adopt measures addressing those risks. Because these measures have a highly technical character, the Act demands extensive technological expertise from both regulated actors and the authorities that will enforce it. The AI Act includes various regulatory mechanisms that respond to this demand for technical expertise and thereby support the effective enforcement of its requirements. This conference focuses on the most salient of those mechanisms: technical standardisation.
Within the AI Act, conformity with harmonised technical standards is not mandatory, but it creates a presumption of compliance with the legal requirements covered by those standards. This aspect of the Act is not a novelty, as it has been present in various forms of technical regulation that the EU has adopted over the past decades. In those contexts, the legal status of standardisation has been criticised as a de facto outsourcing of regulatory power from the EU to private standardisation bodies, an arrangement whose legitimacy and lack of public accountability mechanisms have been questioned. The AI Act compounds these issues by extending the scope covered by such technical standards. In addition to the matters of health and safety traditionally within the purview of harmonised standards, the Act expects AI-specific standards to protect fundamental rights, as well as such broad values as democracy, the rule of law, and the protection of the environment. As a result, the EU’s approach to AI standardisation raises additional questions about whether these private, technical bodies are the appropriate sites for setting norms that govern critical public interests.
Against this backdrop, the conference aims to kickstart a reflection on how processes of technical standardisation in the AI Act will affect the Act’s enforcement and thereby ensure the protection of fundamental rights, democracy, and the rule of law, as well as the accountability of all actors (public or private) involved in AI regulation. The conference is divided into two panels. The first panel has an institutional focus, discussing the role of different institutions in the governance of technical standardisation in the AI Act, and how accountability, legitimacy, and adequate fundamental rights protection can be ensured in such a governance system. The second panel discusses the relation between technical standardisation in the AI Act and the values and public interests the Act is meant to promote, such as democracy, the protection of fundamental rights, and the EU’s positioning in the global AI landscape.
This event has been organised under the umbrella of the EUI Interdisciplinary Research Cluster on Digital transformations and society (DigiCluster), in collaboration with the EU Law Working Group and Digital Public Sphere Working Group.
The event will be followed by a cocktail reception at 16:00 in Sala Bandiere, Villa Schifanoia.