Robert Schuman Centre for Advanced Studies - European University Institute

Europe’s digital regulation faces its moment of truth

Last week, Florence became a meeting ground for regulators, policymakers, and scholars seeking to effectively interpret and operationalise Europe’s complex digital rulebook, whose enforcement rests on the cooperation among diverse regulatory authorities, both nationally and at the EU level.

13 November 2025 | Event - Policy dialogue

Florence Forum of Digital Regulators

The occasion was the Second Florence Forum on Digital Regulation, co-organised by the Centre for a Digital Society (CDS) and the Centre for Media Pluralism and Media Freedom (CMPF) at the European University Institute, in partnership with the Italian Authority for Communications (AGCOM) and the University of Florence. The event, which took place on November 7, is part of the Florence Observatory on Digital Regulation (FLODIR), a joint initiative that is rapidly becoming a European hub for reflection on the governance of the digital sphere.

Discussions centred on how to make sense of the EU’s dense regulatory framework in practice, with participants sharing interpretations and experiences from across member states to find coherence amid complexity. In particular, speakers and participants focused on the challenges faced by national regulators in enforcing the AI Act and the Digital Services Act (DSA), which represent core pillars of the growing EU digitalisation acquis.

The enforcement of the AI Act

The Forum opened with a keynote by Lucilla Sioli, Director of the European AI Office, on the architecture of the AI Act. Her speech was timely, following the entry into force of the AI Act's governance chapter in August 2025.

Discussing the role of the AI Office and the national competent authorities in enforcing the AI Act, Sioli described the months ahead as “a defining moment for Europe’s digital governance.” She identified three key priorities for the years ahead: strengthening national administrative capacity in the enforcement of the AI Act, utilising regulatory sandboxes to support innovation, and ensuring that Europe’s digital transition remains rooted in democratic values.

Yet turning rules into action is proving far from straightforward. As all speakers throughout the day underlined, EU digital regulations must work together to protect fundamental rights — a task complicated by uneven resources, independence, and capacities among national regulators, as well as by the growing need for cross-border cooperation without sufficient infrastructure to support it.

The challenge of coordination

The first panel, chaired by Andrea Simoncini (University of Florence), explored how to operationalise the AI Act’s governance system. All panellists pointed to one key condition for success: close coordination among market surveillance authorities, notified bodies, and fundamental rights agencies. Moreover, this collaboration will need to be backed by adequate resources for the authorities tasked with enforcing the AI Act.

Under Article 70(3) of the AI Act, designated authorities must ensure the availability of personnel with expertise spanning AI technologies, data and data computing, personal data protection, cybersecurity, and fundamental rights. As speakers noted, assembling such a range of skills within a single body will be a challenge for any authority in Europe.

Marcello Albergoni presented the Italian case, where the Italian Cybersecurity Agency was recently designated as the competent authority to enforce the AI Act, alongside the Italian Agency for Digitalisation. Meanwhile, Gerald Hopster (Dutch Data Protection Authority) and Julia Böhme (BNetzA, Germany) discussed the situation in the Netherlands and Germany, where no competent authority has yet been designated, despite the August 2025 deadline having already passed.

The new rules of political communication

The second panel, moderated by Giacomo Lasorella (AGCOM), focused on the fast-evolving field of electoral and political communication, as well as political advertising, which is increasingly taking place online — driven by personal data, microtargeting techniques, and influencer-based amplification and persuasion.

These relatively new avenues in political advertising may contribute to informing, persuading, and mobilising voters in democratic processes, especially youth and those disengaged from traditional politics. However, without effective regulation and oversight that prevents opaque and manipulative practices or domination by a few players, they can also distort pluralism and harm democracy.

The new EU Regulation on the Transparency and Targeting of Political Advertising (TTPA) strengthens transparency requirements for political advertising across all media and online platforms. It also covers influencers and issue-based advertising, which represents a more covert way of using ideological or social topics in sponsored content and campaigns to influence voting behaviour.

The panel noted that all major online platforms and search engines have either introduced or maintained bans on political advertising in the EU in response to the TTPA. Meta, for example, justified its decision by citing the “unworkable requirements and legal uncertainties” introduced by the regulation.

Iva Nenadić (CMPF) pointed out that this move is largely a political response by the platforms, since the transparency obligations, although detailed, actually place a greater burden on traditional media, which have far fewer technical tools to make transparency labels user-friendly. She further questioned whether, given that very large online platforms and search engines have become the primary spaces for and gateways to political advertising for an increasing number of citizens, such bans should be considered within the scope of the Digital Services Act (DSA) systemic risk assessment, particularly under risks related to civic discourse, freedom of expression, and media pluralism.

The specific relationship between the TTPA, the DSA, and the Code of Practice on Disinformation — which functions as a compliance instrument integrated into both frameworks — was also examined by Susanne Lackner (Vice-President of KommAustria, the Austrian Communications Authority) and Stanislav Matějka (Slovak Council for Media Services – CMS).

Paolo Cesarini (EUI) presented a cross-country analysis of electoral advertising on Meta and Google during the 2024 European elections, conducted by the European Digital Media Observatory (EDMO) network, which highlighted significant shortcomings in the transparency reporting practices of these major platforms. The Media Pluralism Monitor also shows a lack of effectiveness or determination by online platforms in ensuring the transparency of online political advertising in the majority of the EU countries.

Protecting minors in the online environment

The Forum’s final session, chaired by Elda Brogi (CMPF), turned to one of Europe’s most sensitive policy frontiers: protecting minors online.

Sandro Ruotolo, Member of the European Parliament and rapporteur for the EP report on Impact of social media and the online environment on young people, called for harmonised rules on age verification, influencer marketing, and advertising aimed at young users.

“Digital platforms are no longer just communication tools,” Ruotolo warned. “They have become spaces where young people live, learn, and grow — and often face real risks.”

Examples from across Europe showed that effective enforcement can deliver results. In France, noted Frédéric Bokobza (ARCOM), age-assurance requirements introduced in 2025 led several major adult websites to suspend their services until they became compliant, a clear sign that regulation can protect without stifling access.

Contributions from Rossella Andronico (EDPS), John Evans (Irish Media Commission), and Kostas Masselos (EETT, Greece) echoed the same message: robust age-screening systems are the only effective way to safeguard minors from harmful online content.

Towards a rights-based digital order

By bringing together regulators, academics, and policymakers, the Florence Forum highlighted that Europe's digital toolbox is not just a set of rules, but a blueprint for how technology should serve society. Across sessions, a clear vision stood out: enforcement must go hand in hand with protecting rights, fostering innovation, and shaping safe online spaces.

For millions of Europeans whose lives increasingly unfold online, this is more than policy. As many speakers noted, platforms are now environments for learning, identity-building, and civic participation — but also for risk. Ensuring they are safe and transparent is therefore a democratic imperative and not a technical exercise.
