This framework contains policy recommendations to chart a new course towards a fairer, more equitable AI ecosystem.
The global event also served as an occasion to debate the complexities of artificial intelligence (AI) via local panel debates, organised in collaboration with the partners of the initiative: Florence School of Transnational Governance (European University Institute, Italy), Center for Human-Compatible Artificial Intelligence (UC Berkeley, United States), Institute of Education, Development and Research (Brazil), Research ICT Africa (South Africa), and Paris School of International Affairs, Tech and Global Affairs Innovation Hub (SciencesPo, France).
AI in the information and communication space
The relentless march of AI across the global information and communication landscape heralds both promise and peril. As AI reshapes how information is created, disseminated, and consumed, it also poses profound challenges to democratic processes and fundamental rights.
Recent events underscore the disruptive potential of AI in political processes, from deepfakes influencing voting behaviour to misinformation propagated by AI-driven chatbots. Yet, AI also holds untapped potential to enhance news production, data analysis, and information access for societal benefit.
During the event, Pier Luigi Parcu, Director of the Centre for a Digital Society (CDS) and the Centre for Media Pluralism (CMPF) at the EUI, pointed out a key obstacle to responsible AI deployment: “AI is not a public good, it’s a very private good. [But] there is an aspiration for it to be public”.
Michael Bąk, Executive Director of the Forum on Information and Democracy, broke down how the policy recommendations set out to change that:
“Democracies must stop allowing tech companies to dictate the trajectory of technology, to capture the policy narrative and to set the agendas. Solutions exist to build a global information and communication space conducive to democracy, that creates value for people not only as consumers but first and foremost as citizens. We are presenting these solutions today.”
Central to the framework is a series of recommendations to foster inclusive, accountable, and transparent AI systems: “We call this Fair Trade AI,” noted Bąk. These recommendations advocate for establishing a tailored certification system inspired by Fair Trade principles, empowering users with alternatives to recommender systems that prioritise societal well-being, and enshrining individuals' rights to transparency and non-discrimination in AI interactions.
Furthermore, the report calls for participatory processes to determine rules governing AI systems, emphasising the importance of inclusivity and transparency in decision-making.
Keeping AI accountable
The Florence panel, held in Palazzo Buontalenti, saw the participation of EUI experts Marta Cantero Gamito (Florence School of Transnational Governance), Pier Luigi Parcu (Centre for a Digital Society), Lisa Ginsborg (European Digital Media Observatory), and Margot Kaminski (Department of Law).
The panellists discussed a range of issues relating to AI: ownership, ethics, fundamental rights, and liability. The policy recommendations presented are a concrete effort to tackle these issues and to shape the future trajectory of AI governance, so that the technology can benefit the public and aid the democratic process.
Florence STG Research Fellow, Marta Cantero Gamito, spoke about the potential of this policy framework: “By endorsing these recommendations, member states are committing to a future where AI serves as a force for good, guided by democratic principles and ethical considerations. Over the last few years, it has become clear that AI cannot operate in a vacuum. Therefore, while we wait for new rules we must actively incentivise responsible and ethical AI practices. These recommendations are a step in this direction and shift from ideas to action, guiding policy-makers, companies and stakeholders worldwide in safeguarding democracy and fostering Fair Trade AI.”
Other members of the Florence panel, Lisa Ginsborg and Margot Kaminski, also called for increased accountability of AI.
Lisa Ginsborg (EDMO) spoke about the possibility of a collaboration between developers, deployers, and fact-checkers to counter disinformation. However, she insisted that for this to be a genuinely multi-stakeholder process, “there needs to be transparency on all fronts (developers, platforms, fact-checkers, etc) and a will to share good practices and data among all involved.”
Margot Kaminski, a Senior Fellow in the Department of Law at the EUI, also highlighted how the report successfully “constructed AI large language models as complex, risky systems that pose a threat to democracy or human rights and [as such] need risk regulation”.
Read more about the targeted efforts of the Policy Working Group to shape a positive AI ecosystem in the global information space in the report "AI as a Public Good: Ensuring Democratic Control of AI in the Information Space".