Speaking about today’s digital challenges, how does European public order guide the use of AI in addressing national security, human trafficking, and migrant smuggling?
The main challenge is that we currently lack tools to measure how effective or fair AI systems really are. While states are developing their own national AI tools for areas such as security and migration control, there is still no global consensus on how to assess the decisions these systems make.
European public order offers guidance through the principles established by the European Court of Human Rights. In Big Brother Watch and Others v. the United Kingdom (2021), the Court examined the UK’s system of bulk interception of online communications. It found that while states may collect data for national security, the UK’s approach lacked essential safeguards: independent authorisation before surveillance, continuous supervision, and impartial review afterwards. Without those checks, the system allowed excessive interference with privacy and was ruled incompatible with Article 8 of the European Convention on Human Rights.
These same principles apply to AI systems used in public security or migration management. Just as surveillance powers must be subject to independent control, AI tools that process large amounts of personal data should operate under clear legal limits and continuous oversight.
Do European courts consider algorithmic decisions as real ‘decisions’ under human rights law, especially when balancing privacy and security in new EU data rules?
When balancing privacy and security in the context of new EU data rules, Europe’s courts offer two different but complementary perspectives. On one hand, we have the European Court of Human Rights, which approaches technology through the lens of constitutionalism: the idea that human rights form part of Europe’s legal order, not just an international agreement between states. This view stems from Loizidou v. Turkey, where the Court described the European Convention on Human Rights as ‘a constitutional instrument of European public order’.
On the other hand, the Court of Justice of the European Union focuses on how EU law, particularly the General Data Protection Regulation, governs automated decision-making and personal data. Together, these two courts shape how Europe defines accountability and legality in the age of algorithms.
Unlike the United States, where constitutional interpretation often interacts with powerful corporate interests, the European Union and the Council of Europe are expected to take a leading role in regulating AI systems operating within Europe. A clear example is the Entry/Exit System, which is being rolled out gradually over a six-month period across 29 European countries. By April 2026, it will replace manual passport stamping with an automated database registering the biometric and personal data of non-EU nationals crossing the EU’s external borders. While the system aims to enhance efficiency and strengthen security, it also raises questions of proportionality, oversight, and compliance with data protection standards.
To what extent can constitutional protection in liberal democracies adapt to the challenges posed by new technologies like AI systems used in public security operations?
This adaptation could be achieved by updating how we define a ‘decision’ in law. When an automated system produces an outcome that affects someone’s rights, that process should be recognised as a legal decision, subject to judicial review and due process. In other words, constitutional protection must evolve to cover not just human decisions, but algorithmic ones too.
At the international level, Europe’s legal tradition already offers tools for balancing competing rights. The proportionality test, used by both European and national courts, ensures that one right, such as security, cannot simply be placed above another, like privacy. The goal is to harmonise these rights so that both can coexist within the rule of law.
Looking ahead, we might also need an ‘AI compatibility test’ to respond to these challenges and assess how public algorithms align with human rights principles, so that everyone understands what happens and what the consequences are. Public AI tools should always include ex-ante safeguards, such as proportionality checks and data minimisation, to ensure that technological progress strengthens, rather than undermines, constitutional democracy.
Can AI governance models be effectively applied to security-related cases? Could you provide an example of an effective AI model or new technology model that works?
We already have examples in the European Court of Human Rights case law. The Big Brother Watch and Others v. the United Kingdom judgment, for instance, sets out six key safeguards for surveillance and data-processing systems: a clear legal basis, independent authorisation, continuous oversight, limits on data retention, access to redress, and regular auditing. The law must be clear enough to avoid arbitrariness, and investigative bodies must ensure objectivity and legitimacy in decision-making.
Although there is no international regulation or clear guidance yet, the Court’s approach reminds us that individuals must be able to challenge automated decisions, know when they occur, and seek remedies for violations of their rights. Cases such as Niemietz v. Germany show that almost every aspect of personal dignity can fall under human rights protection. The same principle should guide us when dealing with AI: when an algorithm causes harm, there must be clear guarantees and accountability.
Your research also focuses on Roma and Sinti minority communities. How does this relate to rights protection?
These issues are especially relevant for vulnerable groups such as Roma and Sinti communities, who continue to face discrimination in housing, education, and access to other services. As governments adopt digital tools and AI-driven systems, these inequalities risk being reinforced rather than reduced.
Under Article 22 of the EU’s General Data Protection Regulation, individuals have the right not to be subject to decisions based solely on automated processing, yet most people do not know how algorithms generate decisions about them. This lack of interpretability and transparency contributes to inequalities.
That’s why I would like to propose clear standards to protect the rights of Roma and Sinti communities in Europe. Unlike the US, which has a centralised non-discrimination mechanism that allows for direct monitoring, Europe’s system remains decentralised and uneven.
Discrimination cases are complex, as they are often intertwined with other rights, such as private life or substantive rights. We must determine whether a situation is discriminatory or not, which raises the question: do we have sufficient guarantees, or is there a gap in safeguards?
Human rights law must fill those gaps by creating and monitoring effective protections. Each state should monitor the situation, amend its laws when necessary, train public authorities, and support journalists and civil society actors who can act as watchdogs. Building a strong human rights culture is essential to ensure that digital governance protects everyone.
What kind of methodology are you currently exploring at the EUI?
At the EUI, I’m in contact with various departments and units, benefiting from the Institute’s multidisciplinary setting. Together with the Robert Schuman Centre colleagues, I’m working on understanding the historical and political contexts behind today’s challenges in AI and governance.
Some professors note that Europe tends to take a more conservative, restrained approach to global challenges, while the US tends to be more active. Understanding this difference is key to developing effective AI governance within Europe’s constitutional framework.
I’m also involved in several projects that explore open-source software models at the EU level. For example, during the recent Securing Europe's open digital infrastructure seminar led by EUI Professor Thomas Streinz, we discussed how new technological implementations can support EU regulations. This is important because we should understand European values as powerful tools. The European public order provides a clear normative base, allowing us to protect our communities within and beyond Europe.
We also discussed how to regulate the interaction between political will, US companies, and European actors. As we can see, the EU is adopting many international regulations on AI, whereas the US is less active. Europe applies the principle of proportionality, while the US uses different judicial tests, such as rational basis review, intermediate scrutiny, and strict scrutiny.
In Europe, proportionality means balancing two rights to ensure neither dominates the other. States establish national consensus and monitor how it functions, which helps prevent violations and strengthen protections. It’s still unclear how to work effectively with US companies, but we’re paying attention to this.
To give an example: in places like Moldova, my country, aligning legislation with EU standards such as the GDPR helps create shared norms for privacy, data protection, and accountability across borders. This kind of legal harmonisation is essential to ensure that technological development remains consistent with European legal standards and respect for fundamental rights.
Ion Cojocari is a Fernand Braudel Fellow at the EUI Department of Law and an expert in human rights and law and technology, with international experience in legal education and advocacy. He holds an LLM from UC Berkeley and a PhD in Law from the State University of Moldova. His work as an attorney, former judge, and trainer has focused on human rights violations, legal reform, and the intersection of law and emerging technologies.
The Fernand Braudel Fellowship special call for applications was launched in the framework of the EUI Widening Europe Programme, which is supported by contributions from the European Union and EUI Contracting States. The programme is designed to strengthen internationalisation, competitiveness, and quality in research in targeted Widening countries.