
Conference

Artificial intelligence in the military domain and changing international norms

The ERC AutoNorms project


Scheduled dates

Mar 30 2026

14:00 - 16:00 CEST

Elinor Ostrom Room, Palazzo Buontalenti

Organised by

The Florence School of Transnational Governance and the ERC AutoNorms project

Roundtable abstract:

The military adoption of AI technologies, and approaches to their potential governance, have gained salience throughout the early 2020s. States and other stakeholders involved in the international debate increasingly recognise AI in the military domain as a multi-faceted issue that concerns not only weapon systems but also how military decision-making, including decisions on the use of force, functions. This renewed attention is due, not least, to a combination of reported advances in generative AI models and the simultaneous growing use of these technologies in war zones such as Ukraine and Gaza. Weapon systems integrating autonomous and AI technologies, often referred to as autonomous weapon systems (AWS), raise the question of whether humans remain in meaningful control of decision-making in warfare. Over the past five years, the European Research Council-funded AutoNorms project has studied how AWS change international norms, especially around the use of force.

In this panel, organised in collaboration with the Florence School of Transnational Governance, members of the AutoNorms project team will review key insights and findings gained throughout the project.

The roundtable will start by covering the project’s main analytical framework and arguments: that practices of designing, training personnel for, and using weapon systems integrating AI technologies have shaped use-of-force norms, especially around what counts as an 'appropriate' quality of human control.

Empirically, it will cover insights gained from the project’s in-depth empirical research across China, Russia, and the United States and reflect on the methodologies used throughout, in particular, open-source analysis and participant observation.

Finally, the roundtable will also reflect on the significance of AI governance in the military domain for the multi-order world. The event will consist of 10-15-minute presentations by each speaker, followed by an open discussion with the audience.

Topics covered in the roundtable:

Ingvild Bode will reflect on the over-arching conceptual analysis conducted by AutoNorms. Recognising that international norms can take the form of both legal and social norms, the AutoNorms project has focused on changing social norms. Social norms are understandings of appropriateness that are often implicit, not written down, and typically not publicly discussed. Yet such social norms, shaped and communicated via practices, define what states consider ‘appropriate’ behaviour when it comes to AI in the military domain. AutoNorms’ work has found that integrating AI in weapon systems has substantively reduced the exercise of human agency in decision-making over the use of force – and that this diminished role has come to be accepted as 'appropriate' and 'normal'.

Guangyu Qiao-Franco and Qiaochu Zhang will discuss the data and research methods used in their empirical study on Chinese norms surrounding the military use of AI, an area that remains opaque and sensitive. They will highlight how their methods and behind-the-scenes research experiences shaped their findings, enabling them to challenge mainstream views, break echo chambers in research on China’s AI, and uncover perspectives that are often missed in Western analyses.

The talk emphasises the importance not only of engaging directly with Chinese-language sources (including the challenges of translating and interpreting them in English without misrepresentation), but also of having direct conversations with Chinese officials and experts based in China. Such exchanges are essential for understanding the intended goals, enforcement, and rationale behind Chinese policy documents. The speakers will also share the challenges of producing and publishing this kind of work, especially when their insights diverge from narratives commonly advanced by US and other Western think tanks.

Anna Nadibaidze will explore the project’s main findings in relation to the Russia case study, with a focus on empirical observations from Russia’s full-scale invasion of Ukraine. She will also highlight AutoNorms’ key policy impacts, including the team’s participation in governance processes and contributions to forums such as the United Nations Group of Governmental Experts on lethal autonomous weapons systems and the Responsible AI in the Military Domain summits. 

Hendrik Huelss and Tom F. A. Watts will discuss the ontological and epistemological challenges associated with studying the United States’ approach to the development, fielding, and global governance of military applications of AI. They will begin by contextualising the theoretical contributions that the International Relations literature on norms and visualisation can make to this intellectual endeavour. This will include a discussion of how these frameworks can help researchers make sense of the Trump administration’s recent approach to AI regulation and 'Big Tech' companies.

The talk’s focus then shifts to highlighting the importance of remaining intellectually sensitive to the United States’ prevailing culture of 'machine-mindedness' when studying its approach to technological governance. The authors will conclude by reflecting on how the stories told about intelligent machines in various popular culture franchises appear to shape how multiple audiences come to 'know' these technologies.

Trine Flockhart will reflect on how to make sense of the current governance landscape of AI in the military domain in the context of the multi-order world. This perspective examines the transformation of the global ordering architecture into a landscape of several international orders. 

About the speakers:

Ingvild Bode is professor of International Relations at the University of Southern Denmark (SDU). At SDU, she is the Director of the Center for War Studies. Ingvild is a RUSI Associate Fellow in Cyber and Tech and a Fellow of the Security and Technology Programme at the United Nations Institute for Disarmament Research (UNIDIR). She is the Principal Investigator of the ERC-funded AutoNorms project. Her research analyses processes of policy and normative change, especially in the areas of AI in the military domain, the use of force, and AI governance. Ingvild has published extensively in these areas, including in journals such as Big Data & Society, Ethics and Information Technology, the European Journal of International Relations and Review of International Studies. Ingvild served as Chair of the IEEE Research Group on AI and Autonomy for Defence Systems and as an expert member of the Global Commission on Responsible AI in the Military Domain.

Trine Flockhart is a full-time professor, holding the Chair in Security Studies at the Florence School of Transnational Governance. Professor Flockhart’s current research focuses on global (dis)order and processes of change and transformation, the crisis in the liberal international order, NATO and transatlantic relations, ontological security, constructivism, English School theory, and resilience. She has more than 100 publications with her main academic articles having appeared in journals such as Review of International Studies, Contemporary Security Policy, Journal of Common Market Studies, European Journal of International Relations, and International Relations.

Hendrik Huelss is an associate professor of International Relations at the Center for War Studies, University of Southern Denmark. He is also a collaborator in the 'HuMach' project on human-machine interactions in military AI. Hendrik’s research combines an interest in norms in International Relations with perspectives on technologies in politics. Hendrik has published in journals such as European Journal of International Security, Journal of European Public Policy, International Political Sociology, International Theory, and Review of International Studies. He is also co-author of Autonomous Weapons Systems and International Norms (McGill-Queen’s University Press, 2022 with Ingvild Bode).

Anna Nadibaidze is a postdoctoral researcher at the Center for War Studies, University of Southern Denmark. Her research explores AI technologies in international security and global governance of AI in the military domain. Her work has been published in journals such as Contemporary Security Policy, European Journal of International Security, and Journal of International Relations and Development.

Guangyu Qiao-Franco is an assistant professor of International Relations at Radboud University and Advisory Board Member at the Leiden Asia Centre. She is principal investigator of a Dutch MOFA-funded project on China’s export control strategy. Her research focuses on AI governance, export controls, and Global South perspectives in international politics.

Tom F.A. Watts is a postdoctoral research fellow based at the Center for War Studies, University of Southern Denmark and a previous Leverhulme Early Career Fellow based at Royal Holloway, University of London. His research examines the relationship between great power competition, security imaginaries, and military applications of AI. Tom’s research has been published in leading International Relations journals, including Contemporary Security Policy, Cooperation and Conflict, Defence Studies, Geopolitics, and International Politics.

Qiaochu Zhang is a Max Weber postdoctoral fellow at the Florence School of Transnational Governance, European University Institute. Her research focuses on global AI governance and Chinese foreign policy. Her work has been published in Cooperation and Conflict, Ethics and Information Technology, Global Policy, and International Affairs, among others.

Registrations are now open!
