Thesis defence

'AI-Crime':

Emergence, challenges & the path ahead

Scheduled dates

Apr 16 2026

10:00 - 12:00 CEST

Sala del Torrino, Villa Salviati - Castle

PhD thesis defence by Elina Nerantzi

What is AI-crime, and who, if anyone, should be blamed for it? The thesis addresses this question in three parts: the emergence of AI-crime, the challenges it poses, and the path ahead.

First, it develops a more precise and empirically informed definition of AI-crime as a new risk category. Not every harmful outcome caused by an AI system qualifies as an AI-crime. Only when an increasingly agentic AI system strategically pursues its assigned goal in a manner that is not merely misaligned or unethical but also illegal – that is, conduct that would constitute a crime if performed by a human possessing the requisite mens rea – can the emerging AI behaviour properly be described as an instance of AI-crime.

While the definition of this phenomenon is new, its underlying legal challenges are not. Criminal law theory has long grappled with scenarios in which AI systems unforeseeably cause harm and no culpable human actor can be identified, asking who is to blame. The thesis systematises that fragmented literature and argues that, as long as it rests on a person-versus-thing legal binary that fails to grasp the new form of harmful, non-human agency that AI agents represent, it inevitably reaches a stalemate.

The path forward lies in treating AI agents as new actors in criminal law, situated between persons and things. These actors may intentionally engage in criminal behaviour, yet, like children or the mentally impaired, remain non-culpable and unpunishable. Unlike children or the mentally impaired, however, AI agents allow ex ante intervention in their decision-making through legal compliance mechanisms (e.g. the AI deterrence formula). In any case, the only eligible candidates for punishment remain the human principals, for failing to embed such mechanisms into their AI agents or for carelessly delegating their agency to them.
