Lecture

The Theory of Artificial Immutability

Protecting Algorithmic Groups under Anti-Discrimination Law

When

17 October 2022

15:00 - 16:00 CEST

Where

Zoom

In this policy talk, Sandra Wachter (University of Oxford) argues that algorithmic groups should be protected by non-discrimination law and shows how this could be achieved.

Artificial intelligence is increasingly used to make life-changing decisions, including who succeeds with a job application and who gets into university. To do this, AI often creates groups that have not previously been used by humans. Many of these groups are not covered by non-discrimination law (e.g., 'dog owners' or 'sad teens'), and some of them are even incomprehensible to humans (e.g., people classified by how fast they scroll through a page or by which browser they use).

This matters because decisions based on algorithmic groups can be harmful. If a loan applicant scrolls through the page quickly or types only in lowercase when filling out the form, their application is more likely to be rejected. If a job applicant uses a browser such as Internet Explorer or Safari instead of Chrome or Firefox, they are less likely to be successful. Non-discrimination law aims to prevent comparable harms by guaranteeing equal access to employment, goods, and services, but it has never protected "fast scrollers" or "Safari users." Granting these algorithmic groups protection will be challenging because the European Court of Justice has historically been reluctant to extend the law to cover new groups.
