Fairness and Machine Learning: Algorithmic Discrimination and Exploitation as a Challenge for EU Law

Algorithms are fundamentally transforming the relationships and dynamics between market actors in many different contexts. The deployment of machine learning rapidly multiplies the patterns extrapolated from large data sets, enabling ever finer differentiation between individuals in advertising and contract design. The law, however, is lagging behind in developing adequate responses to these shifts in knowledge and predictive capacity.

This project is devoted to the regulatory challenges resulting from this gap. It starts from the observation that not all differentiations driven by machine learning are benign; rather, some must be considered illegitimate in a society that strives to adhere to basic notions of fairness. Two cases merit special attention in this context: differentiation based on membership in protected groups (algorithmic discrimination) and differentiation based on the degree of cognitive vulnerability (algorithmic exploitation).

Against this background, the project aims to analyse, in a first step, which responses current positive EU law and case law articulate to these two challenges; in particular, extant data protection, anti-discrimination, consumer and contract law are examined at the European level. In many cases, however, the remedies of the past will arguably not provide adequate solutions to the puzzles of the machine learning future: algorithmic discrimination and exploitation threaten to fall through the doctrinal cracks of a legal system that was largely built for the analogue era. Furthermore, victims of discrimination or exploitation are often unaware of this fact, and lack access to the data and algorithms needed to prove their case. This leads to rampant enforcement deficiencies. More generally, the inscrutability of decision-making driven by machine learning prevents effective legal ex post control of unfair practices. Finally, the relationship between data protection law (e.g., the GDPR) and general contract law (e.g., the Unfair Terms Directive) is unclear at best.

Hence, in a second step, the project will explore novel regulatory tools to counter algorithmic discrimination and exploitation. For example, regulation may impose rules of algorithmic fairness in order to mitigate illegitimate differentiation ex ante, before self-learning algorithms are even deployed in the marketplace.
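One candidate for such an ex ante rule of algorithmic fairness is the demographic-parity criterion, which compares the rate of favourable decisions across protected groups before a model is deployed. A minimal sketch, assuming binary decisions and a binary protected attribute (both hypothetical here; the 0.8 threshold mirrors the US "four-fifths rule" and is illustrative only, not drawn from the project text):

```python
# Illustrative ex ante fairness check: demographic parity.
# Assumptions (not from the project text): decisions are binary
# (1 = favourable), and the protected attribute takes two values.

def selection_rate(decisions, group, value):
    """Share of favourable decisions among members of one group."""
    members = [d for d, g in zip(decisions, group) if g == value]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_ratio(decisions, group):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = [selection_rate(decisions, group, v) for v in set(group)]
    lo, hi = min(rates), max(rates)
    return lo / hi if hi else 1.0

# Example: eight loan decisions, protected attribute "A"/"B".
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio = demographic_parity_ratio(decisions, group)
print(f"parity ratio: {ratio:.2f}")
# A ratio below an agreed threshold (e.g. 0.8) would flag the
# model for review before market deployment.
```

Such a check operates on the model's outputs alone and therefore does not require access to the (often inscrutable) internals of a self-learning system, which is precisely why it is attractive as an ex ante regulatory tool.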

Hacker, Philipp Niklot (BR / German and International Private and Commercial Law)

Participating external organisations

Project start: 01/2019
Project end: 12/2020



Last updated 2021-04-01 at 17:48