Artificial intelligence and adaptive systems that learn patterns from past behavior and historic data play an increasing role in our day-to-day lives. We are surrounded by a vast number of algorithmic decision aids, and increasingly by algorithmic decision-making systems, too. As a subcategory, ranked search results have become the main mechanism by which we find content, products, places, and people online. Their ordering therefore contributes not only to the satisfaction of the searcher, but also to the career and business opportunities, educational placement, and even social success of those being ranked. Researchers have thus become increasingly concerned with systematic biases and discrimination in data-driven ranking models.

To address discrimination and fairness in the context of rankings, three main problems have to be solved. First, we have to understand the philosophical properties of different ranking situations and of all important fairness definitions, so that we can decide which method is most appropriate for a given context. Second, we have to make sure that, for any fairness requirement in a ranking context, a formal definition meeting that requirement exists; if a ranking context requires group fairness, for example, we need an actual definition of group fairness in rankings in the first place. Third, the methods, together with their underlying fairness concepts and properties, need to be accessible to a wide range of audiences, from programmers to policy makers and politicians.
This work makes the following contributions to solve the aforementioned problems, which I will cover in depth in the talk:
Five Classification Contexts of Fairness: We identify the fairness properties of all important fairness definitions, including the ones we newly introduce, by relating them to different philosophical understandings of fairness. We introduce five concepts by which we classify all methods presented in this work.
Fair Ranking Methods: We present two group-fairness-based frameworks: an in-processing, exposure-based approach and a post-processing, probabilistic approach.
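To give a flavor of what a post-processing approach does, the sketch below greedily merges two score-sorted candidate lists so that every ranking prefix contains a minimum share of protected candidates. This is a simplified illustration under assumed names, not the actual framework presented in the talk: the real probabilistic method derives the per-prefix minimum counts statistically, whereas this sketch uses a plain proportional floor.

```python
import math

def rerank(protected, non_protected, k, p):
    """Illustrative greedy top-k re-ranking (hypothetical helper, not a library API).

    protected / non_protected: lists of (candidate_id, score), each sorted
    by score in descending order.
    Constraint (simplified): every prefix of length i must contain at
    least ceil(p * i) protected candidates.
    """
    ranking = []
    tp = 0          # protected candidates placed so far
    pi, ni = 0, 0   # read positions in the two input lists
    while len(ranking) < k and (pi < len(protected) or ni < len(non_protected)):
        # Minimum number of protected candidates required in the next prefix.
        need = math.ceil(p * (len(ranking) + 1))
        must_pick_protected = tp < need
        if pi < len(protected) and (
            must_pick_protected
            or ni >= len(non_protected)
            or protected[pi][1] >= non_protected[ni][1]
        ):
            ranking.append(protected[pi])
            pi += 1
            tp += 1
        else:
            # Either no constraint pressure and the non-protected candidate
            # scores higher, or the protected pool is exhausted.
            ranking.append(non_protected[ni])
            ni += 1
    return ranking

# Example: with p = 0.5, a protected candidate must head the ranking even
# though "x" has the highest score.
result = rerank(
    protected=[("a", 0.9), ("b", 0.5)],
    non_protected=[("x", 0.95), ("y", 0.8), ("z", 0.7)],
    k=4, p=0.5,
)
print([cid for cid, _ in result])  # ['a', 'x', 'b', 'y']
```

Outside the constraint, the merge falls back to ordering by score, so utility is sacrificed only where the fairness floor demands it.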
An Open-Source API: We implement our new fairness frameworks in the first open-source library for fairness in ranked search results, available as stand-alone libraries in Python and Java, as well as a plugin for the widely used search engine Elasticsearch.
Meike Zehlike is a Senior Applied Scientist in the Algorithmic Privacy and Fairness team at Zalando Research and an ethical-AI consultant. She earned her Ph.D. in computer science at Humboldt-Universität zu Berlin in 2022, working under Ulf Leser (HU), Carlos Castillo (UPF Barcelona), and Krishna Gummadi (MPI-SWS Saarbrücken). She received a prestigious Ph.D. research grant from the Data Transparency Lab in 2017 and several awards, such as the Google Women Techmakers Award in 2019. Her research interests center on artificial intelligence and its social impact, automatic discrimination discovery and algorithmic fairness, and the use of artificial intelligence in medical applications.