Algorithmic Fairness and Discrimination Discovery in People Search Engines
Invited Talk by Meike Zehlike
23 March 2017, by Reinhard Zierke
The LT group is happy to announce:
Date: March 23, 2017
Time: 13:30 – 14:30
Location: Informatikum F433
Speaker: Meike Zehlike
Title:
Algorithmic Fairness and Discrimination Discovery in People Search Engines
Abstract:
People search engines are increasingly common for job recruiting, for finding a freelancer, and even for finding companionship or friendship. As in other search settings, a top-k ranking algorithm is used to find the most suitable ordering of the items (persons, in this case), since if the number of people matching a query is large, most users will not scan the entire list. Conventionally, these lists are ranked in descending order of some measure of the relative quality of items (e.g. years of experience or education, upvotes, or inferred attractiveness). Unsurprisingly, the results of these algorithms potentially have an impact on the people who are ranked, and contribute to shaping the experience of everybody online and offline. It is therefore of societal and ethical importance to ask whether these algorithms eventually produce results that demote, marginalize, or exclude individuals belonging to an unprivileged group or a minority. These algorithms may have discriminatory effects, even in the absence of discriminatory intent, imposing less favorable treatment on already disadvantaged groups.
In this talk I’ll explain the Fair Top-k Ranking problem, in which I want to determine a subset of k candidates from a large pool of n >> k candidates, such that I select the “best” candidates subject to group and individual fairness criteria. The ranked group fairness definition extends group fairness using the standard notion of a protected group (e.g. “people with disabilities”) and is based on ensuring that the proportion of protected candidates in every prefix of the top-k ranking is statistically indistinguishable from a target proportion. Ranked individual fairness is operationalized in two ways: (i) every candidate included in the top-k should be more qualified than every candidate not included; and (ii) for every pair of candidates in the top-k, the more qualified candidate should be ranked higher.
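The prefix-wise check behind ranked group fairness can be sketched as follows. The idea: for each prefix of length i, a one-sided binomial test at significance level alpha determines the minimum number of protected candidates the prefix must contain so that its composition is statistically indistinguishable from the target proportion p. This is a simplified illustration under my own assumptions (function names are hypothetical, and the full method additionally corrects alpha for multiple testing across prefixes, which this sketch omits):

```python
import math

def min_protected(prefix_len, p, alpha=0.1):
    """Smallest count m of protected candidates a prefix of length
    prefix_len must contain so that a one-sided binomial test at
    level alpha does not reject the target proportion p.
    Returns the smallest m with CDF(m; prefix_len, p) > alpha."""
    cdf = 0.0
    for m in range(prefix_len + 1):
        cdf += math.comb(prefix_len, m) * p**m * (1 - p)**(prefix_len - m)
        if cdf > alpha:
            return m
    return prefix_len

def is_fair_prefixwise(ranking, p, alpha=0.1):
    """ranking: list of booleans, True = protected candidate, in rank order.
    True iff every prefix of the ranking contains at least the required
    minimum number of protected candidates."""
    protected_seen = 0
    for i, is_protected in enumerate(ranking, start=1):
        protected_seen += is_protected
        if protected_seen < min_protected(i, p, alpha):
            return False
    return True
```

For example, with p = 0.5 and alpha = 0.1, a ranking whose first four positions contain no protected candidate already fails the test, while an alternating ranking passes every prefix.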
Bio:
Meike Zehlike is a PhD researcher at the Complex and Distributed IT Systems Group of Technische Universität Berlin in Berlin, Germany. In 2014, she received her diploma degree for her work on the recognition of perfusion disorders and vascular pathologies of the cerebral cortex. From 2014 until 2016 she worked as a software developer and scrum master in Berlin. She started her PhD in April 2016. Meike’s research interests center around artificial intelligence and its social impact, automatic discrimination discovery and algorithmic fairness, as well as the use of artificial intelligence in medical applications. Currently, Meike is part of the DFG-funded graduate school SOAMED.