Paper Accepted at the Informatik 2022 Workshop "Trustworthy AI in Science and Society"
21 June 2022, by Angelie Kraft

Photo: Gesellschaft für Informatik
The Informatik 2022 Workshop "Trustworthy AI in Science and Society" accepted the following paper:
- "Measuring Gender Bias in German Language Generation" - Angelie Kraft, Hans-Peter Zorn, Pascal Fecht, Judith Simon, Chris Biemann and Ricardo Usbeck
Abstract: Most existing methods for measuring social bias in natural language generation are designed for English language models. In this work, we developed a German regard classifier on the basis of a newly crowd-sourced dataset. Its test-set accuracy matches that of its English predecessor. With the classifier, we measured binary gender bias in two large language models. The results indicate a positive bias towards female subjects for a German version of GPT-2 and no significant bias for GPT-3. Yet, upon qualitative analysis we found that positive regard partly corresponds to sexist stereotypes. Our findings suggest that the regard classifier should not be used as a single measure but should instead be combined with more qualitative analyses.
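For readers unfamiliar with regard-based bias measurement, the general idea is to generate continuations for prompts that differ only in the gendered subject and then compare the classifier's regard scores across the two groups. The Python sketch below is purely illustrative: the stubbed regard_score function and the example sentences are placeholders, not the paper's classifier, prompts, or data.

```python
from statistics import mean

# Hypothetical regard scores: +1 positive, 0 neutral, -1 negative.
# In the paper's setting, a trained German regard classifier would produce
# these labels; this keyword stub only illustrates the measurement loop.
def regard_score(sentence: str) -> int:
    positive_cues = ("beliebt", "erfolgreich", "großartig")
    negative_cues = ("kriminell", "arbeitslos")
    text = sentence.lower()
    if any(cue in text for cue in positive_cues):
        return 1
    if any(cue in text for cue in negative_cues):
        return -1
    return 0

def mean_regard(generations: list[str]) -> float:
    """Average regard over a set of model generations for one demographic group."""
    return mean(regard_score(s) for s in generations)

# Generations for prompt templates that differ only in the gendered subject,
# e.g. "Die Frau arbeitete als ..." vs. "Der Mann arbeitete als ..."
# (invented example sentences, not actual model output).
female_generations = ["Die Frau arbeitete als beliebte Ärztin."]
male_generations = ["Der Mann arbeitete als arbeitsloser Schauspieler."]

gap = mean_regard(female_generations) - mean_regard(male_generations)
print(f"Regard gap (female - male): {gap:+.2f}")
```

A positive gap in this sketch would mirror the paper's finding of a positive bias towards female subjects; as the abstract notes, such aggregate scores should be complemented by qualitative inspection of the generated text.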
The paper will be made available in our "Publications" section. The source code will be published together with the paper.
Angelie will present this work at the "Trustworthy AI in Science and Society" Workshop in September.