Meike Zehlike

Senior Applied Scientist at Zalando SE
Algorithmic Justice Consultant

About me

I am a Senior Applied Scientist for algorithmic fairness and privacy at Zalando SE in Berlin and a consultant on algorithmic discrimination. As a consultant I have successfully advised the works council of Ford Germany and IG Metall on potential discrimination through algorithmic decision making in human resource management.

Until 2021 I was a Ph.D. student at Humboldt-Universität zu Berlin and the Max Planck Institute for Software Systems (MPI-SWS) in the Social Computing Research group, where I developed several fair ranking methods that incorporate substantive equality of opportunity into their objectives. I was advised by Ulf Leser, Carlos Castillo and Krishna Gummadi. In 2018 I was a visiting researcher in the WSSC group of Carlos Castillo at UPF Barcelona, Spain, and in 2019 in the VIDA lab of Julia Stoyanovich at New York University, USA. I completed my Diploma degree in Computer Science at Technische Universität Dresden with Nico Hoffmann and Uwe Petersohn as my advisors, where I developed a machine learning algorithm to recognize vascular pathologies in thermographic images of the brain. I studied Computer Science at MIIT (МИИТ) in Moscow, Russia in 2009/2010, and at INSA in Lyon, France in 2010.

Selected Publications

For a full list of publications, please see my Google Scholar or DBLP profile.

Journals

  • Fairness in Ranking: A Survey
  • Under submission at ACM Computing Surveys. March 2021.
  • Abstract: In the past few years, there has been much work on incorporating fairness requirements into algorithmic rankers, with contributions coming from the data management, algorithms, information retrieval, and recommender systems communities. In this survey we give a systematic overview of this work, offering a broad perspective that connects formalizations and algorithmic approaches across subfields. An important contribution of our work is in developing a common narrative around the value frameworks that motivate specific fairness-enhancing interventions in ranking. This allows us to unify the presentation of mitigation objectives and of algorithmic techniques to help meet those objectives or identify trade-offs.

  • Fair Top-k Ranking with multiple protected groups
  • Information Processing & Management. Special Issue on Fairness and Bias in Recommendation and Search. Elsevier. Volume 59, Issue 1, January 2022.
  • Abstract: Ranking items or people is a fundamental operation at the basis of several processes and services, not all of them happening online. Ranking is required for different tasks, including search, personalization, recommendation, and filtering. While traditionally ranking has been aimed solely at maximizing some global utility function, recently the awareness of potential discrimination for some of the elements to rank has captured the attention of researchers, who have thus started devising ranking systems that are non-discriminatory or fair for the items being ranked. So far, researchers have mostly focused on group fairness, which is usually expressed in the form of constraints on the fraction of elements from some protected groups that should be included in the top-𝑘 positions, for any relevant k. These constraints are needed in order to correct implicit societal biases existing in the input data and reflected in the relevance or fitness score computed. In this article, we tackle the problem of selecting a subset of k individuals from a pool of n >> k candidates, maximizing global utility (i.e., selecting the "best" candidates) while respecting given group-fairness criteria. In particular, to tackle this Fair Top-k Ranking problem, we adopt a ranked group-fairness definition which extends the standard notion of group fairness based on protected groups, by ensuring that the proportion of protected candidates in every prefix of the top-k ranking remains statistically above, or indistinguishable from, a given minimum threshold. Our notion of utility requires, intuitively, that every individual included in the top-k should be more qualified than every candidate not included; and that for every pair of candidates in the top-k, the more qualified candidate should be ranked above. The main contribution of this paper is an algorithm for producing a fair top-k ranking that can be used when more than one protected group is present, which means that a statistical test based on a multinomial distribution needs to be used instead of one for a binomial distribution, as the original FA*IR algorithm does. This poses important technical challenges and increases both the space and time complexity of the re-ranking algorithm. Our experimental assessment on real-world datasets shows that our approach yields small distortions with respect to rankings that maximize utility without considering our fairness criteria.
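
The per-prefix test described in the abstract can be illustrated with a small Monte Carlo estimate of the joint multinomial tail probability. The snippet below is only an illustrative sketch, assuming target proportions that sum to less than one and an unadjusted significance level; the paper derives the test analytically and corrects alpha for multiple testing, and all names here are made up for the example.

```python
import numpy as np

def prefix_passes(counts, i, p_targets, alpha=0.1, n_sim=100_000, seed=0):
    """Monte Carlo sketch of the multi-group per-prefix fairness test.

    counts    -- observed number of top-i candidates from each protected group
    p_targets -- target minimum proportions for each protected group (sum < 1)
    Passes if the probability, under a fair multinomial process, of seeing counts
    this low or lower in every protected group is at least alpha.
    """
    rng = np.random.default_rng(seed)
    cell_probs = list(p_targets) + [1.0 - sum(p_targets)]   # last cell: non-protected
    draws = rng.multinomial(i, cell_probs, size=n_sim)[:, :len(p_targets)]
    p_value = np.mean(np.all(draws <= np.asarray(counts), axis=1))
    return p_value >= alpha

# Example: a top-10 prefix with 2 candidates from group A and 1 from group B,
# target proportions 0.3 and 0.2, tested at alpha = 0.1.
print(prefix_passes(counts=[2, 1], i=10, p_targets=[0.3, 0.2]))
```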

  • Matching code and law: achieving algorithmic fairness with optimal transport
  • Data Mining and Knowledge Discovery. Springer. Volume 34, Issue 1, January 2020.
  • Abstract: Increasingly, discrimination by algorithms is perceived as a societal and legal problem. As a response, a number of criteria for implementing algorithmic fairness in machine learning have been developed in the literature. This paper proposes the continuous fairness algorithm (CFA𝜃) which enables a continuous interpolation between different fairness definitions. More specifically, we make three main contributions to the existing literature. First, our approach allows the decision maker to continuously vary between specific concepts of individual and group fairness. As a consequence, the algorithm enables the decision maker to adopt intermediate “worldviews” on the degree of discrimination encoded in algorithmic processes, adding nuance to the extreme cases of “we’re all equal” and “what you see is what you get” proposed so far in the literature. Second, we use optimal transport theory, and specifically the concept of the barycenter, to maximize decision maker utility under the chosen fairness constraints. Third, the algorithm is able to handle cases of intersectionality, i.e., of multi-dimensional discrimination of certain groups on grounds of several criteria. We discuss three main examples (credit applications; college admissions; insurance contracts) and map out the legal and policy implications of our approach. The explicit formalization of the trade-off between individual and group fairness allows this post-processing approach to be tailored to different situational contexts in which one or the other fairness criterion may take precedence. Finally, we evaluate our model experimentally.
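
The core mechanism of the CFA𝜃 approach described above can be sketched in one dimension: each group's scores are pushed toward a common barycenter distribution, and 𝜃 controls how far. In 1-D the Wasserstein-2 barycenter's quantile function is the weighted average of the groups' quantile functions, which keeps the sketch short. This is a minimal illustration under that simplification, not the paper's full method; all function names are invented for the example.

```python
import numpy as np

def barycenter_repair(scores, groups, theta=1.0):
    """Illustrative 1-D sketch: map every group's score distribution toward a common
    Wasserstein-2 barycenter, interpolating with theta in [0, 1].
    theta = 0 keeps the raw scores ("what you see is what you get");
    theta = 1 maps all groups onto the barycenter ("we're all equal")."""
    scores, groups = np.asarray(scores, dtype=float), np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()

    # Quantile function of each group's score distribution on a common grid.
    grid = np.linspace(0.0, 1.0, 101)
    q = {g: np.quantile(scores[groups == g], grid) for g in labels}

    # In 1-D, the W2 barycenter's quantile function is the weighted average
    # of the group quantile functions.
    barycenter_q = sum(w * q[g] for g, w in zip(labels, weights))

    repaired = scores.copy()
    for g in labels:
        mask = groups == g
        ranks = np.interp(scores[mask], q[g], grid)      # within-group rank in [0, 1]
        target = np.interp(ranks, grid, barycenter_q)    # barycenter value at that rank
        repaired[mask] = (1 - theta) * scores[mask] + theta * target
    return repaired

# Example: two groups whose raw scores are shifted against each other.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.4, 0.1, 500), rng.normal(0.6, 0.1, 500)])
groups = np.array(["A"] * 500 + ["B"] * 500)
print(barycenter_repair(scores, groups, theta=0.5).round(3)[:5])
```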

Conference Proceedings

  • Towards a Flexible Framework for Algorithmic Fairness
  • Proceedings of INFORMATIK 2020. Karlsruhe, Germany. October, 2020.
  • Abstract: Increasingly, scholars seek to integrate legal and technological insights to combat bias in AI systems. In recent years, many different definitions for ensuring non-discrimination in algorithmic decision systems have been put forward. In this paper, we first briefly describe the EU law framework covering cases of algorithmic discrimination. Second, we present an algorithm that harnesses optimal transport to provide a flexible framework to interpolate between different fairness definitions. Third, we show that important normative and legal challenges remain for the implementation of algorithmic fairness interventions in real-world scenarios. Overall, the paper seeks to contribute to the quest for flexible technical frameworks that can be adapted to varying legal and normative fairness constraints.

  • Reducing disparate exposure in ranking: A learning to rank approach
  • Proceedings of The Web Conference 2020 (WWW'20). Taipei, Taiwan. April, 2020.
  • Abstract: Ranked search results have become the main mechanism by which we find content, products, places, and people online. Thus their ordering contributes not only to the satisfaction of the searcher, but also to career and business opportunities, educational placement, and even social success of those being ranked. Researchers have become increasingly concerned with systematic biases in data-driven ranking models, and various post-processing methods have been proposed to mitigate discrimination and inequality of opportunity. This approach, however, has the disadvantage that it still allows an unfair ranking model to be trained. In this paper we explore a new in-processing approach: DELTR, a learning-to-rank framework that addresses potential issues of discrimination and unequal opportunity in rankings at training time. We measure these problems in terms of discrepancies in the average group exposure and design a ranker that optimizes search results in terms of relevance and in terms of reducing such discrepancies. We perform an extensive experimental study showing that being “colorblind” can be among the best or the worst choices from the perspective of relevance and exposure, depending on how much and which kind of bias is present in the training set. We show that our in-processing method performs better in terms of relevance and exposure than a pre-processing and a post-processing method across all tested scenarios.
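
To give a flavour of the in-processing idea, the sketch below combines a ListNet-style listwise loss with a penalty on the gap in average top-one exposure between the non-protected and the protected group. It is a deliberately simplified stand-in for the DELTR objective, written in PyTorch as an assumption of my own; the exact formulation (position discounts, gradient derivation) is in the paper.

```python
import torch

def listwise_loss_with_exposure(pred_scores, true_scores, protected_mask, gamma=1.0):
    """Simplified, DELTR-flavoured loss for the documents of a single query.
    pred_scores, true_scores: 1-D tensors of scores; protected_mask: 1-D bool tensor.
    Assumes both groups are non-empty in the candidate list."""
    p_true = torch.softmax(true_scores, dim=0)      # target top-one probabilities
    p_pred = torch.softmax(pred_scores, dim=0)      # predicted top-one probabilities
    listnet = -(p_true * torch.log(p_pred + 1e-12)).sum()

    protected = protected_mask.bool()
    exposure_gap = p_pred[~protected].mean() - p_pred[protected].mean()
    # Penalize only over-exposure of the non-protected group.
    unfairness = torch.clamp(exposure_gap, min=0.0) ** 2
    return listnet + gamma * unfairness

# Example with five candidates, two of them protected.
pred = torch.tensor([2.0, 1.5, 1.2, 0.7, 0.1], requires_grad=True)
true = torch.tensor([1.8, 1.6, 1.0, 0.9, 0.2])
mask = torch.tensor([False, False, True, False, True])
loss = listwise_loss_with_exposure(pred, true, mask, gamma=1.0)
loss.backward()                                     # usable inside a normal training loop
print(loss.item())
```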

  • FairSearch: A Tool For Fairness in Ranked Search Results
  • Companion Proceedings of The Web Conference 2020 (WWW'20). Taipei, Taiwan. April, 2020.
  • Abstract: Ranked search results and recommendations have become the main mechanism by which we find content, products, places, and people online. With hiring, selecting, purchasing, and dating being increasingly mediated by algorithms, rankings may determine business opportunities, education, access to benefits, and even social success. It is therefore of societal and ethical importance to ask whether search results can demote, marginalize, or exclude individuals of unprivileged groups or promote products with undesired features. In this paper we present FairSearch, the first fair open source search API to provide fairness notions in ranked search results. We implement two well-known algorithms from the literature, namely FA*IR (Zehlike et al., 2017) and DELTR (Zehlike and Castillo, 2018), and provide them as stand-alone libraries in Python and Java. Additionally, we implement interfaces to Elasticsearch, a well-known search engine based on Apache Lucene, for both algorithms. The interfaces use the aforementioned Java libraries and enable search engine developers who wish to ensure fair search results of different styles to easily integrate DELTR and FA*IR into their existing Elasticsearch environment.
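
A minimal usage sketch of the Python side is shown below. It follows my reading of the fairsearchcore package that accompanies FairSearch; treat the exact class and method names as assumptions and consult the FairSearch repositories for the authoritative interface (the Elasticsearch interfaces are separate components).

```python
# Hypothetical usage of the FairSearch Python library (fairsearchcore); the exact
# names below are assumptions based on the package documentation.
import fairsearchcore as fsc
from fairsearchcore.models import FairScoreDoc

k, p, alpha = 20, 0.3, 0.1            # ranking length, minimum protected share, significance level
fair = fsc.Fair(k, p, alpha)

# Candidates already sorted by decreasing score; every third one marked protected here.
ranking = [FairScoreDoc(i, 100 - i, i % 3 == 0) for i in range(k)]

print(fair.is_fair(ranking))          # does the ranking satisfy ranked group fairness?
re_ranked = fair.re_rank(ranking)     # FA*IR re-ranking that enforces the constraint
```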

  • Two-sided fairness for repeated matchings in two-sided markets: A case study of a ride-hailing platform
  • Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD'19). Anchorage, Alaska, USA. August, 2019.
  • Abstract: Ride hailing platforms, such as Uber, Lyft, Ola or DiDi, have traditionally focused on the satisfaction of the passengers, or on boosting successful business transactions. However, recent studies provide a multitude of reasons to worry about the drivers in the ride hailing ecosystem. The concerns range from bad working conditions and worker manipulation to discrimination against minorities. With the sharing economy ecosystem growing, more and more drivers financially depend on online platforms and their algorithms to secure a living. It is pertinent to ask what a fair distribution of income on such platforms is and what power and means the platform has in shaping these distributions. In this paper, we analyze job assignments of a major taxi company and observe that there is significant inequality in the driver income distribution. We propose a novel framework to think about fairness in the matching mechanisms of ride hailing platforms. Specifically, our notion of fairness relies on the idea that, spread over time, all drivers should receive benefits proportional to the amount of time they are active in the platform. We postulate that by not requiring every match to be fair, but rather distributing fairness over time, we can achieve better overall benefit for the drivers and the passengers. We experiment with various optimization problems and heuristics to explore the means of achieving two-sided fairness, and investigate their caveats and side-effects. Overall, our work takes the first step towards rethinking fairness in ride hailing platforms with an additional emphasis on the well-being of drivers.
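
The idea of distributing fairness over time rather than enforcing it in every single match can be illustrated with a toy dispatch rule: each incoming ride goes to the eligible driver whose accumulated income falls furthest short of an income share proportional to their active time. This is only an illustrative heuristic in the spirit of the paper, not one of the optimization formulations studied there; all names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Driver:
    driver_id: int
    active_time: float    # hours the driver has been online so far
    income: float = 0.0   # benefit accumulated so far

def assign_ride(fare, eligible, total_income, total_active_time):
    """Toy dispatch rule: give the ride to the eligible driver with the largest
    shortfall between actual income and income proportional to active time."""
    def shortfall(d):
        if total_active_time == 0:
            return 0.0
        fair_share = total_income * d.active_time / total_active_time
        return fair_share - d.income
    chosen = max(eligible, key=shortfall)
    chosen.income += fare
    return chosen

# Example: three drivers with equal online time; rides go to whoever is furthest behind.
drivers = [Driver(i, active_time=8.0) for i in range(3)]
for fare in [12.0, 9.0, 15.0, 7.0]:
    total_income = sum(d.income for d in drivers)
    total_time = sum(d.active_time for d in drivers)
    assign_ride(fare, drivers, total_income, total_time)
print([round(d.income, 1) for d in drivers])
```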

  • FA*IR: A Fair Top-k Ranking Algorithm
  • Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (CIKM'17). Singapore. November, 2017.
  • Abstract: In this work, we define and solve the Fair Top-k Ranking problem, in which we want to determine a subset of k candidates from a large pool of n >> k candidates, maximizing utility (i.e., select the "best" candidates) subject to group fairness criteria. Our ranked group fairness definition extends group fairness using the standard notion of protected groups and is based on ensuring that the proportion of protected candidates in every prefix of the top-k ranking remains statistically above or indistinguishable from a given minimum. Utility is operationalized in two ways: (i) every candidate included in the top-k should be more qualified than every candidate not included; and (ii) for every pair of candidates in the top-k, the more qualified candidate should be ranked above. An efficient algorithm is presented for producing the Fair Top-k Ranking, and tested experimentally on existing datasets as well as new datasets released with this paper, showing that our approach yields small distortions with respect to rankings that maximize utility without considering fairness criteria. To the best of our knowledge, this is the first algorithm grounded in statistical tests that can mitigate biases in the representation of an under-represented group along a ranked list.
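
The two building blocks of the algorithm — a table of per-prefix minimum numbers of protected candidates derived from a binomial test, and a greedy merge that enforces those minima — can be sketched as follows. This is a simplified illustration (unadjusted significance level, plain scores as qualifications, both candidate lists assumed long enough), not the paper's exact implementation; the names are my own.

```python
from scipy.stats import binom

def minimum_targets(k, p, alpha):
    """m(i) for i = 1..k: the minimum number of protected candidates a prefix of
    length i must contain so that a binomial test at level alpha does not reject it
    (alpha unadjusted, i.e. the basic variant)."""
    return [int(binom.ppf(alpha, i, p)) for i in range(1, k + 1)]

def fair_top_k(protected, non_protected, k, p, alpha):
    """Greedy FA*IR-style merge of two score lists, each sorted in decreasing order.
    Assumes both lists hold enough candidates to fill the top-k."""
    m = minimum_targets(k, p, alpha)
    ranking, tp, tn = [], 0, 0
    for i in range(k):
        if tp < m[i]:
            # The fairness constraint binds: the next slot must go to a protected candidate.
            ranking.append(("P", protected[tp])); tp += 1
        elif non_protected[tn] >= protected[tp]:
            ranking.append(("N", non_protected[tn])); tn += 1
        else:
            ranking.append(("P", protected[tp])); tp += 1
    return ranking

# Example: 10 positions, minimum protected proportion 0.4, alpha = 0.1.
prot = [0.92, 0.81, 0.74, 0.60, 0.55, 0.40]
nonprot = [0.99, 0.95, 0.90, 0.88, 0.70, 0.65, 0.50, 0.45]
print(fair_top_k(prot, nonprot, k=10, p=0.4, alpha=0.1))
```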

Systems and Applications

  • Discrimination in Employee Development: A consultancy project on potential algorithmic discrimination in employee promotion together with Germany's largest labor union, the IG Metall.
  • FairSearch: A set of tools for ranking post-processing (FA*IR) and in-processing (DELTR) with fairness constraints.
  • Fairness Measures: Datasets and software for detecting algorithmic discrimination.

Awards and Scholarships

  • (2019) Generation Google Scholarship EMEA: a 7,000 EUR award for impact on diversity, demonstrated leadership, and a strong academic background.
  • (since 2018) ELLIS PhD Fellow: Research Network of the European Laboratory for Learning and Intelligent Systems.
  • (2017) Data Transparency Lab Research Grant: Grant of 50,000 EUR for the design and implementation of web tools that enable fairness accountability and transparency in machine learning systems.
  • (2016) SOAMED Graduate School: PhD school on service-oriented Architectures for the Integration of Software-based Processes, exemplified by Health Care Systems and Medical Technology
  • (2010) Femtec Network Career Building Scholarship: Career building program for female future leaders from science, technology, engineering and mathematics
  • (2010) Erasmus Scholarship: Semester abroad in Lyon, France
  • (2009) DAAD "GoEast" Scholarship: Semester abroad in Moscow, Russia
  • (2009) DAAD Summer School Tomsk/Moscow Scholarship: Language summer school in Tomsk and Moscow, Russia

Invited Talks

  • (2021) Diskriminierung durch (lernende) Algorithmen [Discrimination by (Learning) Algorithms]. Engineering und IT-Tagung 2021. Hans-Böckler-Stiftung. Chemnitz, DE.
  • (2021) Matching Code and Law. Workshop on Formalizations of Fair AI, Gesellschaft für Informatik. virtual venue
  • (2021) Panel: Oracle AI Round Table. Oracle, virtual venue.
  • (2021) Wenn der Algorithmus dich nicht für fähig hält – Ein Einblick in diskriminierende Algorithmen und fair AI [When the Algorithm Doesn't Think You're Capable – A Look at Discriminatory Algorithms and Fair AI]. Soroptimist International Deutschland, virtual venue.
  • (2021) Gleichberechtigung von Frauen im Berufskrankheitenverfahren [Equal Treatment of Women in Occupational Disease Proceedings]. Senatsverwaltung für Integration, Arbeit und Soziales, virtual venue.
  • (2021) Fairness-Aware Ranking Algorithms. University of Glasgow, Information Retrieval Group, UK.
  • (2020) Matching Code and Law. Bias@MD4SG, virtual venue.
  • (2020) Bias in AI. Track Keynote at INFORMATIK 2020, Karlsruhe, DE.
  • (2020) Fairness in Algorithmic Decision Making. FTA Live 2020, Berlin, DE.
  • (2020) Panel: Wie wird künstliche Intelligenz geschlechtergerecht? [How Does Artificial Intelligence Become Gender-Equitable?]. Podiumsdiskussion. Berlin, DE.
  • (2019) Matching Code and Law. Columbia University, New York City, NY, US.
  • (2019) Matching Code and Law. IBM Research, Yorktown, NY, US.
  • (2019) Disparate Exposure in Learning to Rank. Microsoft Research, New York City, NY, US.
  • (2019) Fairness-Aware Ranking Algorithms. CapGemini, DE.
  • (2019) Fairness in Algorithmic Decision Making. Yale University, New Haven, CT, US.
  • (2019) Fairness-Aware Ranking Algorithms. Technische Universität Berlin. Berlin, DE.
  • (2019) Panel: Brauchen wir mehr Diversität im Datenjournalismus? [Do We Need More Diversity in Data Journalism?]. nr19 Jahreskonferenz. Hamburg, DE.
  • (2019) Fairness-Aware Ranking Algorithms. Freie Universität Berlin. Berlin, DE.
  • (2018) Fairness-Aware Ranking Algorithms. RWTH Aachen. Aachen, DE.
  • (2017) Frameworks of Bias in Computer Systems and their Application in Rankings. Workshop on Data and Algorithmic Bias. DAB'17. Singapore.
  • (2017) Panel: Algorithmic Fairness and Bias in Data. Workshop on Data and Algorithmic Bias. DAB'17. Singapore.
  • (2017) On Fairness in Ranked Search Algorithms. Universität Hamburg. Hamburg, DE.

Press Coverage

Below is a selection of newspaper articles I have written on algorithmic fairness, as well as interviews and TV appearances.

Newspaper Articles
TV Appearances and Podcasts

Professional Service

  • Reviewer, Elsevier Journal on Information Systems
  • PC Member, BIAS 2021
  • PC Member, EAAMO 2021
  • PC Member, FAccTRec 2021
  • Reviewer, Elsevier Information Processing & Management, Special Issue on Fairness in IR
  • PC Member, FAccT 2021
  • Track Chair 'Ethics in AI', Informatik 2020
  • PC Member, SIGIR 2020
  • PC Member, BIAS 2020
  • PC Member, FACTS-IR 2019
  • Reviewer, EDBT 2019
  • Reviewer, FAT* 2019
  • Academic Senate Member, Faculty Board Member. 2017 - 2019. TU Berlin
  • Appointment Committee Member. 2017 - 2018. TU Berlin

Teaching and Supervision

Lecturer
  • (2018 - 2019) Practical Project for Master Students, TU Berlin
  • (2017 - 2018) Practical Course for Bachelor Students, TU Berlin
Master Theses Supervision
  • Michal Jirku: Algorithmic Fairness Development in a Competitive Setting
  • Frederic Mauss: Creating a gender-specific data set about the users of StackOverflow
  • Stefanie Quach: Extending the DELTR Algorithm to Multinomial Use Cases
Bachelor Theses Supervision
  • Flora Muscinelli: Mapping Algorithmic Fairness is Contextual
  • Tom Sühr: Two-Sided Fairness for Repeated Matchings in Two-Sided Markets
  • Jan Steffen Preßler: A Data Collection to Develop Fair Machine Learning Algorithms
  • Gina-Theresa Diehn: FA*IR as Pre-Processing Fair Machine Learning Approach
  • Simon Huber: Generating Discriminatory Datasets by Usage of Wasserstein Generative Adversarial Networks
  • Laura Mons: Benchmarking for Fair Machine Learning Algorithms
  • Hyerim Hwang: Extension of the FA*IR Top-k Ranking Algorithm to Multinomial Use Cases

Interests

  • Algorithmic Fairness
  • Discrimination and Exploitation
  • Political Philosophy
  • Machine Learning

Languages

  • German (Native)
  • English (Professional)
  • French (Advanced)
  • Russian (Advanced)