Recent Academic Papers
- Caliskan et al. (2022): Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics
- Savoldi et al. (2021): Gender Bias in Machine Translation
- Stanovsky et al. (2019): Evaluating Gender Bias in Machine Translation
- Shah et al. (2020): Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview
- Saunders and Byrne (2020): Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem
- Hovy et al. (2020): “You Sound Just Like Your Father” Commercial Machine Translation Systems Include Stylistic Biases
- Basta et al. (2020): Towards Mitigating Gender Bias in a Decoder-Based Neural Machine Translation Model by Adding Contextual Information
- Zhao et al. (2018): Learning Gender-Neutral Word Embeddings
- Vanmassenhove et al. (2018): Getting Gender Right in Neural Machine Translation
Further Academic Papers
- Bolukbasi et al. (2016): Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
- Gonen and Goldberg (2019): Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them
Recent Proposed Solutions
- Analysing how gender bias manifests in word embeddings
- Annotating data with speakers' gender
- Creating challenge sets to test gender bias in NMT outputs
- Analysing correlations between pronoun gender and profession nouns, both within sentences (intra-sentential) and across sentence boundaries (inter-sentential)
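The first approach above, analysing gender bias in word embeddings, is often illustrated in the Bolukbasi et al. style: define a gender direction from vector differences such as "he" minus "she", then measure how strongly profession words project onto it. The sketch below uses small made-up toy vectors (not real pretrained embeddings), so the words, dimensions, and values are illustrative assumptions only:

```python
import numpy as np

# Toy embedding table standing in for real pretrained vectors
# (e.g. word2vec or GloVe); the numbers are invented for illustration.
emb = {
    "he":        np.array([ 1.0, 0.2, 0.1]),
    "she":       np.array([-1.0, 0.2, 0.1]),
    "doctor":    np.array([ 0.6, 0.8, 0.1]),
    "nurse":     np.array([-0.7, 0.8, 0.1]),
    "homemaker": np.array([-0.9, 0.3, 0.2]),
}

def gender_direction(e):
    """Unit vector along the 'he' minus 'she' difference."""
    d = e["he"] - e["she"]
    return d / np.linalg.norm(d)

def bias_score(word, e):
    """Signed projection of a (normalised) word vector onto the
    gender direction: positive leans male, negative leans female."""
    v = e[word] / np.linalg.norm(e[word])
    return float(v @ gender_direction(e))

for w in ("doctor", "nurse", "homemaker"):
    print(w, round(bias_score(w, emb), 3))
```

With real embeddings the same projection tends to place stereotypically male-associated professions on the positive side and female-associated ones on the negative side, which is exactly the correlation the challenge sets and pronoun-profession analyses above are designed to surface in NMT outputs.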