Recent Research
Recent Academic Papers
- Piergentili et al. (2023): From Inclusive Language to Gender-Neutral Machine Translation
- Caliskan et al. (2022): Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics
- Savoldi et al. (2021): Gender Bias in Machine Translation
- Basta et al. (2020): Towards Mitigating Gender Bias in a Decoder-Based Neural Machine Translation Model by Adding Contextual Information
- Saunders and Byrne (2020): Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem
- Hovy et al. (2020): “You Sound Just Like Your Father” Commercial Machine Translation Systems Include Stylistic Biases
- Shah et al. (2020): Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview
- Stanovsky et al. (2019): Evaluating Gender Bias in Machine Translation
- Gonen and Goldberg (2019): Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them
- Zhao et al. (2018): Learning Gender-Neutral Word Embeddings
- Vanmassenhove et al. (2018): Getting Gender Right in Neural Machine Translation
- Bolukbasi et al. (2016): Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
Recently Proposed Solutions
- Analysing gender bias as it manifests in word embeddings (sketched below)
- Analysing correlations between pronoun gender and professions, both within and across sentences (intra- and inter-sentential)
- Creating challenge sets to test gender bias in NMT outputs (sketched below)
- Annotating training data with the speaker's gender (sketched below)
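
For the embedding-level analysis, the sketch below projects profession words onto a he-she direction, the kind of measurement underlying Bolukbasi et al. (2016) and Caliskan et al. (2022). It is a minimal example, assuming the gensim library and its pretrained "glove-wiki-gigaword-100" vectors; the profession list is illustrative.

```python
import numpy as np
import gensim.downloader as api

# Pretrained 100-dimensional GloVe vectors (downloaded on first use).
vectors = api.load("glove-wiki-gigaword-100")

def gender_projection(word: str) -> float:
    """Cosine similarity between `word` and the he-she direction (>0 leans 'he')."""
    direction = vectors["he"] - vectors["she"]
    v = vectors[word]
    return float(np.dot(v, direction)
                 / (np.linalg.norm(v) * np.linalg.norm(direction)))

# Illustrative profession list; a real analysis would use a curated lexicon.
for profession in ["nurse", "engineer", "homemaker", "programmer", "surgeon"]:
    print(f"{profession:>12}: {gender_projection(profession):+.3f}")
```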
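
For the challenge-set idea, here is a toy sketch in the spirit of WinoMT (Stanovsky et al., 2019): each item pairs an English source whose pronoun disambiguates a profession with the gendered Spanish form expected in the translation. The items, expected forms, and substring matching are illustrative stand-ins, not the actual WinoMT pipeline, which relies on word alignment and morphological analysis.

```python
# (source sentence, expected gendered form, stereotyped alternative)
CHALLENGE_SET = [
    ("The doctor asked the nurse to help her in the operation.", "doctora", "doctor"),
    ("The cleaner finished early because she was fast.", "limpiadora", "limpiador"),
]

def accuracy(translations: list[str]) -> float:
    """Fraction of items whose translation contains the expected gendered form."""
    hits = sum(expected in hyp.lower()
               for (_, expected, _), hyp in zip(CHALLENGE_SET, translations))
    return hits / len(CHALLENGE_SET)

# Hand-written outputs standing in for real MT output:
outputs = [
    "El doctor pidió a la enfermera que la ayudara en la operación.",
    "La limpiadora terminó temprano porque era rápida.",
]
print(f"accuracy = {accuracy(outputs):.2f}")  # 0.50: the first output keeps the stereotype
```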
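
Finally, a minimal sketch of speaker-gender annotation in the style of Vanmassenhove et al. (2018): a gender tag is prepended to each source sentence so the NMT system can condition on it during training and inference. The tag names and the toy corpus are illustrative.

```python
# Illustrative tags; Vanmassenhove et al. (2018) prepend comparable tokens
# to the source side of the training data.
TAGS = {"female": "<F>", "male": "<M>"}

def tag_source(sentence: str, speaker_gender: str) -> str:
    """Prepend a speaker-gender token to the source sentence."""
    return f"{TAGS.get(speaker_gender, '<UNK>')} {sentence}"

corpus = [("I am a doctor.", "female"), ("I am tired.", "male")]
for sentence, gender in corpus:
    print(tag_source(sentence, gender))
# <F> I am a doctor.
# <M> I am tired.
```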