About DeBiasByUs

The goal of DeBiasByUs is to create a database of example cases of machine translation bias and to help shape a more gender-fair society by raising public awareness of, and providing information on, gender bias in machine translation.

The database is a collaborative, community-driven resource. While everyone is welcome to contribute, we particularly encourage submissions from members of communities that are more likely to suffer from gender bias in society.

Our forum aims to further spur discussion on cases of bias in MT: how this bias affects users of MT systems and how such occurrences could potentially be avoided.

How Does Your Contribution to This Project Help?

Machine translation systems learn from their users’ input. If biased translations are accepted by users, or even rated as “good” (as is possible in some web applications), this ultimately reinforces gender bias. We need to understand where gender bias occurs in MT output, and to do so, we need real data from real users encountering real instances of gender bias ‘in the wild’.

Several datasets have already been created to study gender bias in MT, but they are often either too generic or too specific, and they reflect what their developers consider to be bias. We want the voice of the community to be heard.

We need to learn about gender bias in MT for a variety of languages and systems:
    Current datasets are often limited to specific languages (frequently starting from English as a basis), but if we want truly gender-fair MT, we need to imagine what that would look like across all languages.
We need our systems to change and evolve as society changes:
    Gender and society are not static. As gender identities and roles evolve, so should our language. When you tell us about bias in MT and how you would correct it, we can see this evolution happen in real time.
    The power of language should belong to the people, not the system. Your contribution helps empower people.
We need to inform developers & translators about the impact of bias in translation, and how MT bias can be addressed:
    Your suggestions can inform strategies for post-editing and ensure a future with more gender-inclusive and gender-fair language.
    Machine learning techniques can be used in the future to uncover patterns in the data that might shed light on underlying issues in MT & society and inspire future MT developers to do better.

Who We Are

The DeBiasByUs team consists of Joke Daems (they/them), assistant professor of human-computer interaction in empirical translation & interpreting studies at Ghent University, and Janiça Hackenbuchner (she/her), doctoral researcher at Ghent University.

The original idea for the project was conceived during the Artificially Correct Hackathon organised by the Goethe Institut, which also sponsored the initial development of the website and the Bias Shield plugin. We are grateful to the Goethe Institut and the original Hackathon team members who helped bring this idea to life: Bhargavi Mahesh, Shrishti Mohabey, and Bettina Koch. In this video, the original DeBiasByUs team presents the project in an interview by the Goethe Institut following the hackathon win.

This blog article further describes in layman’s terms how the DeBiasByUs platform originated at the Artificially Correct Hackathon.

The DeBiasByUs project is developed and hosted by Ghent University, with special thanks to Michaël Lumingu and Stien Stessens.


Conferences

DeBiasByUs played a key role in the 1st International Workshop on Gender-Inclusive Translation Technologies (GITT), hosted at the EAMT 2023 conference in June 2023 in Tampere, Finland. Information on all papers can be found in the workshop proceedings.

DeBiasByUs was initially presented during a poster session at the EAMT 2022 conference in Ghent, Belgium. The abstract can be read in the conference proceedings in the ACL Anthology.

Joke Daems (r) and Janiça Hackenbuchner (l) presenting DeBiasByUs at EAMT 2022