# Cornell Bias Labs

Analyzing identity biases with(in) machine learning and artificially intelligent systems. Sponsored by MunichRE.

## Fall 2022: Bias in Machine Translation

Google recently received criticism for gender bias in Google Translate's English-to-Spanish translations. Users observed that for occupations (e.g., doctor, scientist), the model offered only the male-gendered version of the translation, revealing underlying biases within language translation models. Building on this, our project analyzes gender bias in mainstream machine translation models -- Google, Amazon, and Microsoft -- through 800+ English-Arabic and English-Spanish translations of job occupations as listed by the U.S. National Labor Bureau. We then analyze the gender of each input-output translation pair across the three models, and examine whether the translations reflect the Labor Bureau's data on the gender composition of each occupation.
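The comparison step can be sketched roughly as follows. This is a minimal illustration, not our actual pipeline: the gender heuristic, field names, and sample data are all hypothetical, and real Spanish gender classification would need a proper morphological lexicon rather than suffix matching.

```python
# Hypothetical sketch: classify the grammatical gender of a Spanish
# occupation translation, then check whether the model's chosen gender
# matches the reference gender composition of that occupation.

def spanish_gender(translation: str) -> str:
    """Crude suffix heuristic: classify a Spanish noun as feminine/masculine."""
    word = translation.lower().split()[-1]
    if word.endswith(("a", "triz")):  # e.g. enfermera, actriz
        return "feminine"
    return "masculine"

def bias_score(rows):
    """Fraction of all occupations where the model produced a masculine
    form even though the occupation is majority-female in the reference data."""
    mismatches = sum(
        1 for r in rows
        if r["pct_female"] > 50
        and spanish_gender(r["translation"]) == "masculine"
    )
    return mismatches / len(rows)

# Illustrative rows (occupation, one model's output, reference % female):
rows = [
    {"occupation": "nurse",   "translation": "enfermero", "pct_female": 88.0},
    {"occupation": "doctor",  "translation": "doctor",    "pct_female": 45.0},
    {"occupation": "teacher", "translation": "maestra",   "pct_female": 73.0},
]
print(bias_score(rows))  # 1 of 3: "nurse" is majority-female but translated masculine
```

Running the same scoring over each of the three models' outputs would let the per-model bias rates be compared directly.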

Interested in learning more? See our final showcase presentation.

Members: Imani Finkley, Rahma Tasnim, Salma Hazimeh, Srisha Gaur, Nada Attia, Mena Attia.