Cornell Bias Labs

Analyzing identity biases with(in) machine learning and artificially intelligent systems. Sponsored by MunichRE.

Fall 2022: Bias in Machine Translation

Google recently received criticism for gender bias in Google Translate's English-to-Spanish translations: for many occupations (e.g., doctor, scientist), the model offered only the male-gendered version of the translation. This revealed underlying biases within language translation models. Building on this, our project analyzes gender bias in mainstream machine translation models -- Google, Amazon, and Microsoft -- through 800+ English-Arabic and English-Spanish translations of job occupations listed by the U.S. Bureau of Labor Statistics. We then analyze the gender of each input-output translation pair across the three models, and we examine whether the translations reflect the labor data on the gender distribution of each occupation.
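The analysis described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the occupation list, gender tags, and female-share figures below are invented placeholders, and the real study covers 800+ occupations across two language pairs.

```python
# Hypothetical sketch of the gender-agreement analysis described above.
# All data here is placeholder data, not the project's dataset.
from collections import Counter

# Each record: (occupation, translation model, grammatical gender of the output).
translations = [
    ("doctor", "google",    "male"),
    ("doctor", "amazon",    "male"),
    ("doctor", "microsoft", "male"),
    ("nurse",  "google",    "female"),
    ("nurse",  "amazon",    "female"),
    ("nurse",  "microsoft", "male"),
]

# Placeholder labor statistics: share of women in each occupation.
female_share = {"doctor": 0.37, "nurse": 0.87}

def male_rate(model):
    """Fraction of a model's translations rendered in the male gender."""
    genders = [g for _, m, g in translations if m == model]
    return Counter(genders)["male"] / len(genders)

def agrees_with_labor_data(occupation, gender):
    """A translation 'agrees' when its gender matches the majority
    gender of that occupation in the labor statistics."""
    majority = "female" if female_share[occupation] > 0.5 else "male"
    return gender == majority

for model in ("google", "amazon", "microsoft"):
    print(f"{model}: {male_rate(model):.0%} male-gendered translations")

agreement = sum(
    agrees_with_labor_data(occ, g) for occ, _, g in translations
) / len(translations)
print(f"overall agreement with labor data: {agreement:.0%}")
```

The two summary statistics mirror the two questions in the paragraph: how often each model defaults to the male form, and how often its choice tracks the real-world gender distribution of the occupation.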

Interested in learning more about our project? Here is our final showcase presentation.

Members: Imani Finkley, Rahma Tasnim, Salma Hazimeh, Srisha Gaur, Nada Attia, Mena Attia.
