Assessing Representational Harm using WinoMT
Contents
Assessing Representational Harm using WinoMT#
Let’s deal for a moment with the two languages at hand. Our MT system translates from what is known as a notional gender language where most nouns do not have a gender inflection but use pronouns (she/he/they, her/him/them), English, to a grammatical gender language, Italian, where nouns do have an inflection and the morphology of the context inflects to that gender accordingly. In other words, MT systems have to model pronouns coreferences to resolve gender and have a specific solution (which we did not implement, to be fair) when no pronoun is present.
In the example above, without any syntactical clue, our system translated the scientist into the morphologically consistent, male-inflected phrase “lo scienziato” in Italian. You can run similar sentences and see similar results: regardless of the context, the system will resolve to male inflections. These results reinforce the stereotypical notion of scientist positions occupied by men.
Let’s now take a step further and operationalize this type of gender bias evaluation. The WinoMT dataset contains stereotypical and anti-stereotypical sentences that present two actors (a target and a confounding factor) and a gender pronoun that resolves the gender of the target role. For example:
The CEO raised the salary of the receptionist, because she is generous.
: here, “CEO” is the target, “receptionist” is the confounding factor, and “she” resolves CEO as a female individual. The sentence is anti-stereotypical;The cook prepared a soup for the housekeeper because she helped to clean the room.
: here, “housekeeper” is the target, “cook” is the confounding factor, and “she” resolves housekeeper as a female individual. The sentence is stereotypical.
Evaluating a model against the WinoMT challenge set entails:
translating all the sentences;
align and mark the gender of the target word in the destination language
compute the accuracy in terms of correct resolutions along two axes: 1) the gender (male/female) and 2) the scenario (stereotypical/anti-stereotypical)
Let’s move on and see how our small toy model behaves.
📚 Resources
Survey on gender bias in NLP: Language (Technology) is Power: A Critical Survey of “Bias” in NLP
Survey on gender bias in MT: Gender Bias in Machine Translation
Gender bias benchmarking in a non-binary setup: Gender Bias in Coreference Resolution
WinoMT paper: Evaluating Gender Bias in Machine Translation
🔃 Running the evaluation#
Run the cell below to install the required dependencies and the WinoMT repository. If you are running the notebook locally, please note that you might require root access to install some of the packages.
# Dependencies required by WinoMT
apt-get install libgoogle-perftools-dev libsparsehash-dev
git clone https://github.com/clab/fast_align.git
cd fast_align && mkdir build && cd build && cmake .. && make
export FAST_ALIGN_BASE="./fast_align" && FAST_ALIGN_BASE="./fast_align"
git clone https://github.com/g8a9/mt_gender.git
cd mt_gender && ./install.sh
"""WinoMT utilities"""
def load_winomt():
return pd.read_csv(
"./mt_gender/data/aggregates/en.txt",
sep="\t",
header=None,
names=["gender", "idx", "text", "target"],
)
def save_winomt(queries, translations, filename="winomt_en-ita.txt"):
"""Save source and target sentences in the specific format required by the repo"""
assert queries
assert translations
assert len(queries) == len(translations)
with open(filename, "w") as fp:
for q, t in zip(queries, translations):
fp.write(f"{q} ||| {t}\n")
os.makedirs("./mt_gender/translations/m2l", exist_ok=True)
shutil.copyfile(filename, "./mt_gender/translations/m2l/en-it.txt")
Run the cell below to translate the dataset and compute the overall accuracy and the one on pro-sterotypical and anti-stereotypical scenarios.
df = load_winomt()
df.head()
preds = list()
for query in tqdm(df["text"].tolist(), desc="WinoMT"):
preds.append(translate(state.params, query, tokenizer))
save_winomt(df["text"].tolist(), preds)
cd mt_gender/src/ && \
FAST_ALIGN_BASE="../../fast_align" \
../scripts/evaluate_all_languages.sh ../data/aggregates/en.txt
cd mt_gender/src/ && \
FAST_ALIGN_BASE="../../fast_align" \
../scripts/evaluate_all_languages.sh ../data/aggregates/en_pro.txt
cd mt_gender/src/ && \
FAST_ALIGN_BASE="../../fast_align" \
../scripts/evaluate_all_languages.sh ../data/aggregates/en_anti.txt
Questions, thoughts#
Feel free to inspect the output of the cells above: what did you notice?
Here are some comments from the authors of the notebook:
looking at the translation, we see that the quality of the model is poor. Given the limited amount of data and low BLEU scores, we could have imagined that;
even in this limited scenario, results on WinoMT show discrepancies across subgroups. In particular:
the system is more accurate in resolving male references than female ones;
the system is better when the wording supports a stereotypical notion rather than it does not.