Conference Proceedings
Further investigation into reference bias in monolingual evaluation of machine translation
Q Ma, Y Graham, T Baldwin, Q Liu
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017) | Published: 2017
DOI: 10.18653/v1/D17-1262
Abstract
Monolingual evaluation of Machine Translation (MT) aims to simplify human assessment by requiring assessors to compare the meaning of the MT output with a reference translation, opening up the task to a much larger pool of genuinely qualified evaluators. Monolingual evaluation runs the risk, however, of bias in favour of MT systems that happen to produce translations superficially similar to the reference and, consistent with this intuition, previous investigations have concluded monolingual assessment to be strongly biased in this respect. On re-examination of past analyses, we identify a series of potential analytical errors that force some important questions to be raised about the reliability…
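The reference-bias question the abstract raises is at heart a statistical one: do monolingual assessors score a hypothesis higher merely because it overlaps with the reference on the surface? As a rough illustration only, and not the paper's own method, the Python sketch below correlates a crude n-gram overlap score against human adequacy scores; the `surface_overlap` helper and all segment data are invented for illustration.

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def surface_overlap(hypothesis, reference, max_n=2):
    """Crude surface-similarity proxy: mean clipped n-gram precision for n = 1..max_n."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts, ref_counts = Counter(ngrams(hyp, n)), Counter(ngrams(ref, n))
        matched = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        precisions.append(matched / total)
    return sum(precisions) / len(precisions)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical segment-level data: (MT output, reference, monolingual adequacy score).
segments = [
    ("the cat sat on the mat", "the cat sat on the mat", 95.0),
    ("a cat was sitting on a rug", "the cat sat on the mat", 88.0),
    ("feline rested atop floor covering", "the cat sat on the mat", 80.0),
    ("the dog barked loudly", "the cat sat on the mat", 10.0),
]

overlaps = [surface_overlap(h, r) for h, r, _ in segments]
scores = [s for _, _, s in segments]

# A strong positive correlation between surface overlap and human score would be
# consistent with reference bias, but untangling it from genuine adequacy
# differences is precisely the analytical difficulty the paper re-examines.
print(f"Pearson r(overlap, score) = {pearson(overlaps, scores):.3f}")
```

Note that a high correlation alone does not establish bias, since outputs that genuinely preserve meaning also tend to resemble the reference; separating these effects is what makes the re-analysis non-trivial.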