Thesis / Dissertation

Improving the Reliability and Robustness of Information Retrieval Evaluation

Ziying Yang, Alistair Moffat (ed.), Andrew Turpin (ed.)

Published: 2019

Abstract

Batch evaluation techniques are often used to measure and compare the performance of Information Retrieval (IR) systems. In these approaches, IR evaluation metrics score the systems' runs against ground-truth knowledge, represented as relevance judgments for each topic in a set of topics. Those system-topic scores are then compared, so that the superior system, if one exists, can be identified. Chapter 2 describes these processes in detail, defining several commonly used IR evaluation metrics and introducing a range of associated techniques for collecting relevance judgments. Chapter 3 considers what happens when the document-scoring model creates ties; that is, when the…
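To illustrate the batch-evaluation process the abstract describes, the sketch below scores each (system, topic) pair with a metric and then compares the systems' mean scores. It is a minimal illustration only: the function names (average_precision, evaluate_runs), the choice of Average Precision as the metric, and the toy runs and judgments are assumptions, not material from the thesis.

```python
# Minimal sketch of batch evaluation: score every (system, topic) pair against
# relevance judgments, then compare systems by their mean per-topic score.
# All names and data here are illustrative, not taken from the thesis.

from statistics import mean


def average_precision(ranking, relevant):
    """Average Precision of one ranked list against a set of relevant doc ids."""
    if not relevant:
        return 0.0
    hits, precisions = 0, []
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant)


def evaluate_runs(runs, qrels):
    """Compute each system's mean score over all topics it was run on."""
    per_system = {}
    for system, topics in runs.items():
        topic_scores = [
            average_precision(ranking, qrels.get(topic, set()))
            for topic, ranking in topics.items()
        ]
        per_system[system] = mean(topic_scores)
    return per_system


# Toy example: two systems, two topics, binary relevance judgments (qrels).
qrels = {"t1": {"d1", "d3"}, "t2": {"d2"}}
runs = {
    "sysA": {"t1": ["d1", "d2", "d3"], "t2": ["d2", "d4"]},
    "sysB": {"t1": ["d2", "d1", "d3"], "t2": ["d4", "d2"]},
}
print(evaluate_runs(runs, qrels))
# {'sysA': 0.9166..., 'sysB': 0.5416...}  -> sysA scores higher on this toy data
```

In practice the comparison of mean scores would be accompanied by a significance test over the per-topic score pairs, which is part of what makes the reliability of such comparisons a research question in its own right.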

