Conference Proceedings

On the Pluses and Minuses of Risk

R Benham, A Moffat, JS Culpepper

15th Asia Information Retrieval Societies Conference, AIRS 2019 | Springer | Published: 2020

Abstract

Evaluating the effectiveness of retrieval models has been a mainstay in the IR community since its inception. Generally speaking, the goal is to provide a rigorous framework to compare the quality of two or more models, and determine which of them is “better”. However, defining “better” or “best” in this context is not a simple task. Computing the average effectiveness over many queries is the most common approach used in Cranfield-style evaluations. But averages can hide subtle trade-offs in retrieval models – a percentage of the queries may well perform worse than a previous iteration of the model as a result of an optimization to improve some other subset. A growing body of work refer…
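
The averaging pitfall the abstract describes is what risk-sensitive measures are designed to expose. As a minimal illustration (not taken from the paper: the per-query scores below are invented, and urisk follows the commonly used URisk definition, in which per-query losses against a baseline are weighted by a risk-aversion factor of 1 + alpha), two runs can share the same mean effectiveness while one of them regresses badly on individual queries:

def urisk(system, baseline, alpha=1.0):
    """Mean per-query gain over a baseline, with losses weighted by (1 + alpha)."""
    deltas = [s - b for s, b in zip(system, baseline)]
    return sum(d if d > 0 else (1 + alpha) * d for d in deltas) / len(deltas)

# Hypothetical average-precision scores over five queries (invented for illustration).
baseline = [0.30, 0.40, 0.50, 0.60, 0.70]
tuned    = [0.45, 0.55, 0.20, 0.75, 0.55]   # two queries regress after "tuning"

print(sum(baseline) / 5, sum(tuned) / 5)    # 0.5 0.5  -- identical averages
print(urisk(tuned, baseline, alpha=1.0))    # -0.09    -- losses dominate under risk

A plain mean would rank the two runs as equivalent; the risk-sensitive view, which penalises the per-query losses, does not.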

Grants

Awarded by Australian Research Council

