Conference Proceedings

Automatic evaluation of topic coherence

D Newman, JH Lau, K Grieser, T Baldwin

Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT 2010) | Published: 2010

Abstract

This paper introduces the novel task of topic coherence evaluation, whereby a set of words, as generated by a topic model, is rated for coherence or interpretability. We apply a range of topic scoring models to the evaluation task, drawing on WordNet, Wikipedia and the Google search engine, and existing research on lexical similarity/relatedness. In comparison with human scores for a set of learned topics over two distinct datasets, we show a simple co-occurrence measure based on pointwise mutual information over Wikipedia data is able to achieve results for the task at or nearing the level of inter-annotator correlation, and that other Wikipedia-based lexical relatedness methods also achieve strong results. Google produces strong, if less consistent, results, while our results over WordNet are patchy at best.
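
For readers unfamiliar with the measure named in the abstract, the sketch below illustrates pairwise-PMI topic coherence in Python. It is not the paper's implementation: the function name, the toy counts, and the use of per-document co-occurrence statistics (the paper derives its counts from Wikipedia) are illustrative assumptions.

```python
import math
from itertools import combinations

def topic_coherence_pmi(topic_words, doc_freq, co_doc_freq, num_docs, eps=1e-12):
    """Mean pairwise PMI over the top words of one learned topic.

    doc_freq[w]         -- reference documents (e.g. Wikipedia articles) containing w
    co_doc_freq[(a, b)] -- documents containing both a and b (alphabetically sorted pair)
    num_docs            -- total documents in the reference corpus
    """
    scores = []
    for w1, w2 in combinations(topic_words, 2):
        p1 = doc_freq.get(w1, 0) / num_docs
        p2 = doc_freq.get(w2, 0) / num_docs
        p12 = co_doc_freq.get(tuple(sorted((w1, w2))), 0) / num_docs
        # PMI(w1, w2) = log[ p(w1, w2) / (p(w1) * p(w2)) ]; eps avoids log(0)
        scores.append(math.log((p12 + eps) / (p1 * p2 + eps)))
    return sum(scores) / len(scores)

# Toy counts standing in for statistics gathered from a reference corpus.
df = {"space": 900, "nasa": 400, "shuttle": 250, "orbit": 300}
cdf = {("nasa", "space"): 300, ("shuttle", "space"): 180,
       ("orbit", "space"): 200, ("nasa", "shuttle"): 150,
       ("nasa", "orbit"): 120, ("orbit", "shuttle"): 90}
print(topic_coherence_pmi(["space", "nasa", "shuttle", "orbit"], df, cdf, 10000))
```

A topic whose top words frequently co-occur in the reference corpus receives a high mean PMI, which is the property the paper compares against human coherence judgements.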


Citation metrics