Abstract
Relevance feedback is an effective technique for improving search accuracy in interactive information retrieval. In this paper, we study an optimization problem in interactive feedback that aims to optimize the tradeoff between presenting search results with the highest immediate utility to a user (but not necessarily the most useful for collecting feedback information) and presenting search results with the best potential for collecting useful feedback information (but not necessarily the most useful documents from the user's perspective). Optimizing this exploration-exploitation tradeoff is key to maximizing the overall utility of relevance feedback to a user over an entire feedback session. We formally frame the tradeoff as a problem of optimizing the diversification of search results, since relevance judgments on more diversified results have been shown to be more useful for relevance feedback. We propose a machine learning approach that adaptively optimizes the diversification of search results for each query so as to optimize the overall utility of an entire session. Experimental results on three representative retrieval test collections show that the proposed learning approach effectively optimizes the exploration-exploitation tradeoff and outperforms the traditional relevance feedback approach, which performs only exploitation without exploration.
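To make the diversification-based tradeoff concrete, below is a minimal illustrative sketch, not the paper's actual algorithm or experimental setup: a greedy, MMR-style re-ranking in which a per-query weight `epsilon` controls how much the ranking trades immediate relevance (exploitation) for diversity that may elicit more informative feedback (exploration). All function names, the scoring form, and the fixed `epsilon` value are assumptions for illustration; in the setting the abstract describes, such a weight would be set adaptively per query by a learned model rather than fixed by hand.

```python
# Illustrative sketch only (NOT the paper's method): MMR-style diversification
# where `epsilon` balances exploitation (relevance) against exploration
# (diversity of the presented results). Names and scoring are assumptions.
import numpy as np


def diversified_ranking(relevance, doc_vectors, epsilon, k=10):
    """Greedily select k documents, mixing relevance with dissimilarity.

    relevance   : (n,) relevance scores of candidate documents for the query
    doc_vectors : (n, d) document feature vectors (e.g., tf-idf), L2-normalized
    epsilon     : exploration weight in [0, 1]; 0 = pure exploitation
    """
    n = len(relevance)
    selected = []
    candidates = set(range(n))
    while candidates and len(selected) < k:
        best, best_score = None, -np.inf
        for i in candidates:
            if selected:
                # Redundancy = max cosine similarity to already selected docs
                redundancy = max(float(doc_vectors[i] @ doc_vectors[j]) for j in selected)
            else:
                redundancy = 0.0
            # Higher epsilon penalizes redundancy more, favoring diverse results
            score = (1 - epsilon) * relevance[i] - epsilon * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(100, 32))
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    rel = rng.uniform(size=100)
    # A learned model could set epsilon per query; here it is fixed for the demo.
    print(diversified_ranking(rel, vecs, epsilon=0.3, k=5))
```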
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 307-330 |
| Number of pages | 24 |
| Journal | Information Retrieval |
| Volume | 16 |
| Issue number | 3 |
| DOIs | |
| State | Published - Jun 2013 |
Keywords
- Diversification
- Feedback
- Interactive retrieval models
- User modeling
ASJC Scopus subject areas
- Information Systems
- Library and Information Sciences