TY - GEN
T1 - Axiomatic thinking for information retrieval and related tasks
AU - Amigo, Enrique
AU - Fang, Hui
AU - Mizzaro, Stefano
AU - Zhai, Chengxiang
N1 - Publisher Copyright:
© 2017 ACM.
PY - 2017/8/7
Y1 - 2017/8/7
N2 - The main task of an Information Retrieval (IR) system is to return relevant documents to users in response to a query. However, this task is inherently empirical since the relevance of a document to a query is not well defined and, in general, can only be judged by the users who issued the query. Yet, defining relevance as rigorously as we can is essential to the development of both effective IR systems and sound evaluation metrics. As a result, modeling relevance has always been a central challenge in IR research for both retrieval model development and evaluation. Indeed, all information retrieval models developed so far (which are the basis of the algorithms used in all search engine applications) have explicitly or implicitly adopted one way or another to formalize the vague concept of relevance; similarly, all evaluation metrics of IR are meant to quantify the utility of the retrieved results from a user's perspective, and thus must also accurately reflect a user's view of relevance.
AB - The main task of an Information Retrieval (IR) system is to return relevant documents to users in response to a query. However, this task is inherently empirical since the relevance of a document to a query is not well defined and, in general, can only be judged by the users who issued the query. Yet, defining relevance as rigorously as we can is essential to the development of both effective IR systems and sound evaluation metrics. As a result, modeling relevance has always been a central challenge in IR research for both retrieval model development and evaluation. Indeed, all information retrieval models developed so far (which are the basis of the algorithms used in all search engine applications) have explicitly or implicitly adopted one way or another to formalize the vague concept of relevance; similarly, all evaluation metrics of IR are meant to quantify the utility of the retrieved results from a user's perspective, and thus must also accurately reflect a user's view of relevance.
UR - http://www.scopus.com/inward/record.url?scp=85029397091&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85029397091&partnerID=8YFLogxK
U2 - 10.1145/3077136.3084369
DO - 10.1145/3077136.3084369
M3 - Conference contribution
AN - SCOPUS:85029397091
T3 - SIGIR 2017 - Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval
SP - 1419
EP - 1420
BT - SIGIR 2017 - Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval
PB - Association for Computing Machinery
T2 - 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2017
Y2 - 7 August 2017 through 11 August 2017
ER -