The main task of an Information Retrieval (IR) system is to return relevant documents to users in response to a query. This task is inherently empirical, however, since the relevance of a document to a query is not well defined and, in general, can only be judged by the users who issued the query. Yet defining relevance as rigorously as we can is essential to the development of both effective IR systems and sound evaluation metrics. As a result, modeling relevance has always been a central challenge in IR research, for both retrieval model development and evaluation. Indeed, all retrieval models developed so far (which form the basis of the algorithms used in all search engine applications) have, explicitly or implicitly, adopted some way to formalize the vague concept of relevance; similarly, all IR evaluation metrics are meant to quantify the utility of the retrieved results from a user's perspective, and thus must also accurately reflect the user's view of relevance.
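To make the connection between relevance judgments and evaluation metrics concrete, the following is a minimal sketch (not taken from the text; the function and variable names are illustrative) of how a simple metric such as precision at rank k turns a user's binary relevance judgments into a utility score for a ranked result list:

```python
# Illustrative sketch: an evaluation metric converts user relevance
# judgments into a single utility score for a ranked list of documents.
# The names below (precision_at_k, judgments, ranking) are hypothetical.

def precision_at_k(ranked_doc_ids, relevant_ids, k):
    """Fraction of the top-k retrieved documents judged relevant."""
    top_k = ranked_doc_ids[:k]
    return sum(1 for doc in top_k if doc in relevant_ids) / k

# Binary relevance judgments for one query, as provided by the user.
judgments = {"d2", "d5"}
# A ranked list returned by some retrieval model.
ranking = ["d2", "d7", "d5", "d1"]

print(precision_at_k(ranking, judgments, 2))  # 0.5
```

The key point is that the metric's value is only as meaningful as the underlying judgments: if the binary judgments fail to capture the user's actual view of relevance, the computed utility score misrepresents system effectiveness.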