Abstract
The dynamic nature of the Web highlights the scalability limitations of universal search engines. Topic-driven crawlers can address this problem by distributing the crawling process across users, queries, or even client computers. The context available to a topic-driven crawler allows for informed decisions about how to prioritize the links to be visited. Here we focus on the balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We investigate this issue for two different tasks: (i) seeking new relevant pages starting from a known relevant subset, and (ii) seeking relevant pages starting a few links away from the relevant subset. Using a framework and a number of quality metrics developed to evaluate topic-driven crawling algorithms in a fair way, we find that a mix of exploitation and exploration is essential for both tasks, despite a penalty in the early stage of the crawl.
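The exploitation/exploration balance described above can be illustrated with a minimal sketch of a crawl frontier. This is not the algorithm evaluated in the paper; `score` (topic relevance of a candidate link), `fetch_links` (page download and link extraction), and `epsilon` (the fraction of exploratory choices) are hypothetical placeholders introduced only for illustration.

```python
import heapq
import random

def crawl(seeds, score, fetch_links, epsilon=0.1, max_pages=1000):
    """Sketch of a topic-driven crawl mixing exploitation and exploration.

    score and fetch_links are assumed, caller-supplied functions; epsilon
    is the probability of picking a random frontier link instead of the
    highest-scoring one.
    """
    # Max-heap of candidate links, keyed by negated relevance score.
    frontier = [(-score(url), url) for url in seeds]
    heapq.heapify(frontier)
    visited = set()

    while frontier and len(visited) < max_pages:
        if random.random() < epsilon:
            # Exploration: pick a frontier link uniformly at random.
            i = random.randrange(len(frontier))
            _, url = frontier.pop(i)
            heapq.heapify(frontier)
        else:
            # Exploitation: pick the most promising (highest-scoring) link.
            _, url = heapq.heappop(frontier)

        if url in visited:
            continue
        visited.add(url)

        # Enqueue newly discovered links for future prioritization.
        for link in fetch_links(url):
            if link not in visited:
                heapq.heappush(frontier, (-score(link), link))

    return visited
```

With `epsilon = 0` this reduces to a purely greedy best-first crawler, while larger values trade short-term relevance for the chance of discovering relevant pages behind apparently suboptimal links, the tradeoff studied in the paper.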
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 88-97 |
| Number of pages | 10 |
| Journal | CEUR Workshop Proceedings |
| Volume | 702 |
| State | Published - 2002 |
| Externally published | Yes |
| Event | 2nd International Workshop on Web Dynamics, WebDyn 2002, in Conjunction with the 11th International World Wide Web Conference - Honolulu, HI, United States. Duration: May 7, 2002 → May 7, 2002 |
ASJC Scopus subject areas
- General Computer Science