Crawling the Web

Gautam Pant, Padmini Srinivasan, Filippo Menczer

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

The large size and the dynamic nature of the Web make it necessary to continually maintain Web-based information retrieval systems. Crawlers facilitate this process by following hyperlinks in Web pages to automatically download new and updated pages. While some systems rely on crawlers that exhaustively crawl the Web, others incorporate “focus” within their crawlers to harvest application- or topic-specific collections. In this chapter we discuss the basic issues related to developing an infrastructure for crawlers. This is followed by a review of several topical crawling algorithms and of the evaluation metrics that may be used to judge their performance. Given that many innovative applications of Web crawling are still being invented, we briefly discuss some that have already been developed.
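To make the abstract's notion of a “focused” crawl concrete, here is a minimal, illustrative sketch — not the chapter's own algorithm — of a best-first topical crawler. It combines two of the keywords below: a priority queue orders the frontier, and cosine similarity between a link's anchor text and the topic supplies the relevance score. The page graph, URLs, and term lists are invented toy data; a real crawler would fetch and parse live pages instead.

```python
import heapq
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two bags of words (lists of terms)."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_first_crawl(seed, pages, topic, max_pages=10):
    """Best-first topical crawl over an in-memory 'web'.

    pages maps url -> (page_terms, {outlink_url: anchor_terms}).
    Each frontier entry is prioritized by the cosine similarity of its
    anchor text to the topic terms; heapq is a min-heap, so scores are
    negated to pop the most promising link first.
    """
    frontier = [(-1.0, seed)]          # the seed gets top priority
    visited, order = set(), []
    while frontier and len(order) < max_pages:
        _, url = heapq.heappop(frontier)
        if url in visited or url not in pages:
            continue                   # skip duplicates and dead links
        visited.add(url)
        order.append(url)
        _terms, outlinks = pages[url]
        for link, anchor in outlinks.items():
            if link not in visited:
                score = cosine_similarity(anchor, topic)
                heapq.heappush(frontier, (-score, link))
    return order
```

On a toy graph whose “sports” branch carries topical anchor text, this crawler exhausts that branch before touching unrelated pages, which is the behavior a topical (focused) crawler is evaluated on.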
Original language: English (US)
Title of host publication: Web Dynamics
Editors: Mark Levene, Alexandra Poulovassilis
Publisher: Springer
Chapter: 7
Pages: 153-177
ISBN (Electronic): 978366210841
ISBN (Print): 9783642073779
State: Published - 2004
Externally published: Yes

Keywords

  • Priority Queue
  • Context Graph
  • Anchor Text
  • Relevance Score
  • Cosine Similarity
