Abstract
In this paper, we consider the problem of constructing wrappers for web information extraction that are robust to changes in websites. We study robustness formally under two models: the adversarial model, where we look at the worst-case robustness of wrappers, and the probabilistic model, where we look at the expected robustness of wrappers as web pages evolve. Under both models, we present efficient algorithms for constructing the provably most robust wrapper. By evaluating on real websites, we demonstrate that in practice our algorithms are highly effective in coping with changes in websites, and reduce wrapper breakage by up to 500% over existing techniques.
| Original language | English (US) |
|---|---|
| Pages (from-to) | 980-991 |
| Number of pages | 12 |
| Journal | Proceedings of the VLDB Endowment |
| Volume | 4 |
| Issue number | 11 |
| DOIs | |
| State | Published - Aug 2011 |
| Event | 37th International Conference on Very Large Data Bases, VLDB 2011 - Seattle, United States. Duration: Aug 29 2011 – Sep 3 2011 |
ASJC Scopus subject areas
- Computer Science (miscellaneous)
- General Computer Science
Fingerprint
Dive into the research topics of 'Optimal schemes for robust web extraction'. Together they form a unique fingerprint.