In this paper, we consider the problem of constructing wrappers for web information extraction that are robust to changes in websites. We study robustness formally under two models: the adversarial model, where we analyze the worst-case robustness of wrappers, and the probabilistic model, where we analyze the expected robustness of wrappers as web pages evolve. Under both models, we present efficient algorithms for constructing the provably most robust wrapper. By evaluating on real websites, we demonstrate that in practice our algorithms are highly effective in coping with changes in websites, and reduce wrapper breakage by up to 500% over existing techniques.
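To make the "expected robustness" idea concrete, the following is a minimal sketch (not the paper's algorithm): candidate XPath-style wrappers are scored by how often they still extract the target value across a set of simulated page versions, and the highest-scoring candidate is kept. The page snapshots, selector names, and target value are all hypothetical examples.

```python
import xml.etree.ElementTree as ET

SNAPSHOTS = [
    # original page
    "<html><body><div id='main'><span class='price'>$10</span></div></body></html>",
    # site redesign: the enclosing div is renamed, the price span survives
    "<html><body><div id='content'><span class='price'>$10</span></div></body></html>",
    # another change: an extra level of nesting is introduced
    "<html><body><div id='main'><p><span class='price'>$10</span></p></div></body></html>",
]

CANDIDATES = {
    # brittle: pins the exact ancestor id and child depth
    "by-id": ".//div[@id='main']/span[@class='price']",
    # more robust: keys only on the class attribute, at any depth
    "by-class": ".//span[@class='price']",
}

def robustness(xpath, snapshots, target="$10"):
    """Fraction of snapshots on which the selector still finds the target."""
    hits = 0
    for page in snapshots:
        root = ET.fromstring(page)
        if any(e.text == target for e in root.findall(xpath)):
            hits += 1
    return hits / len(snapshots)

scores = {name: robustness(xp, SNAPSHOTS) for name, xp in CANDIDATES.items()}
best = max(scores, key=scores.get)  # the empirically most robust wrapper
```

Here the class-based selector survives all three snapshots while the id-pinned one breaks on two, illustrating why wrappers that depend on fewer volatile page features tend to score higher under the probabilistic model.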
- Original language: English (US)
- Number of pages: 12
- Journal: Proceedings of the VLDB Endowment
- State: Published - Aug 1 2011
ASJC Scopus subject areas
- Computer Science (miscellaneous)
- Computer Science (all)