Abstract
We present results for automated text categorization of the Reuters-810000 collection of news stories. Our experiments use the entire one-year collection of 810,000 stories and the entire subject index. We divide the data into monthly groups and provide an initial benchmark of text categorization performance on the complete collection. Experimental results show that efficient sparse-feature implementations of linear methods and decision trees, using a global unstemmed dictionary, can readily handle applications of this size. Predictive performance is approximately as strong as the best results for the much smaller, older Reuters collections. Detailed results are reported for individual time periods. It is shown that a smaller time horizon does not appreciably diminish predictive quality, implying reduced demands for retraining when sample size is large.
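The core idea in the abstract — a linear classifier over sparse term-count features keyed by a global unstemmed dictionary — can be illustrated with a minimal sketch. This is not the authors' implementation; the perceptron update, the tokenizer, and the toy documents below are illustrative assumptions, chosen only to show how sparse feature vectors (token → count) keep per-document work proportional to document length rather than dictionary size.

```python
from collections import Counter

def tokenize(text):
    # Unstemmed tokens: lowercase whole words, no stemming or lemmatization.
    return [w for w in text.lower().split() if w.isalpha()]

class SparsePerceptron:
    """Binary linear classifier over sparse term-count features."""

    def __init__(self):
        self.weights = {}  # global dictionary: token -> learned weight
        self.bias = 0.0

    def score(self, counts):
        # Only tokens present in the document are touched (sparse dot product).
        return self.bias + sum(self.weights.get(t, 0.0) * c
                               for t, c in counts.items())

    def fit(self, docs, labels, epochs=10):
        for _ in range(epochs):
            for text, y in zip(docs, labels):  # y in {+1, -1}
                counts = Counter(tokenize(text))
                pred = 1 if self.score(counts) > 0 else -1
                if pred != y:
                    # Perceptron update touches only this document's tokens.
                    for t, c in counts.items():
                        self.weights[t] = self.weights.get(t, 0.0) + y * c
                    self.bias += y

    def predict(self, text):
        return 1 if self.score(Counter(tokenize(text))) > 0 else -1

# Toy data (hypothetical, not from the Reuters collection):
docs = ["stocks rally on earnings news", "rain expected over the weekend",
        "shares climb after earnings beat", "sunny skies forecast today"]
labels = [1, -1, 1, -1]  # +1 = finance topic, -1 = not
clf = SparsePerceptron()
clf.fit(docs, labels)
print(clf.predict("earnings push shares higher"))  # → 1
```

Because the weight dictionary grows only with observed tokens and each update is proportional to a document's length, this style of implementation scales to corpora of hundreds of thousands of stories, which is the scalability point the abstract makes.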
| Original language | English (US) |
|---|---|
| Pages (from-to) | 209-221 |
| Number of pages | 13 |
| Journal | Information Processing and Management |
| Volume | 40 |
| Issue number | 2 |
| DOIs | |
| State | Published - Mar 2004 |
| Externally published | Yes |
Keywords
- Benchmark
- Classification of very large corpora
- Machine learning
- Scalability
- Text categorization
ASJC Scopus subject areas
- Information Systems
- Media Technology
- Computer Science Applications
- Management Science and Operations Research
- Library and Information Sciences