Universal context tree least squares prediction

Andrew Carl Singer, Suleyman S. Kozat

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We investigate the problem of sequential prediction of individual sequences using a competitive algorithm approach. We have previously developed prediction algorithms that are universal with respect to the class of all linear predictors, such that the prediction algorithm competes against a continuous class of prediction algorithms, under the square error loss. In this paper, we introduce the use of a "context tree" to compete against a doubly exponential number of piecewise linear models. We use the context tree to achieve the performance of the best piecewise linear model that can choose its partition of the real line and its real-valued prediction parameters based on observing the entire sequence in advance, for the square error loss, uniformly, for any individual sequence. This performance is achieved with a prediction algorithm whose complexity is only linear in the depth of the context tree.
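The idea described in the abstract can be illustrated with a minimal sketch: a binary context tree of depth D partitions an interval of the real line into regions; each node keeps its own online (recursive least squares) linear predictor, and the predictors on the path selected by the previous sample are mixed with weights that decay exponentially in each node's cumulative squared error. This is a hypothetical simplification for illustration only, not the paper's exact algorithm; all class and parameter names (Node, ContextTreePredictor, the mixing constant c) are invented here.

```python
import numpy as np

class Node:
    """One region of the partition, with a scalar RLS predictor y ~ w * x."""
    def __init__(self, depth, lo, hi, max_depth):
        self.w = 0.0       # linear coefficient, learned online
        self.P = 1.0       # scalar inverse-covariance for the RLS update
        self.loss = 0.0    # cumulative squared prediction error at this node
        self.lo, self.hi = lo, hi
        self.children = None
        if depth < max_depth:
            mid = (lo + hi) / 2.0
            self.children = (Node(depth + 1, lo, mid, max_depth),
                             Node(depth + 1, mid, hi, max_depth))

class ContextTreePredictor:
    """Piecewise linear prediction via a depth-D binary context tree (sketch)."""
    def __init__(self, depth=4, lo=-1.0, hi=1.0, c=1.0):
        self.root = Node(0, lo, hi, depth)
        self.c = c  # temperature of the exponential performance weighting

    def _path(self, x_prev):
        """Nodes from the root to the leaf whose region contains x_prev."""
        node, path = self.root, []
        while node is not None:
            path.append(node)
            if node.children is None:
                break
            mid = (node.lo + node.hi) / 2.0
            node = node.children[0] if x_prev < mid else node.children[1]
        return path

    def predict(self, x_prev):
        path = self._path(x_prev)               # only D+1 nodes are touched,
        preds = np.array([n.w * x_prev for n in path])   # hence linear in depth
        losses = np.array([n.loss for n in path])
        wts = np.exp(-(losses - losses.min()) / (2.0 * self.c))
        wts /= wts.sum()
        return float(wts @ preds)

    def update(self, x_prev, y):
        for n in self._path(x_prev):
            e = y - n.w * x_prev
            n.loss += e * e
            n.P = n.P / (1.0 + n.P * x_prev * x_prev)  # scalar RLS update
            n.w += n.P * x_prev * e
```

Because only the D+1 nodes on the current path are read and updated per sample, the per-step complexity is linear in the tree depth, even though the tree implicitly competes against every partition it can represent, a doubly exponential number of piecewise models.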

Original language: English (US)
Title of host publication: Proceedings - 2006 IEEE International Symposium on Information Theory, ISIT 2006
Pages: 426-430
Number of pages: 5
DOI: 10.1109/ISIT.2006.261704
State: Published - Dec 1 2006
Event: 2006 IEEE International Symposium on Information Theory, ISIT 2006 - Seattle, WA, United States
Duration: Jul 9 2006 - Jul 14 2006

Publication series

Name: IEEE International Symposium on Information Theory - Proceedings
ISSN (Print): 2157-8101

Other

Other: 2006 IEEE International Symposium on Information Theory, ISIT 2006
Country: United States
City: Seattle, WA
Period: 7/9/06 - 7/14/06


ASJC Scopus subject areas

  • Theoretical Computer Science
  • Information Systems
  • Modeling and Simulation
  • Applied Mathematics

Cite this

Singer, A. C., & Kozat, S. S. (2006). Universal context tree least squares prediction. In Proceedings - 2006 IEEE International Symposium on Information Theory, ISIT 2006 (pp. 426-430). [4035996] (IEEE International Symposium on Information Theory - Proceedings). https://doi.org/10.1109/ISIT.2006.261704

@inproceedings{27f3e8eaa8a54667b35fce664648838b,
title = "Universal context tree least squares prediction",
abstract = "We investigate the problem of sequential prediction of individual sequences using a competitive algorithm approach. We have previously developed prediction algorithms that are universal with respect to the class of all linear predictors, such that the prediction algorithm competes against a continuous class of prediction algorithms, under the square error loss. In this paper, we introduce the use of a {"}context tree,{"} to compete against a doubly exponential number of piecewise linear models. We use the context tree to achieve the performance of the best piecewise linear model that can choose its partition of the real line and real-valued prediction parameters, based on observing the entire sequence in advance, for the square error loss, uniformly, for any individual sequence. This performance is achieved with a prediction algorithm whose complexity is only linear in the depth of the context tree.",
author = "Singer, {Andrew Carl} and Kozat, {Suleyman S.}",
year = "2006",
month = "12",
day = "1",
doi = "10.1109/ISIT.2006.261704",
language = "English (US)",
isbn = "1424405041",
series = "IEEE International Symposium on Information Theory - Proceedings",
pages = "426--430",
booktitle = "Proceedings - 2006 IEEE International Symposium on Information Theory, ISIT 2006",
}

TY - GEN

T1 - Universal context tree least squares prediction

AU - Singer, Andrew Carl

AU - Kozat, Suleyman S.

PY - 2006/12/1

Y1 - 2006/12/1

N2 - We investigate the problem of sequential prediction of individual sequences using a competitive algorithm approach. We have previously developed prediction algorithms that are universal with respect to the class of all linear predictors, such that the prediction algorithm competes against a continuous class of prediction algorithms, under the square error loss. In this paper, we introduce the use of a "context tree," to compete against a doubly exponential number of piecewise linear models. We use the context tree to achieve the performance of the best piecewise linear model that can choose its partition of the real line and real-valued prediction parameters, based on observing the entire sequence in advance, for the square error loss, uniformly, for any individual sequence. This performance is achieved with a prediction algorithm whose complexity is only linear in the depth of the context tree.

AB - We investigate the problem of sequential prediction of individual sequences using a competitive algorithm approach. We have previously developed prediction algorithms that are universal with respect to the class of all linear predictors, such that the prediction algorithm competes against a continuous class of prediction algorithms, under the square error loss. In this paper, we introduce the use of a "context tree," to compete against a doubly exponential number of piecewise linear models. We use the context tree to achieve the performance of the best piecewise linear model that can choose its partition of the real line and real-valued prediction parameters, based on observing the entire sequence in advance, for the square error loss, uniformly, for any individual sequence. This performance is achieved with a prediction algorithm whose complexity is only linear in the depth of the context tree.

UR - http://www.scopus.com/inward/record.url?scp=39049154885&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=39049154885&partnerID=8YFLogxK

U2 - 10.1109/ISIT.2006.261704

DO - 10.1109/ISIT.2006.261704

M3 - Conference contribution

AN - SCOPUS:39049154885

SN - 1424405041

SN - 9781424405046

T3 - IEEE International Symposium on Information Theory - Proceedings

SP - 426

EP - 430

BT - Proceedings - 2006 IEEE International Symposium on Information Theory, ISIT 2006

ER -