Abstract
A common problem that arises in adaptive filtering, autoregressive modeling, or linear prediction is the selection of an appropriate order for the underlying linear parametric model. We address this problem and develop a sequential algorithm whose sequentially accumulated mean squared prediction error is as good as that of any linear predictor of order less than some M, where the parameters may be tuned to the data. The linear prediction problem is transformed into one of sequential probability assignment from universal coding theory. In this context, we prove that the algorithm achieves this performance uniformly for every individual sequence at the cost of a model redundancy term, which is at most proportional to n⁻¹ ln(M), and a parameter redundancy term, which is proportional to n⁻¹ ln(n), where n is the length of the data. Universal prediction and equalization algorithms that use a performance-weighted average of all model orders less than M are presented. Efficient lattice filters are used to generate all of the models recursively, resulting in a complexity of the universal algorithm that is no larger than that of the largest model order. Examples of prediction performance are provided for autoregressive and speech data as well as an example of adaptive data equalization.
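The performance-weighted averaging described in the abstract can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy version, not the paper's algorithm: each model order m ≤ M is refit by ordinary least squares at every step (the paper instead uses efficient recursive lattice filters), and each order's prediction is combined using exponential weights on its accumulated squared prediction error. The function name `universal_order_predictor` and the weighting constant `c` are illustrative choices.

```python
import numpy as np

def universal_order_predictor(x, M, c=2.0):
    """Toy performance-weighted mixture over AR model orders 1..M.

    At each time t, every order-m predictor is refit by least squares on
    the past (the paper uses lattice filters for efficiency instead), and
    the universal prediction is an exponentially weighted average of the
    per-order predictions, weighted by accumulated squared error.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    losses = np.zeros(M)        # accumulated squared error of each order
    u_pred = np.zeros(n)        # universal predictions
    for t in range(n):
        model_preds = np.zeros(M)
        for m in range(1, M + 1):
            if t > m:
                # Regression x[k] ~ (x[k-1], ..., x[k-m]) over the past.
                A = np.array([x[k - m:k][::-1] for k in range(m, t)])
                b = x[m:t]
                w, *_ = np.linalg.lstsq(A, b, rcond=None)
                model_preds[m - 1] = w @ x[t - m:t][::-1]
        # Exponential weighting by past performance.
        weights = np.exp(-losses / c)
        weights /= weights.sum()
        u_pred[t] = weights @ model_preds
        losses += (model_preds - x[t]) ** 2
    return u_pred
```

On data generated by a low-order AR process, the mixture's weight concentrates on the well-matched orders, so the universal predictor tracks the best fixed-order predictor without knowing the true order in advance.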
| Original language | English (US) |
| --- | --- |
| Number of pages | 1 |
| Journal | IEEE Transactions on Signal Processing |
| Volume | 46 |
| Issue number | 2 |
| State | Published - 1998 |
| Externally published | Yes |
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering