Application-oriented estimator selection

Dimitrios Katselis, Cristian R. Rojas

Research output: Contribution to journal › Article › peer-review

Abstract

Designing the optimal experiment for the recovery of an unknown system with respect to the end performance metric of interest is a recently established practice in the system identification literature. This practice leads to superior end performance compared to designing the experiment with respect to some generic metric quantifying the distance of the estimated model from the true one. This is usually done by choosing and fixing the estimation method to be either a standard maximum likelihood (ML) or a Bayesian estimator. In this paper, we pose the intuitive question: can we design better estimators than the usual ones with respect to an end performance metric of interest? Based on a simple linear regression example, we answer this question affirmatively.
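To make the abstract's idea concrete, the following is a minimal Monte Carlo sketch, not taken from the paper: in a scalar linear regression, it compares the ordinary least-squares (ML) estimator against an alternative shrinkage-type estimator, judging both by an assumed end performance metric (squared prediction error at a hypothetical operating point x_op) rather than by parameter error alone. The operating point, the shrinkage rule, and all numerical values are illustrative assumptions, not the estimator construction used by the authors.

```python
# Hypothetical illustration (not the paper's construction): compare an
# ordinary least-squares estimate of a scalar gain theta with a shrunk
# estimate, evaluating both under an assumed end performance metric.
import numpy as np

rng = np.random.default_rng(0)

theta_true = 2.0   # unknown gain (known only to the simulation)
sigma = 1.0        # noise standard deviation
N = 10             # number of training samples
x_op = 3.0         # operating point defining the end metric (assumption)
n_mc = 20000       # Monte Carlo runs

end_metric_ls = np.empty(n_mc)
end_metric_shrunk = np.empty(n_mc)

for k in range(n_mc):
    x = rng.uniform(0.5, 1.5, size=N)                # training inputs
    y = theta_true * x + sigma * rng.standard_normal(N)

    # Ordinary least-squares / ML estimate of theta.
    theta_ls = (x @ y) / (x @ x)

    # A simple alternative estimator: shrink the LS estimate toward zero.
    # The shrinkage factor below is an illustrative plug-in choice, not
    # the estimator derived in the paper.
    var_ls = sigma**2 / (x @ x)
    c = theta_ls**2 / (theta_ls**2 + var_ls)
    theta_shrunk = c * theta_ls

    # End performance metric: squared prediction error at x_op.
    end_metric_ls[k] = ((theta_true - theta_ls) * x_op) ** 2
    end_metric_shrunk[k] = ((theta_true - theta_shrunk) * x_op) ** 2

print(f"mean end metric, LS estimator     : {end_metric_ls.mean():.4f}")
print(f"mean end metric, shrunk estimator : {end_metric_shrunk.mean():.4f}")
```

Running the sketch prints the Monte Carlo average of the assumed end metric for each estimator, which is the kind of comparison the question in the abstract calls for; the specific outcome depends on the assumed signal-to-noise ratio and metric.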

Original language: English (US)
Article number: 6926736
Pages (from-to): 489-493
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 22
Issue number: 4
DOIs
State: Published - Apr 1 2015

Keywords

  • End performance metric
  • estimation
  • experiment design
  • training

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics
