TY - JOUR
T1 - Bayesian Two-Sample Hypothesis Testing Using the Uncertain Likelihood Ratio
T2 - Improving the Generalized Likelihood Ratio Test
AU - Hare, James Z.
AU - Liang, Yuchen
AU - Kaplan, Lance M.
AU - Veeravalli, Venugopal V.
N1 - This work was supported in part by DEVCOM Army Research Laboratory under Cooperative Agreement W911NF-17-2-0196 through the University of Illinois at Urbana-Champaign.
PY - 2025
Y1 - 2025
AB - Two-sample hypothesis testing is a common practice in many fields of science, where the goal is to identify whether a set of observations and a set of training data are drawn from the same distribution. Traditionally, this is achieved using parametric and non-parametric frequentist tests, such as the Generalized Likelihood Ratio (GLR) test. However, these tests are not optimal in a Neyman-Pearson sense, especially when the numbers of observations and training samples are finite. Therefore, in this work, we study a parametric Bayesian test, called the Uncertain Likelihood Ratio (ULR) test, and compare its performance to the traditional GLR test. We establish that the ULR test is the optimal test for any number of samples when the parameters of the likelihood models are drawn from the true prior distribution. We then study an asymptotic form of the ULR test statistic and compare it against the GLR test statistic. As a byproduct of this analysis, we establish a new asymptotic optimality property for the GLR test when the parameters of the likelihood models are drawn from the Jeffreys prior. Furthermore, we analyze conditions under which the ULR test outperforms the GLR test, and include a numerical study to validate the results.
KW - Empirically observed statistics
KW - generalized likelihood ratio test
KW - two-sample hypothesis testing
KW - uncertain likelihood ratio test
KW - uncertainty analysis
UR - https://www.scopus.com/pages/publications/105002051354
UR - https://www.scopus.com/pages/publications/105002051354#tab=citedBy
U2 - 10.1109/TSP.2025.3546169
DO - 10.1109/TSP.2025.3546169
M3 - Article
AN - SCOPUS:105002051354
SN - 1053-587X
VL - 73
SP - 1410
EP - 1425
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
ER -