TY - GEN
T1 - Prediction of Head-Neck Cancer Recurrence from PET/CT Images with Havrda-Charvat Entropy
AU - Brochet, Thibaud
AU - Lapuyade-Lahorgue, Jerome
AU - Li, Hua
AU - Vera, Pierre
AU - Decazes, Pierre
AU - Ruan, Su
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - This paper proposes a loss function based on Havrda-Charvat entropy in deep neural networks for outcome prediction in head-neck cancer. The Havrda-Charvat cross-entropy is a parameterized cross-entropy that generalizes the classical Shannon-based cross-entropy. Its parameter, denoted α, takes values in ]0, ∞[, and particular choices recover usual entropies, for instance the Shannon entropy for α = 1 or the Gini coefficient for α = 2. In this paper, we propose to use this entropy to predict cancer recurrence by incorporating it into a neural network in place of Shannon's entropy, for better adaptability. Our deep network is composed of a double auto-encoder to extract features and a classifier to predict cancer outcome. The experiments are conducted on a MICCAI challenge dataset of head-neck cancer. The influence of the parameter α on the results is studied, and an optimal interval of its values is identified. The results show that Havrda-Charvat entropy can achieve better prediction performance than Shannon entropy, which is currently the most widely used entropy in prediction tasks.
AB - This paper proposes a loss function based on Havrda-Charvat entropy in deep neural networks for outcome prediction in head-neck cancer. The Havrda-Charvat cross-entropy is a parameterized cross-entropy that generalizes the classical Shannon-based cross-entropy. Its parameter, denoted α, takes values in ]0, ∞[, and particular choices recover usual entropies, for instance the Shannon entropy for α = 1 or the Gini coefficient for α = 2. In this paper, we propose to use this entropy to predict cancer recurrence by incorporating it into a neural network in place of Shannon's entropy, for better adaptability. Our deep network is composed of a double auto-encoder to extract features and a classifier to predict cancer outcome. The experiments are conducted on a MICCAI challenge dataset of head-neck cancer. The influence of the parameter α on the results is studied, and an optimal interval of its values is identified. The results show that Havrda-Charvat entropy can achieve better prediction performance than Shannon entropy, which is currently the most widely used entropy in prediction tasks.
KW - CT images
KW - Deep neural networks
KW - Havrda-Charvat entropy
KW - PET images
KW - Shannon entropy
KW - generalized entropies
KW - head-neck cancer
KW - parameter estimation
KW - recurrence prediction
UR - http://www.scopus.com/inward/record.url?scp=85179554085&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85179554085&partnerID=8YFLogxK
U2 - 10.1109/IPTA59101.2023.10320046
DO - 10.1109/IPTA59101.2023.10320046
M3 - Conference contribution
AN - SCOPUS:85179554085
T3 - 2023 12th International Conference on Image Processing Theory, Tools and Applications, IPTA 2023
BT - 2023 12th International Conference on Image Processing Theory, Tools and Applications, IPTA 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 12th International Conference on Image Processing Theory, Tools and Applications, IPTA 2023
Y2 - 16 October 2023 through 19 October 2023
ER -
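
A minimal, illustrative sketch of the loss described in the abstract, assuming one common formulation of the Havrda-Charvat cross-entropy, HC_alpha(p, q) = (1 / (1 - alpha)) * (sum_i p_i * q_i^(alpha - 1) - 1), which tends to the Shannon cross-entropy as alpha -> 1 and gives the Gini-type loss 1 - sum_i p_i * q_i at alpha = 2. The class name, default alpha value, and numerical clamp below are assumptions made for illustration only, not the authors' implementation.

import torch
import torch.nn as nn


class HavrdaCharvatLoss(nn.Module):
    # Hypothetical Havrda-Charvat cross-entropy loss (sketch, not the paper's code).
    def __init__(self, alpha: float = 1.5, eps: float = 1e-8):
        super().__init__()
        if alpha <= 0:
            raise ValueError("alpha must lie in ]0, +inf[")
        self.alpha = alpha
        self.eps = eps

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits: (batch, num_classes); targets: (batch,) integer class labels.
        q = torch.softmax(logits, dim=1).clamp_min(self.eps)   # predicted distribution
        p = nn.functional.one_hot(targets, q.size(1)).float()  # one-hot ground truth
        if abs(self.alpha - 1.0) < 1e-6:
            # alpha -> 1 recovers the usual Shannon cross-entropy.
            return -(p * q.log()).sum(dim=1).mean()
        hc = ((p * q.pow(self.alpha - 1.0)).sum(dim=1) - 1.0) / (1.0 - self.alpha)
        return hc.mean()

# Possible drop-in usage in place of nn.CrossEntropyLoss:
#   loss = HavrdaCharvatLoss(alpha=1.5)(model_outputs, labels)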