TY - GEN
T1 - Unsupervised co-segmentation of tumor in PET-CT images using belief functions based fusion
AU - Lian, Chunfeng
AU - Li, Hua
AU - Vera, Pierre
AU - Ruan, Su
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/5/23
Y1 - 2018/5/23
N2 - Accurate segmentation of the target tumor is a precondition for effective radiation therapy. While hybrid positron emission tomography-computed tomography (PET-CT) has become a standard imaging tool in the clinical practice of radiation oncology, many existing segmentation methods still operate on a single modality. We propose an automatic 3-D method based on unsupervised learning to jointly delineate tumor contours in PET-CT images, considering that the two distinct modalities can provide complementary information to each other and thereby improve segmentation. As PET-CT images are noisy and blurry, the theory of belief functions is adopted to model the uncertain and imprecise image information and to fuse it in a stable way. To ensure reliable clustering in each modality, an adaptive distance metric is proposed to quantify distortions, and spatial information is taken into account. A novel context term is designed to encourage consistent segmentation between the two modalities. In addition, during the iterative process of unsupervised learning, a specific fusion strategy is applied to further adjust the results for the two distinct modalities. The proposed co-segmentation method has been evaluated on fifteen PET-CT images of non-small cell lung cancer (NSCLC) patients, showing good performance compared to other methods.
AB - Accurate segmentation of the target tumor is a precondition for effective radiation therapy. While hybrid positron emission tomography-computed tomography (PET-CT) has become a standard imaging tool in the clinical practice of radiation oncology, many existing segmentation methods still operate on a single modality. We propose an automatic 3-D method based on unsupervised learning to jointly delineate tumor contours in PET-CT images, considering that the two distinct modalities can provide complementary information to each other and thereby improve segmentation. As PET-CT images are noisy and blurry, the theory of belief functions is adopted to model the uncertain and imprecise image information and to fuse it in a stable way. To ensure reliable clustering in each modality, an adaptive distance metric is proposed to quantify distortions, and spatial information is taken into account. A novel context term is designed to encourage consistent segmentation between the two modalities. In addition, during the iterative process of unsupervised learning, a specific fusion strategy is applied to further adjust the results for the two distinct modalities. The proposed co-segmentation method has been evaluated on fifteen PET-CT images of non-small cell lung cancer (NSCLC) patients, showing good performance compared to other methods.
KW - Belief Functions
KW - Clustering
KW - Information Fusion
KW - PET-CT
KW - Tumor Co-Segmentation
UR - http://www.scopus.com/inward/record.url?scp=85048112294&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85048112294&partnerID=8YFLogxK
U2 - 10.1109/ISBI.2018.8363559
DO - 10.1109/ISBI.2018.8363559
M3 - Conference contribution
AN - SCOPUS:85048112294
T3 - Proceedings - International Symposium on Biomedical Imaging
SP - 220
EP - 223
BT - 2018 IEEE 15th International Symposium on Biomedical Imaging, ISBI 2018
PB - IEEE Computer Society
T2 - 15th IEEE International Symposium on Biomedical Imaging, ISBI 2018
Y2 - 4 April 2018 through 7 April 2018
ER -