TY - GEN
T1 - A visual annotation framework using common-sensical and linguistic relationships for semantic media retrieval
AU - Shevade, Bageshree
AU - Sundaram, Hari
PY - 2006
Y1 - 2006
N2 - In this paper, we present a novel image annotation approach with an emphasis on (a) common-sense-based semantic propagation, (b) visual annotation interfaces, and (c) novel evaluation schemes. The annotation system is interactive, intuitive, and real-time. We attempt to propagate the semantics of the annotations using WordNet and ConceptNet, together with low-level features extracted from the images. We introduce novel semantic dissimilarity measures and propagation frameworks. We develop a novel visual annotation interface that allows a user to group images by creating visual concepts through direct-manipulation metaphors, without manual annotation. We also develop a new evaluation technique for annotation based on commonsensical relationships between concepts. Our experimental results on three different datasets indicate that the annotation system performs very well. The semantic propagation results are good: we converge close to the semantics of the image by annotating a small fraction (~16.8%) of the database images.
AB - In this paper, we present a novel image annotation approach with an emphasis on (a) common-sense-based semantic propagation, (b) visual annotation interfaces, and (c) novel evaluation schemes. The annotation system is interactive, intuitive, and real-time. We attempt to propagate the semantics of the annotations using WordNet and ConceptNet, together with low-level features extracted from the images. We introduce novel semantic dissimilarity measures and propagation frameworks. We develop a novel visual annotation interface that allows a user to group images by creating visual concepts through direct-manipulation metaphors, without manual annotation. We also develop a new evaluation technique for annotation based on commonsensical relationships between concepts. Our experimental results on three different datasets indicate that the annotation system performs very well. The semantic propagation results are good: we converge close to the semantics of the image by annotating a small fraction (~16.8%) of the database images.
UR - http://www.scopus.com/inward/record.url?scp=33745522175&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33745522175&partnerID=8YFLogxK
U2 - 10.1007/11670834_20
DO - 10.1007/11670834_20
M3 - Conference contribution
AN - SCOPUS:33745522175
SN - 3540321748
SN - 9783540321743
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 251
EP - 265
BT - Adaptive Multimedia Retrieval
T2 - 3rd International Workshop on Adaptive Multimedia Retrieval: User, Context, and Feedback, AMR 2005
Y2 - 28 July 2005 through 29 July 2005
ER -