TY - GEN
T1 - Automatic video annotation by mining speech transcripts
AU - Velivelli, Atulya
AU - Huang, Thomas S.
PY - 2006/12/21
Y1 - 2006/12/21
N2 - We describe a model for automatic prediction of text annotations for video data. The speech transcripts of the videos are clustered using an aspect model, and keywords are extracted based on the aspect distribution; in this way we capture the semantic information available in the video data. This technique for automatic keyword vocabulary construction greatly simplifies the labelling of video data. We then build a video shot vocabulary using both static image and motion cues. We use a maximum entropy criterion to learn a conditional exponential model by defining constraint features over combinations of the shot vocabulary and the keyword vocabulary. Our method uses a maximum a posteriori estimate of the exponential model to predict the annotations. We evaluate the ability of our model to predict annotations in terms of mean negative log-likelihood and retrieval performance on the test set. A comparison of the exponential model with baseline methods indicates that the results are encouraging.
AB - We describe a model for automatic prediction of text annotations for video data. The speech transcripts of the videos are clustered using an aspect model, and keywords are extracted based on the aspect distribution; in this way we capture the semantic information available in the video data. This technique for automatic keyword vocabulary construction greatly simplifies the labelling of video data. We then build a video shot vocabulary using both static image and motion cues. We use a maximum entropy criterion to learn a conditional exponential model by defining constraint features over combinations of the shot vocabulary and the keyword vocabulary. Our method uses a maximum a posteriori estimate of the exponential model to predict the annotations. We evaluate the ability of our model to predict annotations in terms of mean negative log-likelihood and retrieval performance on the test set. A comparison of the exponential model with baseline methods indicates that the results are encouraging.
UR - http://www.scopus.com/inward/record.url?scp=33845536019&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33845536019&partnerID=8YFLogxK
U2 - 10.1109/CVPRW.2006.39
DO - 10.1109/CVPRW.2006.39
M3 - Conference contribution
AN - SCOPUS:33845536019
SN - 0769526462
SN - 9780769526461
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
BT - 2006 Conference on Computer Vision and Pattern Recognition Workshop
T2 - 2006 Conference on Computer Vision and Pattern Recognition Workshops
Y2 - 17 June 2006 through 22 June 2006
ER -