TY - GEN
T1 - Building a dictionary of image fragments
AU - Liao, Zicheng
AU - Farhadi, Ali
AU - Wang, Yang
AU - Endres, Ian
AU - Forsyth, David
PY - 2012
Y1 - 2012
N2 - We show how to build large dictionaries of meaningful image fragments. These fragments could represent objects, objects in a local context, or parts of scenes. Our fragments operate as region-based exemplars, and we show how they can be used for image classification, to localize objects, and to compose new images. While each of these activities has been demonstrated before, each has required manually extracted fragments. Because our method for fragment extraction is automatic, it can operate at a large scale. Our method uses recent advances in generic object detection techniques, together with discriminative tests, to obtain good, clean fragment sets with extensive diversity. Our fragments are organized by the tags of the source images to build a semantically organized fragment table. A good set of fragment exemplars describes only the object, rather than object-context. Context could help identify an object, but it could also contribute noise, because other objects might appear in the same context. We show a slight improvement in classification performance by two standard exemplar matching methods using our fragment dictionary over such methods using image exemplars. This suggests that knowing the support of an exemplar is valuable. Furthermore, we demonstrate that our automatically built fragment dictionary is capable of good localization. Finally, our fragment dictionary supports a keyword-based fragment search system, which allows artists to get the fragments they need to make image collages.
AB - We show how to build large dictionaries of meaningful image fragments. These fragments could represent objects, objects in a local context, or parts of scenes. Our fragments operate as region-based exemplars, and we show how they can be used for image classification, to localize objects, and to compose new images. While each of these activities has been demonstrated before, each has required manually extracted fragments. Because our method for fragment extraction is automatic, it can operate at a large scale. Our method uses recent advances in generic object detection techniques, together with discriminative tests, to obtain good, clean fragment sets with extensive diversity. Our fragments are organized by the tags of the source images to build a semantically organized fragment table. A good set of fragment exemplars describes only the object, rather than object-context. Context could help identify an object, but it could also contribute noise, because other objects might appear in the same context. We show a slight improvement in classification performance by two standard exemplar matching methods using our fragment dictionary over such methods using image exemplars. This suggests that knowing the support of an exemplar is valuable. Furthermore, we demonstrate that our automatically built fragment dictionary is capable of good localization. Finally, our fragment dictionary supports a keyword-based fragment search system, which allows artists to get the fragments they need to make image collages.
UR - http://www.scopus.com/inward/record.url?scp=84866706755&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84866706755&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2012.6248085
DO - 10.1109/CVPR.2012.6248085
M3 - Conference contribution
AN - SCOPUS:84866706755
SN - 9781467312264
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 3442
EP - 3449
BT - 2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2012
T2 - 2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2012
Y2 - 16 June 2012 through 21 June 2012
ER -