Class-based grouping in perspective images

Andrew Zisserman, Joe Mundy, David Alexander Forsyth, Jane Liu, Nic Pillow, Charlie Rothwell, Sven Utcke

Research output: Contribution to conference › Paper › peer-review


In any object recognition system, a primary task is to associate those image features, within an image of a complex scene, that arise from an individual object. The key idea here is that a geometric class defined in 3D induces relationships in the image which must hold between points on the image outline (the perspective projection of the object). The resulting image constraints enable both identification and grouping of image features belonging to objects of that class. The classes include surfaces of revolution, canal surfaces (pipes) and polyhedra. Recognition proceeds by first recognising an object as belonging to one of the classes (for example, a surface of revolution) and subsequently identifying the object (for example, as a particular vase). This differs from conventional object recognition systems, where recognition is generally targeted at particular objects. These classes also support the computation of 3D invariant descriptions including symmetry axes, canonical coordinate frames and projective signatures. The constraints and grouping methods are viewpoint invariant, and proceed with no information on object pose. We demonstrate the effectiveness of this class-based grouping on real, cluttered scenes using grouping algorithms developed for rotationally symmetric surfaces, canal surfaces and polyhedra.
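The grouping principle can be illustrated with a toy sketch. Under a simplifying weak-perspective assumption (not the full perspective treatment of the paper), the outline of a surface of revolution is bilaterally symmetric about the imaged axis, so candidate edge points can be scored by how well they satisfy that symmetry constraint. The function below is a hypothetical illustration of constraint checking, not the authors' algorithm:

```python
import numpy as np

def symmetry_score(points, axis_point, axis_dir):
    """Mean distance from each point's mirror image (reflected about
    the candidate axis) to its nearest neighbour in the set.
    A score near zero indicates the points satisfy the bilateral
    symmetry constraint and may be grouped together."""
    d = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_point
    # Reflect each point about the line through axis_point along d:
    # reflected = 2 * (rel . d) d - rel
    proj = rel @ d
    mirrored = axis_point + 2 * np.outer(proj, d) - rel
    # Brute-force nearest-neighbour residual (fine for small sets).
    dists = np.linalg.norm(points[None, :, :] - mirrored[:, None, :], axis=2)
    return dists.min(axis=1).mean()

# An outline that is symmetric about the vertical axis x = 0 ...
left = np.array([[-1.0, 0.0], [-1.2, 1.0], [-0.8, 2.0]])
right = left * np.array([-1.0, 1.0])
outline = np.vstack([left, right])

# ... scores near zero for the correct axis, higher for a wrong one.
good = symmetry_score(outline, np.array([0.0, 0.0]), np.array([0.0, 1.0]))
bad = symmetry_score(outline, np.array([0.5, 0.0]), np.array([0.0, 1.0]))
```

In the paper's setting the constraint is viewpoint invariant under full perspective, so no pose information is needed; the toy score above would serve only as the acceptance test inside a grouping loop over candidate feature sets.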

Original language: English (US)
Number of pages: 6
State: Published - Jan 1 1995
Externally published: Yes
Event: Proceedings of the 5th International Conference on Computer Vision - Cambridge, MA, USA
Duration: Jun 20 1995 - Jun 23 1995



ASJC Scopus subject areas

  • Engineering (all)

