Learning a maximum margin subspace for image retrieval

Xiaofei He, Deng Cai, Jiawei Han

Research output: Contribution to journal › Article › peer-review


One of the fundamental problems in Content-Based Image Retrieval (CBIR) has been the gap between low-level visual features and high-level semantic concepts. To narrow this gap, relevance feedback is introduced into image retrieval. With the user-provided information, a classifier can be learned to distinguish between positive and negative examples. However, in real-world applications, the number of user feedback examples is usually too small compared to the dimensionality of the image space. In order to cope with the high dimensionality, we propose a novel semisupervised method for dimensionality reduction called Maximum Margin Projection (MMP). MMP aims at maximizing the margin between positive and negative examples at each local neighborhood. Different from traditional dimensionality reduction algorithms such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which effectively see only the global Euclidean structure, MMP is designed for discovering the local manifold structure. Therefore, MMP is likely to be more suitable for image retrieval, where nearest neighbor search is usually involved. After projecting the images into a lower dimensional subspace, the relevant images get closer to the query image; thus, the retrieval performance can be enhanced. The experimental results on the Corel image database demonstrate the effectiveness of our proposed algorithm.
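The abstract describes MMP as a margin-maximizing linear projection built from local neighborhoods of labeled (positive/negative) and unlabeled images. A minimal sketch of that idea, assuming a simplified graph-based formulation: build a within-class graph (same-label or unlabeled neighbors) and a between-class graph (different-label neighbors), then solve a generalized eigenproblem that favors directions separating the two. The function name `mmp_sketch`, the k-NN graph construction, and the `alpha` regularizer are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np
from scipy.linalg import eigh

def mmp_sketch(X, labels, k=5, d_out=2, alpha=0.5):
    """Simplified semi-supervised margin-maximizing projection (a sketch,
    not the paper's exact MMP).
    X: (n_samples, n_features); labels: -1 for unlabeled, 0/1 for
    negative/positive relevance-feedback examples."""
    n = X.shape[0]
    # Pairwise squared distances -> k-nearest-neighbor lists (skip self).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]
    Ww = np.zeros((n, n))  # within-class / unlabeled neighbor graph
    Wb = np.zeros((n, n))  # between-class neighbor graph
    for i in range(n):
        for j in knn[i]:
            if labels[i] == -1 or labels[j] == -1 or labels[i] == labels[j]:
                Ww[i, j] = Ww[j, i] = 1.0
            else:
                Wb[i, j] = Wb[j, i] = 1.0
    # Graph Laplacians of the two neighborhood graphs.
    Lw = np.diag(Ww.sum(1)) - Ww
    Lb = np.diag(Wb.sum(1)) - Wb
    # Maximize between-class separation relative to within-class smoothness:
    # generalized eigenproblem A v = lambda B v (B regularized for stability).
    A = X.T @ Lb @ X
    B = X.T @ Lw @ X + alpha * np.eye(X.shape[1])
    _, V = eigh(A, B)          # eigenvalues in ascending order
    return X @ V[:, -d_out:]   # project onto the top-margin directions
```

In the retrieval setting sketched above, nearest-neighbor search would then be run in the reduced space returned by the projection, where relevant images are pulled closer to the query.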

Original language: English (US)
Pages (from-to): 189-201
Number of pages: 13
Journal: IEEE Transactions on Knowledge and Data Engineering
Issue number: 2
State: Published - Feb 2008


Keywords

  • Dimensionality reduction
  • Image retrieval
  • Multimedia information systems
  • Relevance feedback

ASJC Scopus subject areas

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics

