On the effectiveness of Laplacian normalization for graph semi-supervised learning

Rie Johnson, Tong Zhang

Research output: Contribution to journal › Article › peer-review


This paper investigates the effect of Laplacian normalization in graph-based semi-supervised learning. To this end, we consider multi-class transductive learning on graphs with Laplacian regularization. Generalization bounds are derived using geometric properties of the graph. Specifically, by introducing a definition of graph cut from learning theory, we obtain generalization bounds that depend on the Laplacian regularizer. We then use this analysis to better understand the role of graph Laplacian matrix normalization. Under assumptions that the cut is small, we derive near-optimal normalization factors by approximately minimizing the generalization bounds. The analysis reveals the limitations of the standard degree-based normalization method in that the resulting normalization factors can vary significantly within each connected component with the same class label, which may cause inferior generalization performance. Our theory also suggests a remedy that does not suffer from this problem. Experiments confirm the superiority of the normalization scheme motivated by learning theory on artificial and real-world data sets.
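The abstract describes graph-based transductive learning with Laplacian regularization and compares normalization schemes for the Laplacian. As a minimal illustrative sketch (not the paper's exact algorithm), the snippet below builds a graph Laplacian, optionally applies the standard degree-based normalization D^{-1/2} L D^{-1/2} discussed in the paper, and solves the regularized least-squares problem for transductive label prediction. The function name `laplacian_ssl`, the toy graph, and all parameter values are hypothetical choices for demonstration.

```python
import numpy as np

def laplacian_ssl(W, y_labeled, labeled_idx, lam=0.1, normalize=True):
    """Transductive prediction by Laplacian-regularized least squares.

    Minimizes  sum_{i labeled} (f_i - y_i)^2 + lam * f' L f  over f,
    where L is the (optionally degree-normalized) graph Laplacian.
    """
    n = W.shape[0]
    d = W.sum(axis=1)
    L = np.diag(d) - W                       # unnormalized Laplacian D - W
    if normalize:
        s = 1.0 / np.sqrt(np.maximum(d, 1e-12))
        L = L * s[:, None] * s[None, :]      # D^{-1/2} L D^{-1/2}
    J = np.zeros((n, n))
    J[labeled_idx, labeled_idx] = 1.0        # indicator of labeled nodes
    y = np.zeros(n)
    y[labeled_idx] = y_labeled
    # First-order optimality of the objective gives (J + lam*L) f = J y.
    return np.linalg.solve(J + lam * L, y)

# Toy example: two tight 3-node clusters joined by one weak edge,
# with a single labeled node in each cluster.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.01                     # weak bridge (small cut)
f = laplacian_ssl(W, y_labeled=[1.0, -1.0], labeled_idx=[0, 3])
```

Because the cut between the clusters is small, the Laplacian penalty forces the prediction `f` to be nearly constant within each cluster, so the unlabeled nodes inherit the sign of their cluster's label.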

Original language: English (US)
Pages (from-to): 1489-1517
Number of pages: 29
Journal: Journal of Machine Learning Research
State: Published - Jul 2007
Externally published: Yes


Keywords

  • Graph learning
  • Laplacian regularization
  • Normalization of graph Laplacian
  • Transductive learning

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence


