A Theoretical Analysis of Soft-Label vs Hard-Label Training in Neural Networks

Research output: Contribution to journal › Conference article › peer-review

Abstract

Knowledge distillation, where a small student model learns from a pre-trained large teacher model, has achieved substantial empirical success since the seminal work of Hinton et al. (2015). Despite prior theoretical studies exploring the benefits of knowledge distillation, an important question remains unanswered: why does soft-label training from the teacher require significantly fewer neurons than directly training a small neural network with hard labels? To address this, we first present motivating experimental results using simple neural network models on a binary classification problem. These results demonstrate that soft-label training consistently outperforms hard-label training in accuracy, with the performance gap becoming more pronounced as the dataset becomes increasingly difficult to classify. We then substantiate these observations with a theoretical contribution based on two-layer neural network models. Specifically, we show that soft-label training using gradient descent requires only O(1/(γ²ϵ)) neurons to achieve a classification loss, averaged over epochs, smaller than some ϵ > 0, where γ is the separation margin of the limiting kernel. In contrast, hard-label training requires O((1/γ⁴) ln(1/ϵ)) neurons, as derived from an adapted version of the gradient descent analysis in Ji and Telgarsky (2020). This implies that when γ ≤ ϵ, i.e., when the dataset is challenging to classify, the neuron requirement for soft-label training can be significantly lower than that for hard-label training. Finally, we present experimental results on deep neural networks, further validating these theoretical findings.
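To make the gap between the two bounds concrete, the sketch below evaluates both neuron-count expressions for an illustrative hard-to-classify setting with γ ≤ ϵ. The absolute constants hidden inside the O(·) notation are omitted, and the specific values of γ and ϵ are hypothetical choices for illustration only, not figures from the paper.

```python
import math

def soft_label_neurons(gamma, eps):
    """Soft-label bound from the abstract, O(1/(gamma^2 * eps)); constants omitted."""
    return 1.0 / (gamma**2 * eps)

def hard_label_neurons(gamma, eps):
    """Hard-label bound from the abstract, O((1/gamma^4) * ln(1/eps)); constants omitted."""
    return (1.0 / gamma**4) * math.log(1.0 / eps)

# Hypothetical "hard dataset" regime: small margin, gamma <= eps
gamma, eps = 0.01, 0.1
print(f"soft-label bound: {soft_label_neurons(gamma, eps):.2e}")
print(f"hard-label bound: {hard_label_neurons(gamma, eps):.2e}")
```

With these illustrative values the soft-label bound is roughly 1/(γ²ϵ) = 10⁵ while the hard-label bound is on the order of 10⁸, matching the abstract's observation that soft-label training can require far fewer neurons when γ ≤ ϵ.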

Original language: English (US)
Pages (from-to): 1078-1089
Number of pages: 12
Journal: Proceedings of Machine Learning Research
Volume: 283
State: Published - 2025
Event: 7th Annual Learning for Dynamics and Control Conference, L4DC 2025 - Ann Arbor, United States
Duration: Jun 4 2025 - Jun 6 2025

Keywords

  • Knowledge Distillation
  • Model Compression
  • Projected Gradient Descent

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
