Greedy Modality Selection via Approximate Submodular Maximization

Runxiang Cheng, Gargi Balasubramaniam, Yifei He, Yao Hung Hubert Tsai, Han Zhao

Research output: Contribution to journal › Conference article › peer-review

Abstract

Multimodal learning considers learning from multi-modality data, aiming to fuse heterogeneous sources of information. However, it is not always feasible to leverage all available modalities due to memory constraints. Furthermore, training on all modalities may be inefficient when the data contain redundant information, e.g., when different subsets of modalities provide similar performance. In light of these challenges, we study modality selection, with the goal of efficiently selecting the most informative and complementary modalities under certain computational constraints. We formulate a theoretical framework for optimizing modality selection in multimodal learning and introduce a utility measure to quantify the benefit of selecting a modality. For this optimization problem, we present efficient algorithms when the utility measure exhibits monotonicity and approximate submodularity. We also connect the utility measure to existing Shapley-value-based feature importance scores. Lastly, we demonstrate the efficacy of our algorithm on synthetic (Patch-MNIST) and real-world (PEMS-SF, CMU-MOSI) datasets.
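
To illustrate the selection setting described above, the sketch below shows the classical greedy routine for maximizing a monotone, (approximately) submodular set function under a cardinality budget. It is a generic illustration, not the paper's exact algorithm or utility measure: the `utility` callback, the toy modality names, and the toy scoring function are all hypothetical stand-ins (e.g., `utility(S)` could be the validation performance of a model trained on modality subset S).

```python
from typing import Callable, Iterable, List, Set


def greedy_modality_selection(
    modalities: Iterable[str],
    utility: Callable[[Set[str]], float],
    budget: int,
) -> List[str]:
    """Greedily pick up to `budget` modalities by largest marginal utility gain.

    For monotone, (approximately) submodular utilities, this greedy loop is the
    standard approach with constant-factor approximation guarantees.
    """
    selected: Set[str] = set()
    remaining = set(modalities)
    order: List[str] = []

    for _ in range(budget):
        base = utility(selected)
        best, best_gain = None, 0.0
        # Evaluate the marginal gain of each remaining modality.
        for m in remaining:
            gain = utility(selected | {m}) - base
            if gain > best_gain:
                best, best_gain = m, gain
        if best is None:
            break  # no modality adds positive utility; stop early
        selected.add(best)
        remaining.remove(best)
        order.append(best)

    return order


if __name__ == "__main__":
    # Toy, hypothetical "information" scores per modality.
    info = {"video": 3.0, "audio": 2.0, "text": 2.5}

    def toy_utility(subset: Set[str]) -> float:
        # Concave function of a modular sum => submodular, with diminishing returns.
        total = sum(info[m] for m in subset)
        return total - 0.1 * total ** 2

    print(greedy_modality_selection(info.keys(), toy_utility, budget=2))
```

In this toy run the routine first picks "video" (largest standalone gain) and then the modality with the best remaining marginal gain, mirroring how modality selection trades off informativeness against redundancy under a fixed budget.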

Original language: English (US)
Pages (from-to): 389-399
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 180
State: Published - 2022
Event: 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022 - Eindhoven, Netherlands
Duration: Aug 1, 2022 - Aug 5, 2022

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
