Secure Federated Learning Across Heterogeneous Cloud and High-Performance Computing Resources: A Case Study on Federated Fine-Tuning of LLaMA 2

Zilinghan Li, Shilan He, Pranshu Chaturvedi, Volodymyr Kindratenko, Eliu A. Huerta, Kibaek Kim, Ravi Madduri

Research output: Contribution to journal › Article › peer-review

Abstract

Federated learning enables multiple data owners to collaboratively train robust machine learning models without transferring large or sensitive local datasets, sharing only the parameters of the locally trained models. In this article, we elaborate on the design of our Advanced Privacy-Preserving Federated Learning (APPFL) framework, which streamlines end-to-end secure and reliable federated learning experiments across cloud computing facilities and high-performance computing resources by leveraging Globus Compute, a distributed function-as-a-service platform, and Amazon Web Services. We further demonstrate the use of APPFL in fine-tuning a LLaMA 2 7B model using several cloud resources and supercomputers.
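To make the workflow concrete, the Python sketch below shows how one round of federated averaging might be dispatched through Globus Compute's Executor interface. It is a minimal illustration under stated assumptions, not APPFL's own code: the endpoint IDs are placeholders, local_train stands in for whatever training routine each site actually registers, and toy NumPy arrays stand in for real model parameters.

    # A minimal sketch of one federated-averaging (FedAvg) round dispatched
    # through Globus Compute. Endpoint IDs and the local_train body are
    # hypothetical placeholders, not APPFL's actual API.
    import numpy as np
    from globus_compute_sdk import Executor

    def local_train(global_weights):
        # Runs remotely on a site's Globus Compute endpoint, so imports must
        # live inside the function. A real site would fine-tune the model on
        # its private data; here we merely perturb the weights.
        import numpy as np
        return [w + np.random.normal(0.0, 0.01, w.shape) for w in global_weights]

    ENDPOINTS = ["<site-A-endpoint-id>", "<site-B-endpoint-id>"]  # placeholders
    global_weights = [np.zeros((4, 4)), np.zeros(4)]  # toy model parameters

    # Ship the training function and current global weights to every site;
    # only parameters cross the wire, never the local datasets.
    executors = [Executor(endpoint_id=ep) for ep in ENDPOINTS]
    futures = [ex.submit(local_train, global_weights) for ex in executors]

    # FedAvg: elementwise average of the parameter lists returned by the sites.
    updates = [f.result() for f in futures]
    global_weights = [np.mean([u[i] for u in updates], axis=0)
                      for i in range(len(global_weights))]

    for ex in executors:
        ex.shutdown()

Repeating this loop round after round, the server never sees raw data; each site contributes only its updated parameters, which is the privacy property the abstract describes.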

Original language: English (US)
Pages (from-to): 52-58
Number of pages: 7
Journal: Computing in Science and Engineering
Volume: 26
Issue number: 3
DOIs
State: Published - 2024

ASJC Scopus subject areas

  • General Computer Science
  • General Engineering
