SplitLoRA: A Split Parameter-Efficient Fine-Tuning
Framework for Large Language Models



Abstract

The scalability of large language models (LLMs) has led to significant achievements in key domains. Although training demands ever more data, high-quality public datasets are expected to be exhausted within a few years. To address this, the federated learning (FL) LLM fine-tuning paradigm has been proposed to enable collaborative fine-tuning on distributed private data. However, the sheer size of LLMs poses significant challenges to democratizing this FL fine-tuning paradigm. To mitigate this, split learning (SL) has emerged as a promising solution: it offloads the primary training workload to a server through model partitioning. Nonetheless, research on the SL LLM fine-tuning paradigm is still in its early stages. To fill this gap, we propose SplitLoRA, the first SL LLM fine-tuning framework. Built on the split federated learning (SFL) framework, SplitLoRA combines the parallel training of FL with the model splitting of SL, significantly improving training efficiency. As the inaugural open-source benchmark for SL LLM fine-tuning, SplitLoRA provides a foundation for research aimed at advancing this field. Extensive simulations validate that SplitLoRA reaches target accuracy in significantly less time than state-of-the-art LLM fine-tuning frameworks.



TL;DR takeaways

SplitLoRA consists of three fundamental components:

Our SplitLoRA framework involves the following steps:
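To make the split training procedure concrete, here is a minimal, self-contained sketch of one SplitLoRA-style iteration: the client runs the lower sub-model with LoRA adapters, sends the cut-layer activations ("smashed data") to the server, the server completes the forward and backward pass, and the cut-layer gradient is returned so the client can finish backpropagation. The cut point, layer sizes, and learning rate below are toy assumptions for illustration, not the actual SplitLoRA configuration:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # frozen base weight plus a trainable low-rank update: W x + scale * B A x
    def __init__(self, d_in, d_out, r=4, alpha=8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)   # base model stays frozen
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# hypothetical split point: one block on the client, the rest on the server
client_model = nn.Sequential(LoRALinear(16, 16), nn.ReLU())
server_model = nn.Sequential(LoRALinear(16, 16), nn.ReLU(), nn.Linear(16, 2))

client_opt = torch.optim.SGD(
    [p for p in client_model.parameters() if p.requires_grad], lr=0.1)
server_opt = torch.optim.SGD(
    [p for p in server_model.parameters() if p.requires_grad], lr=0.1)

def split_training_step(x, y):
    # 1) client-side forward up to the cut layer; smashed data goes to the server
    smashed = client_model(x)
    server_in = smashed.detach().requires_grad_(True)
    # 2) server-side forward and backward on its sub-model
    loss = nn.functional.cross_entropy(server_model(server_in), y)
    client_opt.zero_grad()
    server_opt.zero_grad()
    loss.backward()
    server_opt.step()
    # 3) cut-layer gradient is sent back; client finishes backprop locally,
    #    updating only its LoRA parameters
    smashed.backward(server_in.grad)
    client_opt.step()
    return loss.item()

torch.manual_seed(0)
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
losses = [split_training_step(x, y) for _ in range(30)]
```

Because the base weights are frozen, only the small LoRA factors (and the server's head) are updated, which is what keeps client-side computation and parameter exchange lightweight.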


Key observations

We compare SplitLoRA with two canonical benchmarks:

Performance Evaluation

1. Perplexity (PPL) performance evaluation, where a lower PPL indicates better predictive performance.

[Figure: PPL comparison on GPT2-S and GPT2-M]

2. Converged accuracy on the E2E NLG challenge.

[Figure: converged accuracy on the E2E NLG challenge for GPT2-S and GPT2-M]

3. Convergence rate.

[Figure: convergence curves for GPT2-S and GPT2-M]

4. The number of trainable parameters.
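The trainable-parameter savings of LoRA-based fine-tuning are easy to quantify: a rank-r adapter on a d_out x d_in weight matrix trains only r * (d_in + d_out) parameters instead of d_out * d_in. The 768-dimensional projection and rank 4 below are illustrative values (768 is the GPT2-S hidden size), not the exact SplitLoRA setup:

```python
def lora_trainable_params(d_in, d_out, r):
    # low-rank factors A (r x d_in) and B (d_out x r) replace
    # a full d_out x d_in weight update
    return r * (d_in + d_out)

# example: a 768 x 768 attention projection adapted with rank r = 4
full = 768 * 768                              # 589,824 parameters
lora = lora_trainable_params(768, 768, r=4)   # 6,144 parameters
ratio = lora / full                           # roughly 1% of the full matrix
```

This is why only a small fraction of the model's parameters needs to be trained and exchanged during fine-tuning.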

For more detailed experimental results, please refer to our paper.
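For reference, the perplexity metric used in evaluation 1 above is the exponential of the average negative log-likelihood of the test tokens; lower means the model assigns higher probability to the data. A minimal computation:

```python
import math

def perplexity(token_log_probs):
    # PPL = exp( -(1/N) * sum_i log p(token_i | context) )
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)

# sanity check: assigning uniform probability 1/4 to every token gives PPL = 4,
# i.e. the model is as uncertain as a uniform choice among 4 options
lps = [math.log(0.25)] * 10
ppl = perplexity(lps)
```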

Future Directions for SplitLoRA






Website template edited from Colorful Colorization.