This survey explores Parameter-Efficient Fine-Tuning (PEFT) in the context of Foundation Models (FMs). PEFT is a cost-effective fine-tuning technique that minimizes the number of trainable parameters and the computational overhead while striving for optimal downstream task performance. FMs, such as ChatGPT, DALL-E, and LLaVA, specialize in language understanding, generative tasks, and multimodal tasks, and are trained on diverse datasets spanning text, images, and videos. The diversity of FMs guides various adaptation strategies for PEFT. This survey therefore aims to provide a comprehensive overview of PEFT techniques applied to diverse FMs and to address critical gaps in understanding the techniques, trends, and applications. We begin by detailing the development of FMs and PEFT. We then systematically review the key categories and core mechanisms of PEFT across diverse FMs to offer a comprehensive understanding of current trends. We also survey the most recent applications across various FMs to demonstrate the versatility of PEFT, shedding light on the integration of systematic PEFT methods with a range of FMs. Finally, we identify potential research and development directions for further improving PEFT. This survey provides a valuable resource for both newcomers and experts seeking to understand and harness the power of PEFT across FMs.
Fig. 1: An overview of trends in PEFT methods across various FMs (LLM, VFM, VLM, MFM, and VGM). The number of citations on Semantic Scholar serves as a trend indicator.
Fig. 2: Left: Versatile scenarios and applications in the era of FMs. Right: A detailed illustration of four common PEFT methods (Selective, Additive, Prompt, and Reparameterization PEFT).
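As a minimal illustration of the Selective category in Fig. 2, the sketch below (in the spirit of bias-only tuning such as BitFit; the helper name and selection rule are our own assumptions) unfreezes only a chosen subset of the existing parameters while leaving the rest of the backbone frozen:

```python
import torch.nn as nn

def select_bias_only(model: nn.Module) -> int:
    """Selective PEFT sketch: freeze all parameters except bias terms.

    Returns the number of parameters left trainable.
    """
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```

Other selective schemes differ mainly in the selection criterion (e.g., particular layers or specific weight subsets) rather than in adding new modules.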
Tab. 1: Overview of recent PEFT methods. Columns: Approach, Venue, Modality of FMs, Category of PEFT, SC (whether the structure of the FM is changed), Position (the position of the fine-tuned parameters), IE (inference efficiency), Addition of Parameters, and the percentage of Trainable Parameters. A "-" indicates that the paper does not report a clear result.
Fig. 3: Illustration of representative adapter PEFT across various FMs.
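To make the adapter idea in Fig. 3 concrete, here is a minimal PyTorch sketch of a bottleneck adapter (module and dimension names are illustrative assumptions, not the exact design of any specific method): a down-projection, a nonlinearity, and an up-projection wrapped in a residual connection, inserted into an otherwise frozen backbone.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Minimal bottleneck adapter: down-project -> nonlinearity -> up-project, plus residual."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()
        # Zero-initialize the up-projection so the frozen backbone is unchanged at the start.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))
```

During fine-tuning only the adapter parameters are updated, so the trainable-parameter budget is set by the bottleneck dimension rather than the backbone size.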
Fig. 4: Illustration of representative prompt PEFT across various FMs.
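As a hedged sketch of the prompt PEFT family in Fig. 4 (shapes and initialization here are illustrative assumptions, not the exact formulation of any single method), a small set of learnable soft prompt vectors is prepended to the frozen model's input embeddings, and only these vectors are trained:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable prompt vectors prepended to the token embeddings of a frozen model."""
    def __init__(self, num_prompts: int, hidden_dim: int):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, hidden_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden_dim)
        batch = token_embeds.size(0)
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompts, token_embeds], dim=1)
```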
Fig. 5: Illustration of representative groups of LoRA PEFT.
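For the reparameterization group in Fig. 5, the following sketch shows the core LoRA idea: the frozen pretrained weight is augmented with a trainable low-rank update scaled by alpha/r. The wrapper below is a simplified illustration, not the reference implementation of any particular library:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = base(x) + (alpha/r) * x A^T B^T."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained weight frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.t() @ self.lora_B.t())
```

After training, the low-rank update can be merged into the base weight, so no extra inference latency is introduced.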
Fig. 6: Prevailing PEFT in VGMs. A: Adapter tuning in diffusion models; B: LoRA tuning in diffusion models; C: Reward tuning in diffusion models.
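As a rough illustration of panel B in Fig. 6, LoRA modules are typically attached to the attention projections of the diffusion backbone. The helper below is a generic, hypothetical sketch (the module-name keywords follow common UNet naming conventions and may differ across implementations) that swaps matching linear layers for LoRA-wrapped versions such as the `LoRALinear` sketch above:

```python
import torch.nn as nn

def inject_lora(model: nn.Module, lora_factory, keywords=("to_q", "to_k", "to_v")):
    """Replace attention projection Linear layers with LoRA-wrapped versions.

    `lora_factory` takes an nn.Linear and returns a drop-in replacement
    (e.g., the LoRALinear sketch above); all other weights remain frozen.
    """
    for parent in model.modules():
        for child_name, child in list(parent.named_children()):
            if isinstance(child, nn.Linear) and any(k in child_name for k in keywords):
                setattr(parent, child_name, lora_factory(child))
    return model
```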
In conclusion, the integration of PEFT with FMs represents a promising avenue for efficient model adaptation across various tasks and domains. As highlighted in this survey, the rapid evolution of FMs and the active PEFT community underscore the importance of keeping abreast of technological trends to achieve optimal performance. By examining adaptation strategies such as Selective, Additive, Prompt, Reparameterization, and Hybrid PEFT across different model structures (e.g., LLM, VFM, VLM, MFM, and VGM), this survey offers insights into enhancing both efficiency and effectiveness. It emphasizes the need for a systematic understanding of PEFT techniques in the context of diverse FMs, paving the way for future advancements and applications in the field.
If you find our work helpful, please cite our paper:
@misc{zhang2025parameterefficientfinetuningfoundationmodels,
title={Parameter-Efficient Fine-Tuning for Foundation Models},
author={Dan Zhang and Tao Feng and Lilong Xue and Yuandong Wang and Yuxiao Dong and Jie Tang},
year={2025},
eprint={2501.13787},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.13787},
}