In-Context Meta LoRA Generation

Zhang, Chenyu; Sebe, Nicu; Tang, Hao; Zhao, Hao
2025-01-01

Abstract

Low-rank Adaptation (LoRA) has demonstrated remarkable capabilities for task-specific fine-tuning. However, in scenarios involving multiple tasks, training a separate LoRA model for each task is considerably inefficient in terms of storage and inference. Moreover, existing parameter generation methods fail to capture the correlations among these tasks, making multitask LoRA parameter generation challenging. To address these limitations, we propose In-Context Meta LoRA (ICM-LoRA), a novel approach that efficiently achieves task-specific customization of large language models (LLMs). Specifically, we use training data from all tasks to train a tailored generator, a Conditional Variational Autoencoder (CVAE), which takes task descriptions as input and produces task-aware LoRA weights as output. These weights are then merged with the LLM to create task-specialized models without additional fine-tuning. Furthermore, we use in-context meta-learning for knowledge enhancement and task mapping to capture the relationship between tasks and parameter distributions. As a result, our method generates LoRA parameters for diverse tasks more accurately than current parameter reconstruction methods and supports task-specific enhancement of LoRA parameters. At the same time, our generator occupies only 283 MB, about 1% of the storage required by the original LoRA weights. The code is available at https://github.com/YihuaJerry/ICM-LoRA.
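For orientation, the pipeline the abstract describes can be illustrated with a minimal PyTorch sketch: a CVAE conditioned on a task embedding reconstructs flattened LoRA weights during training and, at inference, samples task-aware weights from the prior, which are then merged into a frozen base weight. This is not the authors' implementation; the class names (LoRACVAE, merge_lora), layer sizes, dimensions, and the source of the task embedding are illustrative assumptions.

import torch
import torch.nn as nn

# Assumed dimensions: LoRA rank, layer in/out sizes, task-embedding size, latent size.
RANK, D_IN, D_OUT, TASK_DIM, LATENT = 8, 4096, 4096, 768, 256
LORA_NUMEL = RANK * (D_IN + D_OUT)  # flattened size of A (r x d_in) plus B (d_out x r)

class LoRACVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: (flattened LoRA weights, task embedding) -> latent Gaussian parameters.
        self.encoder = nn.Sequential(
            nn.Linear(LORA_NUMEL + TASK_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, 2 * LATENT),  # mean and log-variance
        )
        # Decoder: (latent sample, task embedding) -> flattened LoRA weights.
        self.decoder = nn.Sequential(
            nn.Linear(LATENT + TASK_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, LORA_NUMEL),
        )

    def forward(self, lora_flat, task_emb):
        mu, logvar = self.encoder(torch.cat([lora_flat, task_emb], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.decoder(torch.cat([z, task_emb], -1))
        return recon, mu, logvar  # train with reconstruction loss + KL divergence

    @torch.no_grad()
    def generate(self, task_emb):
        # Inference: sample z from the standard-normal prior, condition on the task.
        z = torch.randn(task_emb.shape[0], LATENT)
        return self.decoder(torch.cat([z, task_emb], -1))

def merge_lora(base_weight, lora_flat, alpha=16.0):
    # Standard LoRA merge: W' = W + (alpha / r) * B @ A, with no further fine-tuning.
    a = lora_flat[:, : RANK * D_IN].reshape(-1, RANK, D_IN)    # A: (r, d_in)
    b = lora_flat[:, RANK * D_IN :].reshape(-1, D_OUT, RANK)   # B: (d_out, r)
    return base_weight + (alpha / RANK) * (b[0] @ a[0])

In this sketch the task embedding could come from any text encoder applied to the task description; the paper's in-context meta-learning and task-mapping components are not reproduced here.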
2025
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence (IJCAI-25)
New York
International Joint Conferences on Artificial Intelligence
In-Context Meta LoRA Generation / Shao, Yihua; Yan, Minxi; Liu, Yang; Chen, Siyu; Chen, Wenjie; Long, Xinwei; Yan, Ziyang; Li, Lei; Zhang, Chenyu; Sebe, Nicu; Tang, Hao; Wang, Yan; Zhao, Hao; Wang, Mengzhu; Guo, Jingcai. - In: IJCAI. - ISSN 1045-0823. - (2025), pp. 6138-6146. (34th International Joint Conference on Artificial Intelligence, IJCAI 2025) [10.24963/ijcai.2025/683].
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/467778
Warning: the data displayed have not been validated by the university.
