LoRAtorio: An intrinsic approach to LoRA Skill Composition

📅 2025-08-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing methods struggle to efficiently compose an arbitrary number of heterogeneous LoRA adapters in open-world scenarios. This paper proposes LoRAtorio—a train-free, dynamic multi-LoRA composition framework for visual concept combination and controllable generation in text-to-image diffusion models. Its core contributions are: (1) a spatially aware weight matrix derived from latent-space noise prediction discrepancies; (2) a block-wise cosine similarity–driven weighted aggregation strategy; and (3) an extended classifier-free guidance mechanism to mitigate domain shift and enable dynamic adapter selection. Experiments demonstrate that LoRAtorio achieves state-of-the-art performance: up to a 1.3% improvement in ClipScore and a 72.43% win rate in GPT-4V pairwise evaluation, while maintaining compatibility across diverse latent diffusion architectures.

📝 Abstract
Low-Rank Adaptation (LoRA) has become a widely adopted technique in text-to-image diffusion models, enabling the personalisation of visual concepts such as characters, styles, and objects. However, existing approaches struggle to effectively compose multiple LoRA adapters, particularly in open-ended settings where the number and nature of required skills are not known in advance. In this work, we present LoRAtorio, a novel train-free framework for multi-LoRA composition that leverages intrinsic model behaviour. Our method is motivated by two key observations: (1) LoRA adapters trained on narrow domains produce denoised outputs that diverge from the base model, and (2) when operating out-of-distribution, LoRA outputs show behaviour closer to the base model than when conditioned in distribution. The balance between these two observations allows for exceptional performance in the single LoRA scenario, which nevertheless deteriorates when multiple LoRAs are loaded. Our method operates in the latent space by dividing it into spatial patches and computing cosine similarity between each patch's predicted noise and that of the base model. These similarities are used to construct a spatially-aware weight matrix, which guides a weighted aggregation of LoRA outputs. To address domain drift, we further propose a modification to classifier-free guidance that incorporates the base model's unconditional score into the composition. We extend this formulation to a dynamic module selection setting, enabling inference-time selection of relevant LoRA adapters from a large pool. LoRAtorio achieves state-of-the-art performance, showing up to a 1.3% improvement in ClipScore and a 72.43% win rate in GPT-4V pairwise evaluations, and generalises effectively to multiple latent diffusion models.
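The core mechanism from the abstract — scoring each LoRA per spatial patch by how far its predicted noise diverges from the base model's, then using those scores to weight the aggregation — can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's implementation: the function name, the patch size, the use of 1 − cosine similarity as the weight, and the simple sum-normalisation are all assumptions.

```python
import numpy as np

def compose_lora_noise(eps_base, eps_loras, patch=8):
    """Illustrative sketch of spatially-aware multi-LoRA composition.

    For each LoRA adapter, score every spatial patch of its predicted
    noise by divergence from the base model (1 - cosine similarity),
    normalise the scores across adapters, and use them as weights in a
    spatially-varying aggregation of the LoRA noise predictions.

    eps_base:  base model noise prediction, shape (C, H, W)
    eps_loras: list of per-LoRA noise predictions, each (C, H, W)
    """
    C, H, W = eps_base.shape
    n_py, n_px = H // patch, W // patch
    weights = np.zeros((len(eps_loras), n_py, n_px))

    for i, eps in enumerate(eps_loras):
        for py in range(n_py):
            for px in range(n_px):
                sl = (slice(None),
                      slice(py * patch, (py + 1) * patch),
                      slice(px * patch, (px + 1) * patch))
                a, b = eps[sl].ravel(), eps_base[sl].ravel()
                cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
                weights[i, py, px] = 1.0 - cos  # divergence from base model

    # Normalise so per-patch weights across adapters sum to one (assumed;
    # the paper's exact normalisation, e.g. a softmax, may differ).
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8

    out = np.zeros_like(eps_base)
    for i, eps in enumerate(eps_loras):
        # Upsample the patch-level weight map back to H x W.
        w_full = np.kron(weights[i], np.ones((patch, patch)))
        out += w_full[None] * eps
    return out
```

With a single adapter, the normalised weights collapse to one everywhere, recovering the single-LoRA behaviour the abstract notes already works well.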
Problem

Research questions and friction points this paper is trying to address.

Effective composition of multiple LoRA adapters in diffusion models
Addressing domain drift in LoRA outputs during multi-adapter use
Dynamic selection of relevant LoRA adapters from a large pool
Innovation

Methods, ideas, or system contributions that make the work stand out.

Train-free multi-LoRA composition framework
Spatial patch cosine similarity weighting
Dynamic module selection for LoRA adapters
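The last two contributions can likewise be sketched in NumPy under the same assumptions as above: adapters whose noise predictions diverge more from the base model are treated as in-distribution (hence relevant) for selection, and guidance is anchored on the base model's unconditional score to counter domain drift. Both function names and the top-k ranking heuristic are hypothetical illustrations, not the paper's exact formulation.

```python
import numpy as np

def select_top_k(eps_base, eps_loras, k=2):
    """Sketch of inference-time dynamic module selection: rank candidate
    LoRAs by divergence of their noise prediction from the base model
    (high divergence ~ conditioned in-distribution) and keep the top-k."""
    scores = []
    for eps in eps_loras:
        a, b = eps.ravel(), eps_base.ravel()
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
        scores.append(1.0 - cos)
    return sorted(np.argsort(scores)[-k:].tolist())

def guided_noise(eps_base_uncond, eps_cond_composed, scale=7.5):
    """Sketch of the modified classifier-free guidance: guide the composed
    conditional prediction against the *base* model's unconditional score,
    rather than the adapted model's, to mitigate domain shift."""
    return eps_base_uncond + scale * (eps_cond_composed - eps_base_uncond)
```

At `scale=1.0` the guidance reduces to the composed conditional prediction, matching standard classifier-free guidance behaviour.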