Functionality-Oriented LLM Merging on the Fisher–Rao Manifold

📅 2026-03-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing large language model merging techniques, which typically rely on Euclidean linear operations in parameter space and often suffer from representation collapse—especially when merging highly divergent models—due to the absence of a unified optimization objective. The paper introduces the first information-geometric framework for model merging, formulating it as a weighted Karcher mean problem on the Fisher–Rao manifold. By minimizing the KL divergence between predictive distributions, the approach enables functionally aligned fusion of multiple expert models. The proposed method naturally supports stable merging of an arbitrary number of heterogeneous models while avoiding representation collapse. An efficient fixed-point iteration algorithm is developed using a lightweight spherical proxy model. Experiments demonstrate that the method significantly outperforms current baselines across diverse benchmarks and collapse diagnostics, maintaining high accuracy even as the number and heterogeneity of merged models increase.

📝 Abstract
Weight-space merging aims to combine multiple fine-tuned LLMs into a single model without retraining, yet most existing approaches remain fundamentally parameter-space heuristics. This creates three practical limitations. First, linear averaging, task vectors, and related rules operate on Euclidean coordinates, even though the desired goal is to merge functionality, i.e., predictive behaviors across tasks. Second, when the source checkpoints are farther apart or more heterogeneous, Euclidean blends often trigger representation collapse, manifested as activation variance shrinkage and effective-rank degradation, which sharply degrades accuracy. Third, many geometry-inspired methods are most natural for two-model interpolation and do not extend cleanly to merging N>2 experts with a principled objective. We address these issues by formulating model merging as computing a weighted Karcher mean on the Fisher–Rao manifold, which is locally equivalent to minimizing a KL-based function distance between predictive distributions. We derive a practical fixed-point algorithm using a lightweight spherical proxy that preserves norms and generalizes directly to multi-expert merging. Across various benchmarks and collapse diagnostics, our method remains stable as the number and heterogeneity of merged models increase, consistently outperforming prior baselines.
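The abstract describes computing a weighted Karcher (Fréchet) mean by fixed-point iteration on a spherical proxy. As an illustrative sketch only (not the paper's algorithm; the function names and the plain unit-sphere setting are assumptions for exposition), the generic fixed-point scheme on a sphere repeatedly maps the expert points into the tangent space at the current estimate, averages them with the merge weights, and maps back along the geodesic:

```python
import numpy as np

def sphere_log(x, p):
    """Log map at x on the unit sphere: tangent vector at x pointing toward p."""
    c = np.clip(np.dot(x, p), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(x)
    return (theta / np.sin(theta)) * (p - c * x)

def sphere_exp(x, v):
    """Exp map at x: follow the geodesic from x in tangent direction v."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return np.cos(n) * x + (np.sin(n) / n) * v

def weighted_karcher_mean(points, weights, iters=100, tol=1e-10):
    """Weighted Karcher mean of unit vectors via fixed-point iteration."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize merge weights
    x = points[0] / np.linalg.norm(points[0])  # initialize at the first expert
    for _ in range(iters):
        # Tangent-space weighted average of all experts at the current estimate
        v = sum(wi * sphere_log(x, p) for wi, p in zip(w, points))
        x_new = sphere_exp(x, v)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Because the update averages in the tangent space and retracts back onto the sphere, the result stays on the manifold (norm-preserving) and accepts any number of experts, which mirrors the multi-expert stability the paper claims for its proxy construction.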
Problem

Research questions and friction points this paper is trying to address.

LLM merging
functionality merging
representation collapse
Fisher–Rao manifold
multi-expert merging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fisher–Rao manifold
Karcher mean
LLM merging
functionality-oriented fusion
multi-expert merging