AI Summary
To address the challenges of federated fine-tuning of large vision-language models (VLMs) on resource-constrained clients, this paper proposes F³OCUS: a novel framework that introduces the hierarchical Neural Tangent Kernel (NTK) principal eigenvalue magnitude to quantify layer importance and explicitly models inter-client layer-wise diversity, formulating a data-agnostic, multi-objective co-optimization problem. It jointly optimizes importance and diversity using five metaheuristic algorithms and integrates parameter-efficient fine-tuning (PEFT) for lightweight federated adaptation. Contributions include: (1) a new layer importance metric grounded in NTK spectral analysis; (2) MedVQA-FL, the first federated benchmark for medical visual question answering; and (3) comprehensive evaluation across six task categories, 58 medical imaging datasets, and four VLM architectures, with over 10,000 client-level experiments demonstrating significant improvements in accuracy and generalization while reducing communication and computational overhead.
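The layer-importance metric above can be illustrated with a small sketch. This is not the paper's implementation; the Jacobian matrices below are random placeholders standing in for autograd-computed per-layer Jacobians of the model output on a client batch, and the layer sizes are hypothetical. The idea shown is the one the summary names: score each layer by the principal eigenvalue of its empirical layer-wise NTK and fine-tune only the top-ranked layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-layer Jacobians of the model output w.r.t. each
# layer's parameters, evaluated on a small client batch. In practice
# these come from autograd; shape: (batch_size, n_params_in_layer).
batch_size = 16
layer_param_counts = [64, 256, 1024, 32]  # hypothetical layer sizes
jacobians = [rng.standard_normal((batch_size, p)) / np.sqrt(p)
             for p in layer_param_counts]

def layer_importance(J):
    """Principal eigenvalue of the empirical layer-wise NTK K = J J^T."""
    K = J @ J.T                               # (batch, batch) kernel matrix
    return float(np.linalg.eigvalsh(K)[-1])   # largest eigenvalue

scores = np.array([layer_importance(J) for J in jacobians])
k = 2
selected = np.argsort(scores)[::-1][:k]       # top-k layers to fine-tune
print(scores, selected)
```

Because each NTK is positive semi-definite, `eigvalsh` is the appropriate (symmetric) eigensolver, and all importance scores are non-negative.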
Abstract
Effective training of large Vision-Language Models (VLMs) on resource-constrained client devices in Federated Learning (FL) requires the use of parameter-efficient fine-tuning (PEFT) strategies. To this end, we demonstrate the impact of two factors, *viz.*, a client-specific layer importance score that selects the most important VLM layers for fine-tuning and an inter-client layer diversity score that encourages diverse layer selection across clients, on optimal VLM layer selection. We first theoretically motivate and leverage the principal eigenvalue magnitude of layerwise Neural Tangent Kernels and show its effectiveness as a client-specific layer importance score. Next, we propose a novel layer updating strategy dubbed F³OCUS that jointly optimizes the layer importance and diversity factors by employing a data-free, multi-objective, meta-heuristic optimization on the server. We explore 5 different meta-heuristic algorithms and compare their effectiveness for selecting model layers and adapter layers towards PEFT-FL. Furthermore, we release a new MedVQA-FL dataset comprising 707,962 VQA triplets across 9 modality-specific clients and use it to train and evaluate our method. Overall, we conduct more than 10,000 client-level experiments on 6 Vision-Language FL task settings involving 58 medical image datasets and 4 different VLM architectures of varying sizes to demonstrate the effectiveness of the proposed method.
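The server-side co-optimization of importance and diversity can be sketched as follows. This is a deliberately simplified greedy stand-in, not one of the paper's five meta-heuristic algorithms: the per-client importance matrix is random placeholder data (in the method it would hold NTK principal eigenvalues), and the diversity weight `lam` is an assumed hyperparameter. It only illustrates the trade-off the abstract describes: each client favors its most important layers while a penalty spreads layer coverage across clients.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-client layer importance scores (rows = clients,
# cols = layers); in the method these would be NTK-based scores.
n_clients, n_layers, k = 3, 8, 3
importance = rng.random((n_clients, n_layers))

lam = 0.5                     # assumed weight on the diversity term
counts = np.zeros(n_layers)   # how often each layer was already chosen
selection = {}
for c in range(n_clients):
    # Penalize layers that earlier clients already picked, so that
    # selection stays important per client yet diverse across clients.
    utility = importance[c] - lam * counts
    chosen = np.argsort(utility)[::-1][:k]
    selection[c] = sorted(chosen.tolist())
    counts[chosen] += 1
print(selection)
```

A meta-heuristic (e.g., an evolutionary multi-objective search) would instead explore many candidate client-to-layer assignments and score each on both objectives jointly, rather than committing greedily client by client.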