Learning on Model Weights using Tree Experts

📅 2024-10-17
📈 Citations: 1
Influential: 0
🤖 AI Summary
Inferring the functionality of publicly available but undocumented model weights is challenging due to pervasive nuisance variation—semantically irrelevant perturbations in the weight space. To address this, the paper proposes ProbeX, the first theory-driven probing framework designed to learn from the weights of a single hidden model layer, grounded in the structural properties of Model Trees. Exploiting the low nuisance variation within model families organized as trees, ProbeX integrates a linear classifier with a hierarchical expert mechanism to map weights efficiently from weight space to a language embedding space. Experiments show that ProbeX significantly outperforms existing baselines at predicting the categories in a model's training dataset from weights alone. Notably, it achieves the first zero-shot, language-space semantic classification of Stable Diffusion weights, suggesting a new paradigm for functional reverse-engineering of undocumented models.

📝 Abstract
The increasing availability of public models begs the question: can we train neural networks that use other networks as input? Such models allow us to study different aspects of a given neural network, for example, determining the categories in a model's training dataset. However, machine learning on model weights is challenging as they often exhibit significant variation unrelated to the models' semantic properties (nuisance variation). Here, we identify a key property of real-world models: most public models belong to a small set of Model Trees, where all models within a tree are fine-tuned from a common ancestor (e.g., a foundation model). Importantly, we find that within each tree there is less nuisance variation between models. Concretely, while learning across Model Trees requires complex architectures, even a linear classifier trained on a single model layer often works within trees. While effective, these linear classifiers are computationally expensive, especially when dealing with larger models that have many parameters. To address this, we introduce Probing Experts (ProbeX), a theoretically motivated and lightweight method. Notably, ProbeX is the first probing method specifically designed to learn from the weights of a single hidden model layer. We demonstrate the effectiveness of ProbeX by predicting the categories in a model's training dataset based only on its weights. Excitingly, ProbeX can also map the weights of Stable Diffusion into a shared weight-language embedding space, enabling zero-shot model classification.
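The abstract's key observation is that within a single Model Tree, nuisance variation is low enough that even a linear classifier on one layer's flattened weights can recover semantic properties such as training-set categories. A minimal sketch of that idea, using synthetic weight vectors rather than real model weights (the data-generation setup and ridge-regression classifier here are illustrative assumptions, not the paper's exact protocol):

```python
import numpy as np

# Simulate models in one Model Tree: each is the common ancestor's weights
# plus a class-dependent fine-tuning shift plus small nuisance noise.
rng = np.random.default_rng(0)
n_models, d, n_classes = 200, 512, 4

ancestor = rng.normal(size=d)                       # shared ancestor weights
labels = rng.integers(0, n_classes, size=n_models)  # training-set category
class_dirs = rng.normal(size=(n_classes, d))        # class-specific shifts
X = ancestor + class_dirs[labels] + 0.1 * rng.normal(size=(n_models, d))

# Linear classifier on the flattened layer weights:
# closed-form ridge regression onto one-hot class targets.
Y = np.eye(n_classes)[labels]
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(d), X.T @ Y)

pred = (X @ W).argmax(axis=1)
train_acc = (pred == labels).mean()
print(f"train accuracy: {train_acc:.2f}")
```

Because the class shift dominates the nuisance noise within the tree, the linear probe separates the categories easily; across different trees, the same linear model would not suffice, which motivates the heavier architectures the abstract mentions.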
Problem

Research questions and friction points this paper is trying to address.

Identifying model functionality from undocumented weights
Reducing nuisance variation within Model Trees
Enabling lightweight, layer-specific probing for model classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning from model weights using linear classifiers
Introducing Probing Experts (ProbeX) method
Mapping weights to text for zero-shot classification
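The last point, mapping weights into a shared weight-language embedding space for zero-shot classification, amounts to comparing a model's weight embedding against text embeddings of candidate labels. A hypothetical sketch with synthetic placeholder embeddings (the vectors and class names below are invented for illustration, not real ProbeX or text-encoder outputs):

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Placeholder text embeddings for candidate class names.
class_names = ["dogs", "cars", "flowers"]
text_embs = {name: rng.normal(size=64) for name in class_names}

# Pretend a probe mapped some model's weights near the "cars" text embedding.
weight_emb = text_embs["cars"] + 0.05 * rng.normal(size=64)

# Zero-shot classification: pick the class whose text embedding is closest.
pred = max(class_names, key=lambda n: cosine(weight_emb, text_embs[n]))
print(pred)
```

No classifier is trained on the candidate labels themselves; new classes can be added at inference time just by embedding their names, which is what makes the scheme zero-shot.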
Eliahu Horwitz
School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
Bar Cavia
School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
Jonathan Kahana
School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
Yedid Hoshen
The Hebrew University of Jerusalem
Deep Learning · AI · Computer Vision