Circuit Compositions: Exploring Modular Structures in Transformer-Based Language Models

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work investigates the modular structure and compositional mechanisms of reusable computational subgraphs ("circuits") in Transformer language models. It studies ten string-editing tasks grounded in a probabilistic context-free grammar and applies a mechanistic interpretability framework that (1) identifies minimal task-specific functional circuits and (2) systematically analyzes their functional similarity and composability across tasks via cross-task comparison, targeted perturbation experiments, and set-theoretic operations (e.g., intersection, union). The key findings are: first, empirical evidence that functionally similar circuits exhibit significant node overlap and cross-task faithfulness; second, a demonstration that distinct circuits can be reused and composed via set operations to support more complex capabilities. These results point to a modular circuit-level organization in language models and support a circuit-centric approach to model decomposition, functional attribution, and controllable editing.

πŸ“ Abstract
A fundamental question in interpretability research is to what extent neural networks, particularly language models, implement reusable functions through subnetworks that can be composed to perform more complex tasks. Recent advances in mechanistic interpretability have made progress in identifying *circuits*, which represent the minimal computational subgraphs responsible for a model's behavior on specific tasks. However, most studies focus on identifying circuits for individual tasks without investigating how functionally similar circuits *relate* to each other. To address this gap, we study the modularity of neural networks by analyzing circuits for highly compositional subtasks within a transformer-based language model. Specifically, given a probabilistic context-free grammar, we identify and compare circuits responsible for ten modular string-edit operations. Our results indicate that functionally similar circuits exhibit both notable node overlap and cross-task faithfulness. Moreover, we demonstrate that the circuits identified can be reused and combined through set operations to represent more complex functional model capabilities.
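The set operations described in the abstract can be sketched concretely. In the minimal sketch below, a circuit is represented as a set of (layer, head) component pairs; this representation and the intersection-over-union overlap metric are illustrative assumptions, not the paper's exact formulation, and the toy circuits are made up for demonstration.

```python
# Illustrative sketch: circuits as sets of (layer, head) components.
# The node representation and overlap metric are assumptions for demonstration,
# not the exact method used in the paper.

def node_overlap(circuit_a: set, circuit_b: set) -> float:
    """Intersection-over-union of two circuits' node sets."""
    union = circuit_a | circuit_b
    if not union:
        return 0.0
    return len(circuit_a & circuit_b) / len(union)

# Two hypothetical circuits for related string-edit subtasks.
copy_circuit = {(0, 3), (1, 5), (2, 1), (3, 7)}
swap_circuit = {(1, 5), (2, 1), (3, 2), (4, 0)}

# Functionally similar subtasks would show notable node overlap.
overlap = node_overlap(copy_circuit, swap_circuit)  # shared nodes / all nodes

# Composing circuits for a more complex capability via set union.
composed = copy_circuit | swap_circuit
```

In a real experiment, the composed node set would then be evaluated for faithfulness: run the model with only the union circuit's components active (others ablated) and check that task performance is preserved.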
Problem

Research questions and friction points this paper is trying to address.

Exploring modular structures in transformer language models
Identifying reusable subnetwork functions
Analyzing circuit compositionality across tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based language models
Mechanistic interpretability circuits
Modular string-edit operations