Explaining Datasets in Words: Statistical Models with Natural Language Parameters

📅 2024-09-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional statistical models often have high-dimensional, semantically opaque parameters that are hard to interpret. To address this, the paper parameterizes statistical models (clustering, time-series, and classification models) directly with natural-language predicates, e.g., "discusses COVID". A model-agnostic algorithm learns these predicate parameters by optimizing continuous relaxations with gradient descent and then discretizing them by prompting language models. The framework applies to both textual and visual data and can be steered to focus on specific properties. Evaluated on taxonomizing user chat dialogues, finding categories where one language model outperforms another, clustering math problems by subarea, and explaining visual features in memorable images, it produces fine-grained, semantically aligned explanations of concepts that classical methods such as n-gram analysis struggle to capture.

📝 Abstract
To make sense of massive data, we often fit simplified models and then interpret the parameters; for example, we cluster the text embeddings and then interpret the mean parameters of each cluster. However, these parameters are often high-dimensional and hard to interpret. To make model parameters directly interpretable, we introduce a family of statistical models -- including clustering, time series, and classification models -- parameterized by natural language predicates. For example, a cluster of text about COVID could be parameterized by the predicate "discusses COVID". To learn these statistical models effectively, we develop a model-agnostic algorithm that optimizes continuous relaxations of predicate parameters with gradient descent and discretizes them by prompting language models (LMs). Finally, we apply our framework to a wide range of problems: taxonomizing user chat dialogues, characterizing how they evolve across time, finding categories where one language model is better than the other, clustering math problems based on subareas, and explaining visual features in memorable images. Our framework is highly versatile, applicable to both textual and visual domains, can be easily steered to focus on specific properties (e.g. subareas), and explains sophisticated concepts that classical methods (e.g. n-gram analysis) struggle to produce.
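The learning procedure described in the abstract, relaxing a discrete predicate into a continuous parameter, fitting it by gradient descent, and then snapping it back to a natural-language predicate, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the document embeddings are random stand-ins, the candidate predicates and their denotations are hypothetical, and the discretization step (which the paper performs by prompting an LM) is replaced here by matching against that hand-written candidate list.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for text embeddings: 20 documents in two clusters.
X = np.vstack([rng.normal(+1.0, 0.3, size=(10, 4)),
               rng.normal(-1.0, 0.3, size=(10, 4))])
y = np.array([1] * 10 + [0] * 10)  # toy cluster assignment

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Continuous relaxation of a predicate: a dense vector w whose soft
# denotation on a document x is sigmoid(w . x). Fit by gradient descent
# on a logistic loss against the cluster assignment.
w = np.zeros(4)
lr = 0.5
for _ in range(200):
    p = sigmoid(X @ w)
    w -= lr * (X.T @ (p - y) / len(y))  # logistic-loss gradient step

# Discretization: in the paper this is done by prompting an LM; here we
# simply pick the hypothetical candidate predicate whose (hand-coded)
# denotation best matches the relaxed predicate's soft denotation.
candidates = {
    "discusses COVID": (X[:, 0] > 0).astype(float),   # hypothetical denotation
    "asks for a recipe": (X[:, 1] < -2).astype(float),  # hypothetical denotation
}
soft = sigmoid(X @ w)
best = min(candidates, key=lambda k: np.abs(candidates[k] - soft).sum())
print(best)
```

The key design point the sketch mirrors is that gradient descent happens entirely in the continuous space, and the mapping back to discrete natural language is delegated to a separate step, which keeps the algorithm model-agnostic.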
Problem

Research questions and friction points this paper is trying to address.

Statistical Model Interpretability
Natural Language Description
Complex Data Analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Statistical Methods
Model-Agnostic Optimization Algorithm
Language Models