MADIL: An MDL-based Framework for Efficient Program Synthesis in the ARC Benchmark

📅 2025-05-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses few-shot abstract reasoning on the ARC benchmark, where models must generalize from minimal examples without prior exposure to task semantics. Method: We propose an efficient program synthesis framework grounded in the Minimum Description Length (MDL) principle, integrating MDL throughout the synthesis pipeline via symbolic pattern decomposition, structured task decomposition, and constrained search over a domain-specific program space, enabling causal pattern discovery without large-scale pretraining. Contribution/Results: Our approach reduces computational cost by two orders of magnitude compared to LLM-based methods and achieves 7% accuracy on the ArcPrize 2024 evaluation, outperforming comparably sized models. It offers strong interpretability through explicit program traces, high data efficiency (one or a few examples suffice), and zero-shot transfer potential. By grounding induction in formal, verifiable program synthesis, the framework establishes a new paradigm for few-shot inductive learning that is both traceable and empirically falsifiable.
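The core idea, scoring candidate programs by a two-part description length L(model) + L(data | model), can be sketched as follows. This is a hypothetical illustration, not the authors' code: `program_cost`, the flat-list grid encoding, and the per-cell residual charge are all simplifying assumptions.

```python
# Hypothetical sketch of MDL-guided program selection (not MADIL's actual code).
# Assumption: each candidate is a callable plus a symbol cost, and unexplained
# training cells are charged one unit each as residual description length.

def description_length(program_cost, train_pairs, program):
    """Two-part MDL score: L(model) + L(data | model)."""
    residual = 0
    for inp, expected in train_pairs:
        predicted = program(inp)
        # Charge one unit per cell the program fails to explain.
        residual += sum(p != e for p, e in zip(predicted, expected))
    return program_cost + residual

def select_program(candidates, train_pairs):
    """Pick the candidate minimizing total description length."""
    return min(
        candidates,
        key=lambda c: description_length(c["cost"], train_pairs, c["fn"]),
    )

# Toy example: grids flattened to lists; two candidate transformations.
train = [([1, 0, 1], [2, 0, 2]), ([0, 1, 0], [0, 2, 0])]
candidates = [
    {"name": "identity", "cost": 1, "fn": lambda g: g},
    {"name": "double",   "cost": 2, "fn": lambda g: [2 * x for x in g]},
]
best = select_program(candidates, train)
```

Here "double" wins despite its higher model cost, because it leaves no residual on the training pairs; this trade-off between model complexity and unexplained data is what the MDL principle formalizes.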

📝 Abstract
Artificial Intelligence (AI) has achieved remarkable success in specialized tasks but struggles with efficient skill acquisition and generalization. The Abstraction and Reasoning Corpus (ARC) benchmark evaluates intelligence based on minimal training requirements. While Large Language Models (LLMs) have recently improved ARC performance, they rely on extensive pre-training and high computational costs. We introduce MADIL (MDL-based AI), a novel approach leveraging the Minimum Description Length (MDL) principle for efficient inductive learning. MADIL performs pattern-based decomposition, enabling structured generalization. While its performance (7% at ArcPrize 2024) remains below LLM-based methods, it offers greater efficiency and interpretability. This paper details MADIL's methodology, its application to ARC, and experimental evaluations.
Problem

Research questions and friction points this paper is trying to address.

Efficient skill acquisition and generalization in AI
Reducing computational costs for ARC benchmark tasks
Improving interpretability in program synthesis methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses MDL principle for inductive learning
Performs pattern-based decomposition for generalization
Offers greater computational efficiency and interpretability than LLM-based methods
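The pattern-based decomposition mentioned above can be illustrated with a minimal sketch, again under assumed simplifications not taken from the paper: a grid is flattened to a list and described as a dominant background color plus a short list of deviating cells, which is a shorter description than the raw grid whenever most cells share one color.

```python
# Hedged illustration of pattern-based decomposition (names hypothetical).
from collections import Counter

def decompose(grid):
    """Split a flat grid into a background model and residual cells."""
    background = Counter(grid).most_common(1)[0][0]
    residual = [(i, c) for i, c in enumerate(grid) if c != background]
    return background, residual

def reconstruct(background, residual, length):
    """Invert decompose(): rebuild the grid from model plus residual."""
    grid = [background] * length
    for i, c in residual:
        grid[i] = c
    return grid

grid = [0, 0, 3, 0, 0, 5, 0]
bg, res = decompose(grid)
```

Because decomposition is lossless (reconstruct inverts it), the description length of the (background, residual) pair can be compared directly against the raw encoding, which is the kind of comparison an MDL-based learner uses to decide which structural description to keep.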