Human-Inspired Learning for Large Language Models via Obvious Record and Maximum-Entropy Method Discovery

📅 2025-12-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) excel at common pattern recognition but exhibit poor generalization to rare, low-resource, or out-of-distribution scenarios—e.g., niche hardware failures or anomalous IoT behaviors—and rely on implicit, non-auditable, and optimization-resistant method memory. Method: We propose a human-inspired learning framework featuring (i) explicit "Obvious Record" (salient recording) of cause-result pairs to construct symbolic method memory, and (ii) "maximum-entropy method discovery", which actively preserves semantically diverse solutions within the policy space. The framework integrates causal retrieval augmentation, semantic similarity modeling, and a formal method diversity metric. Results: Evaluated on a benchmark of 60 semantically diverse question-solution pairs, our approach improves zero-shot problem coverage by 37%, increases intra-method solution diversity by 2.1×, and significantly enhances traceability, evolvability, and human-like method acquisition capability.

📝 Abstract
Large Language Models (LLMs) excel at extracting common patterns from large-scale corpora, yet they struggle with rare, low-resource, or previously unseen scenarios, such as niche hardware deployment issues or irregular IoT device behaviors, because such cases are sparsely represented in training data. Moreover, LLMs rely primarily on implicit parametric memory, which limits their ability to explicitly acquire, recall, and refine methods, causing them to behave predominantly as intuition-driven predictors rather than deliberate, method-oriented learners. Inspired by how humans learn from rare experiences, this paper proposes a human-inspired learning framework that integrates two complementary mechanisms. The first, Obvious Record, explicitly stores cause-result (or question-solution) relationships as symbolic memory, enabling persistent learning even from single or infrequent encounters. The second, Maximum-Entropy Method Discovery, prioritizes and preserves methods with high semantic dissimilarity, allowing the system to capture diverse and underrepresented strategies that are typically overlooked by next-token prediction. Verification on a benchmark of 60 semantically diverse question-solution pairs demonstrates that the proposed entropy-guided approach achieves stronger coverage of unseen questions and significantly greater internal diversity than a random baseline, confirming its effectiveness in discovering more generalizable and human-inspired methods.
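The Obvious Record mechanism above can be illustrated with a minimal sketch. This is a hypothetical implementation, not the paper's code: the `ObviousRecord` class, its methods, and the use of Jaccard token overlap as a stand-in for the paper's semantic similarity model are all assumptions made for clarity.

```python
# Hypothetical sketch of an "Obvious Record" symbolic memory.
# Each entry pairs a cause/question with its result/solution; retrieval
# uses simple token-overlap (Jaccard) similarity as a stand-in for a
# learned semantic similarity model.

def _tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    ta, tb = _tokens(a), _tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

class ObviousRecord:
    def __init__(self):
        self.entries = []  # list of (cause, result) pairs

    def record(self, cause, result):
        # A single encounter suffices to persist the pair.
        self.entries.append((cause, result))

    def recall(self, query, threshold=0.3):
        # Return the result whose recorded cause best matches the query,
        # or None if nothing is similar enough.
        best = max(self.entries, key=lambda e: jaccard(query, e[0]), default=None)
        if best and jaccard(query, best[0]) >= threshold:
            return best[1]
        return None

memory = ObviousRecord()
memory.record("sensor reports constant zero after firmware update",
              "roll back firmware and reflash calibration table")
print(memory.recall("sensor stuck at zero after update"))
```

The key design point is that the pair is stored symbolically after one encounter, so recall does not depend on the scenario being frequent in training data.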
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle with rare or unseen scenarios due to sparse training data.
LLMs lack explicit method acquisition, recall, and refinement capabilities.
The paper aims to enhance LLMs' learning from infrequent experiences.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Obvious Record stores cause-result relationships symbolically.
Maximum-Entropy Method Discovery prioritizes high-dissimilarity strategies.
Framework integrates symbolic memory with entropy-guided method selection.
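The entropy-guided selection above can be sketched as a greedy farthest-point procedure: repeatedly keep the candidate method that is least similar to everything already kept, so the retained set spans the method space rather than clustering around common strategies. This is an assumed variant for illustration; the paper's exact diversity metric and similarity model are not reproduced, and token-overlap dissimilarity stands in for semantic dissimilarity.

```python
# Hypothetical sketch of maximum-entropy method discovery as greedy
# farthest-point selection over token-overlap dissimilarity.

def _tokens(text):
    return set(text.lower().split())

def dissimilarity(a, b):
    ta, tb = _tokens(a), _tokens(b)
    return 1.0 - (len(ta & tb) / len(ta | tb) if ta | tb else 0.0)

def select_diverse(methods, k):
    # Seed with the first method, then greedily add the candidate whose
    # *nearest* already-selected neighbor is farthest away.
    selected = [methods[0]]
    pool = list(methods[1:])
    while pool and len(selected) < k:
        best = max(pool, key=lambda m: min(dissimilarity(m, s) for s in selected))
        selected.append(best)
        pool.remove(best)
    return selected

methods = [
    "restart the service and check logs",
    "restart the service and check error logs",
    "bisect recent config changes to isolate the fault",
    "replace the suspect hardware module",
]
print(select_diverse(methods, 2))
```

Note how the near-duplicate "check error logs" variant is skipped in favor of a semantically distant strategy, which is the behavior the maximum-entropy criterion is meant to produce.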