🤖 AI Summary
This work addresses the lack of quantified risk likelihood in SEC risk disclosures, which stems from the absence of large-scale supervised data linking disclosed risks to actual outcomes. To overcome this, the authors propose the Foresight Learning paradigm, which constructs temporally bounded risk queries from sequential SEC filings via an automated pipeline and leverages subsequent disclosures to auto-annotate outcomes—eliminating the need for manual labeling or external data. By integrating probability calibration techniques, the approach enables efficient training and deployment of compact large language models on a single GPU. The resulting model significantly outperforms pretrained baselines, heuristic methods, and even state-of-the-art general-purpose models such as GPT-5 in both predictive accuracy and calibration of risk probabilities.
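The summary describes constructing temporally bounded risk queries from a filing and auto-annotating their outcomes against subsequent disclosures. A minimal sketch of what such a query and resolver could look like is below; every name here, and the simple keyword-matching heuristic, is a hypothetical stand-in for the paper's automated pipeline, not its actual implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskQuery:
    ticker: str          # issuer the filing belongs to
    risk_text: str       # excerpt from the Risk Factors section
    asked_on: date       # filing date the query is grounded at
    horizon_end: date    # "will this risk materialize by this date?"

def auto_label(query: RiskQuery,
               later_filings: list[tuple[date, str]],
               keywords: list[str]) -> int:
    """Resolve a risk query against disclosures published within its horizon.

    Returns 1 if any subsequent filing inside the time window mentions the
    risk materializing (here, crudely, via keyword match), else 0.
    """
    for filed_on, text in later_filings:
        # Only filings strictly after the query date and within the horizon count.
        if query.asked_on < filed_on <= query.horizon_end:
            if any(k.lower() in text.lower() for k in keywords):
                return 1
    return 0
```

The key property this illustrates is temporal grounding: a label is derived only from documents published after the query date but before the horizon closes, so no future information leaks into the training signal.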
📝 Abstract
Risk disclosures in SEC filings describe potential adverse events but rarely quantify their likelihood, limiting their usefulness for probabilistic analysis. A central obstacle is the absence of large-scale, risk-level supervision linking disclosed risks to realized outcomes. We introduce a fully automated data generation pipeline that converts qualitative SEC risk disclosures into temporally grounded supervision using only public data. For each filing, the pipeline generates firm-specific, time-bounded risk queries from the Risk Factors section and labels them by automatically resolving outcomes against subsequent disclosures. Using this dataset of risk queries and outcomes grounded in SEC filings, we train a compact large language model to estimate the probability that a disclosed risk will materialize within a specified horizon. Despite its modest size, the resulting model substantially improves over pretrained and heuristic baselines, and outperforms frontier general-purpose models, including GPT-5, on probabilistic accuracy and calibration. More broadly, this work demonstrates that Foresight Learning enables scalable and fully automated training of domain-specific expert models using only raw, chronological, in-domain text -- without proprietary data, external corpora, or manual annotation. The resulting models achieve frontier-level performance while remaining deployable on a single GPU. This result suggests a general pathway for learning calibrated, decision-relevant signals from naturally occurring enterprise documents. To support transparency and reproducibility, we open-source the evaluation dataset used in this study.

Evaluation Data: https://huggingface.co/datasets/LightningRodLabs/sec_risk_questions_test_set
Data Generation Platform: https://lightningrod.ai/
SDK: https://github.com/lightning-rod-labs/lightningrod-python-sdk
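The abstract evaluates models on "probabilistic accuracy and calibration." The standard metrics for scoring predicted event probabilities against binary outcomes are the Brier score and expected calibration error (ECE); the sketch below illustrates those metric definitions only, and is not the paper's evaluation code.

```python
def brier_score(probs: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes.

    Lower is better; 0.0 means every prediction matched its outcome exactly.
    """
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def expected_calibration_error(probs: list[float],
                               outcomes: list[int],
                               n_bins: int = 10) -> float:
    """Bin predictions by confidence, then compare each bin's average
    predicted probability to its empirical event rate; return the
    bin-size-weighted average gap."""
    bins: list[list[tuple[float, int]]] = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the top bin
        bins[idx].append((p, y))
    n = len(probs)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_p = sum(p for p, _ in b) / len(b)   # mean predicted probability
        avg_y = sum(y for _, y in b) / len(b)   # empirical materialization rate
        ece += (len(b) / n) * abs(avg_p - avg_y)
    return ece
```

A model can score well on one metric and poorly on the other: predicting 0.8 for five risks of which four materialize is perfectly calibrated (ECE 0) yet still incurs a nonzero Brier score, which is why the abstract reports both.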