Do We Need Large VLMs for Spotting Soccer Actions?

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional soccer action spotting relies on computationally expensive video-based models. Method: This paper challenges that paradigm by proposing a purely text-driven, zero-shot action localization framework that operates solely on expert commentary, requiring no video input and no model training. We introduce a three-role large language model (LLM) collaboration framework, with judges modeling outcome, excitement, and tactics, and apply it via sliding-window parsing over SoccerNet Echoes commentary to achieve fine-grained localization of events (e.g., goals, cards, substitutions). Contribution/Results: Our lightweight, scalable approach achieves accuracy comparable to state-of-the-art video-based methods while drastically reducing computational overhead. It is the first work to empirically validate LLMs' capability for temporal action localization without visual signals, establishing a new paradigm for sports understanding grounded in linguistic semantics alone.

📝 Abstract
Traditional video-based tasks like soccer action spotting rely heavily on visual inputs, often requiring complex and computationally expensive models to process dense video data. In this work, we propose a shift from this video-centric approach to a text-based task, making it lightweight and scalable by utilizing Large Language Models (LLMs) instead of Vision-Language Models (VLMs). We posit that expert commentary, which provides rich, fine-grained descriptions and contextual cues such as excitement and tactical insights, contains enough information to reliably spot key actions in a match. To demonstrate this, we use the SoccerNet Echoes dataset, which provides timestamped commentary, and employ a system of three LLMs acting as judges specializing in outcome, excitement, and tactics. Each LLM evaluates sliding windows of commentary to identify actions like goals, cards, and substitutions, generating accurate timestamps for these events. Our experiments show that this language-centric approach performs effectively in detecting critical match events, providing a lightweight and training-free alternative to traditional video-based methods for action spotting.
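The pipeline described above (sliding windows over timestamped commentary, three role-specialized judges, agreement-based event emission) can be sketched roughly as follows. The keyword-based judge functions, window size, and two-of-three voting threshold are illustrative assumptions on my part; the actual system prompts an LLM for each role rather than matching keywords.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Comment:
    timestamp: float  # match time in seconds
    text: str

# Keyword stand-ins for the paper's three LLM judges. Each judge looks at
# a window of commentary and returns an action label or None.
def outcome_judge(window):
    text = " ".join(c.text.lower() for c in window)
    if "goal" in text or "scores" in text:
        return "goal"
    if "yellow card" in text or "booked" in text:
        return "card"
    if "comes on for" in text or "substitution" in text:
        return "substitution"
    return None

def excitement_judge(window):
    text = " ".join(c.text.lower() for c in window)
    if "!" in text and "goal" in text:
        return "goal"
    return None

def tactics_judge(window):
    text = " ".join(c.text.lower() for c in window)
    if "fresh legs" in text or "comes on for" in text:
        return "substitution"
    if "finish" in text or "back of the net" in text:
        return "goal"
    return None

def spot_actions(comments, judges, window_size=3, stride=1, min_votes=2):
    """Slide a window over timestamped commentary; emit an event when at
    least `min_votes` judges agree on the same action label."""
    events = []
    for start in range(0, max(len(comments) - window_size + 1, 1), stride):
        window = comments[start:start + window_size]
        votes = Counter(v for judge in judges
                        if (v := judge(window)) is not None)
        if votes:
            label, count = votes.most_common(1)[0]
            if count >= min_votes:
                # Anchor the event to the window's middle timestamp.
                events.append((label, window[len(window) // 2].timestamp))
    # Collapse consecutive detections of the same label from overlapping windows.
    deduped = []
    for event in events:
        if not deduped or deduped[-1][0] != event[0]:
            deduped.append(event)
    return deduped

commentary = [
    Comment(2710.0, "A patient build-up down the right flank."),
    Comment(2715.0, "What a strike! GOAL! He scores from distance!"),
    Comment(2720.0, "A clinical finish, straight into the back of the net."),
    Comment(2890.0, "Fresh legs now, as Silva comes on for Mata."),
    Comment(2895.0, "He'll slot straight into midfield."),
]
judges = [outcome_judge, excitement_judge, tactics_judge]
print(spot_actions(commentary, judges))
# → [('goal', 2715.0), ('substitution', 2890.0)]
```

Requiring agreement between judges is what makes the role split useful: a single excited remark can fool one judge, but an actual goal tends to register in the outcome, excitement, and tactical threads of the commentary simultaneously.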
Problem

Research questions and friction points this paper is trying to address.

Replacing video-based soccer action spotting with text-based methods
Using LLMs instead of VLMs for lightweight, scalable action detection
Leveraging expert commentary to identify key match events accurately
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shift from video to text using LLMs
Three specialized LLMs judge commentary
Training-free lightweight action spotting