TrackCore-F: Deploying Transformer-Based Subatomic Particle Tracking on FPGAs

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Particle track reconstruction in high-energy physics demands ultra-low-latency, high-throughput real-time inference, yet existing FPGA toolchains lack robust support for Transformer models and face severe resource constraints. Method: This paper proposes an automated partitioning and hardware synthesis methodology for TrackFormer tailored to FPGAs, enabling holistic or modular deployment. It integrates model structural pruning, computational graph optimization, and resource-aware mapping strategies. Contribution/Results: We present the first efficient hardware inference deployment of the TrackFormer family on FPGAs, validated via a prototype system. Experimental results demonstrate a 42% reduction in end-to-end inference latency compared to conventional CPU/GPU implementations, alongside a 3.1× improvement in LUT utilization. The design meets stringent real-time requirements for online triggering in high-energy physics experiments.
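The resource-aware mapping described above can be illustrated with a minimal sketch: a greedy packer that assigns consecutive Transformer blocks to FPGA partitions so that no partition exceeds a per-device LUT budget. All names, block granularity, and resource figures here are hypothetical; the paper's actual partitioning algorithm and cost model are not detailed in this summary.

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    luts: int  # estimated LUT cost of the block after pruning (illustrative)

def partition(blocks, lut_budget):
    """Greedily pack consecutive Transformer blocks into FPGA partitions
    so that each partition stays within the given LUT budget."""
    partitions, current, used = [], [], 0
    for b in blocks:
        if b.luts > lut_budget:
            raise ValueError(f"{b.name} alone exceeds the per-device budget")
        if used + b.luts > lut_budget:
            # Close the current partition and start a new one
            partitions.append(current)
            current, used = [], 0
        current.append(b)
        used += b.luts
    if current:
        partitions.append(current)
    return partitions

# Toy encoder-only model: embedding, four encoder blocks, task head
model = [
    Block("embed", 30_000),
    Block("enc0", 90_000), Block("enc1", 90_000),
    Block("enc2", 90_000), Block("enc3", 90_000),
    Block("head", 20_000),
]
parts = partition(model, lut_budget=200_000)
print([[b.name for b in p] for p in parts])
# → [['embed', 'enc0'], ['enc1', 'enc2'], ['enc3', 'head']]
```

A real flow would take per-layer resource estimates from HLS synthesis reports rather than fixed constants, and would also weigh DSP and BRAM usage, but the same budget-constrained packing idea applies.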

📝 Abstract
The Transformer Machine Learning (ML) architecture has gained considerable momentum in recent years. In particular, computational High-Energy Physics tasks such as jet tagging and particle track reconstruction (tracking) have either achieved proper solutions or reached considerable milestones using Transformers. At the same time, the use of specialised hardware accelerators, especially FPGAs, is an effective way to achieve online or pseudo-online latencies. The development and integration of Transformer-based ML on FPGAs is still ongoing, and support from current tools ranges from very limited to non-existent. Additionally, FPGA resources present a significant constraint. Considering model size alone, smaller models can be deployed directly, while larger models must be partitioned in a meaningful and, ideally, automated way. We aim to develop methodologies and tools for monolithic or partitioned Transformer synthesis, specifically targeting inference. Our primary use-case involves two machine learning model designs for tracking, derived from the TrackFormers project. We elaborate on our development approach, present preliminary results, and provide comparisons.
Problem

Research questions and friction points this paper is trying to address.

Deploying Transformer-based particle tracking models on FPGA hardware
Overcoming limited tool support for ML synthesis on FPGAs
Addressing FPGA resource constraints through model partitioning strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based particle tracking deployed on FPGAs
Automated model partitioning for large-scale FPGA deployment
Specialized synthesis tools for monolithic Transformer inference acceleration
Arjan Blankestijn
Computer Architecture for Embedded Systems, University of Twente, Enschede, The Netherlands
Uraz Odyurt
Faculty of Engineering Technology, University of Twente, Enschede, The Netherlands
Amirreza Yousefzadeh
Assistant Professor EEMCS, University of Twente
Edge AI · Neuromorphic Engineering · Digital VLSI