GPU-GLMB: Assessing the Scalability of GPU-Accelerated Multi-Hypothesis Tracking

📅 2025-12-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
The standard Generalized Labeled Multi-Bernoulli (GLMB) filter assumes at most one detection per object per scan, which creates strong inter-detection dependencies, limits parallelism, and imposes high computational overhead in virtual sensor networks. To address this, the paper proposes a novel GLMB variant that supports multiple detections per sensor. By relaxing the conventional detection-wise dependency constraint, the method decouples the update process across detections, significantly improving parallel scalability, and it further leverages GPU architecture for efficient hypothesis management and state estimation. Experiments demonstrate near-linear runtime scaling with increasing track count and maximum number of retained hypotheses, achieving substantially higher throughput than the standard GLMB implementation. The core contribution is the first integration of multi-detection inputs and dependency-free update mechanisms into the GLMB framework, enabling high-concurrency, low-latency distributed random finite set tracking.

📝 Abstract
Much recent research on multi-target tracking has focused on multi-hypothesis approaches leveraging random finite sets. Of particular interest are labeled random finite set methods that maintain temporally coherent labels for each object. While these methods enjoy important theoretical properties as closed-form solutions to the multi-target Bayes filter, the maintenance of multiple hypotheses under the standard measurement model is highly computationally expensive, even when hypothesis pruning approximations are applied. In this work, we focus on the Generalized Labeled Multi-Bernoulli (GLMB) filter as an example of this class of methods. We investigate a variant of the filter that allows multiple detections per object from the same sensor, a critical capability when deploying tracking in the context of distributed networks of machine learning-based virtual sensors. We show that this breaks the inter-detection dependencies in the filter updates of the standard GLMB filter, allowing updates with significantly improved parallel scalability and enabling efficient deployment on GPU hardware. We report the results of a preliminary analysis of a GPU-accelerated implementation of our proposed GLMB tracker, with a focus on run time scalability with respect to the number of objects and the maximum number of retained hypotheses.
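The abstract's key computational point is that allowing multiple detections per object removes the inter-detection dependency of the standard GLMB update, so per-(track, detection) likelihoods can be computed independently and in parallel. A minimal toy sketch of that idea is below; it is not the paper's implementation, and all names, shapes, and the simplified factorized weight update are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of a dependency-free update: each (track, detection)
# likelihood is computed independently, so the whole update is one
# vectorized pass (trivially parallel, e.g. on a GPU) rather than a joint
# assignment over detections. All quantities here are illustrative.

rng = np.random.default_rng(0)
n_tracks, n_dets, dim = 4, 6, 2

track_means = rng.normal(size=(n_tracks, dim))   # predicted track states
detections = rng.normal(size=(n_dets, dim))      # detections from one sensor
sigma = 0.5                                      # measurement noise std
p_detect = 0.9                                   # detection probability

# Pairwise Gaussian likelihoods with no inter-detection dependency:
# shape (n_tracks, n_dets), every entry computed independently.
diff = track_means[:, None, :] - detections[None, :, :]
sq_dist = np.sum(diff ** 2, axis=-1)
lik = np.exp(-0.5 * sq_dist / sigma ** 2) / (2 * np.pi * sigma ** 2)

# Simplified per-track weight update: with conditionally independent
# detections the weight factorizes over detections, avoiding the joint
# hypothesis enumeration required by a one-detection-per-scan model.
weights = np.prod(1 - p_detect + p_detect * lik, axis=1)
weights /= weights.sum()

print(weights.shape)  # prints (4,)
```

Because the inner computation is a pure elementwise/broadcast operation over a (tracks × detections) grid, it maps directly onto GPU-style data parallelism, which is the scalability property the paper analyzes.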
Problem

Research questions and friction points this paper is trying to address.

Enhances GLMB filter for multiple detections per object
Improves parallel scalability for GPU acceleration
Analyzes runtime scalability with object and hypothesis counts
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU-accelerated GLMB filter for multi-target tracking
Breaks inter-detection dependencies to decouple filter updates
Enables efficient, scalable deployment on GPU hardware
Pranav Balakrishnan
Manning College of Information and Computer Sciences, University of Massachusetts Amherst, USA
Sidisha Barik
Manning College of Information and Computer Sciences, University of Massachusetts Amherst, USA
Sean M. O'Rourke
U.S. Army Combat Capabilities Development Command, Army Research Laboratory, Adelphi, MD, USA
Benjamin M. Marlin
Manning College of Information and Computer Sciences, University of Massachusetts Amherst, USA
Machine learning