TransForSeg: A Multitask Stereo ViT for Joint Stereo Segmentation and 3D Force Estimation in Catheterization

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of jointly modeling catheter stereo segmentation and 3D force estimation in interventional procedures, this paper proposes the first multitask stereo Vision Transformer tailored to dual-view X-ray images. The method employs a shared patch-based Vision Transformer (ViT) encoder coupled with a dual-branch decoder—comprising a segmentation head and a fused regression head—to perform end-to-end, pixel-level stereo segmentation and 3D tip-force prediction simultaneously. By directly capturing long-range spatial dependencies, it avoids the CNN limitation of having to expand receptive fields progressively. This work pioneers the application of Vision Transformers to stereo X-ray catheter analysis. Evaluated on a synthetic dataset, the model achieves state-of-the-art performance on both segmentation and 3D force estimation, strengthening the combination of visual and haptic perception during minimally invasive interventions.
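The summary describes feeding the two X-ray views to the encoder as separate patch sequences that share one projection. A minimal NumPy sketch of that input pipeline, with dummy images, an assumed 224×224 resolution, 16×16 patches, and a hypothetical shared weight matrix `W_proj` standing in for the learned patch embedding:

```python
import numpy as np

def patchify(image, patch=16):
    """Split a square grayscale image into flattened non-overlapping patches."""
    h, w = image.shape
    assert h % patch == 0 and w % patch == 0
    return (image
            .reshape(h // patch, patch, w // patch, patch)
            .transpose(0, 2, 1, 3)
            .reshape(-1, patch * patch))  # (num_patches, patch*patch)

rng = np.random.default_rng(0)
view_a = rng.random((224, 224))  # first X-ray viewpoint (dummy data)
view_b = rng.random((224, 224))  # second X-ray viewpoint (dummy data)

# Each view becomes its own token sequence; one shared projection maps
# flattened patches into the encoder's common embedding space.
embed_dim = 128
W_proj = rng.standard_normal((16 * 16, embed_dim)) * 0.02  # shared weights

seq_a = patchify(view_a) @ W_proj  # (196, 128) tokens for view A
seq_b = patchify(view_b) @ W_proj  # (196, 128) tokens for view B
```

Because both sequences pass through the same attention layers, any token from view A can attend to any token from view B in a single step, which is what the summary means by capturing long-range dependencies without growing a receptive field.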

📝 Abstract
Recently, the emergence of multitask deep learning models has enhanced catheterization procedures by providing tactile and visual perception data through an end-to-end architecture. This information is derived from a segmentation and force estimation head, which localizes the catheter in X-ray images and estimates the applied pressure based on its deflection within the image. These stereo vision architectures incorporate a CNN-based encoder-decoder that captures the dependencies between X-ray images from two viewpoints, enabling simultaneous 3D force estimation and stereo segmentation of the catheter. With these tasks in mind, this work approaches the problem from a new perspective. We propose a novel encoder-decoder Vision Transformer model that processes two input X-ray images as separate sequences. Given sequences of X-ray patches from two perspectives, the transformer captures long-range dependencies without the need to gradually expand the receptive field for either image. The embeddings generated by both the encoder and decoder are fed into two shared segmentation heads, while a regression head employs the fused information from the decoder for 3D force estimation. The proposed model is a stereo Vision Transformer capable of simultaneously segmenting the catheter from two angles while estimating the generated forces at its tip in 3D. This model has undergone extensive experiments on synthetic X-ray images with various noise levels and has been compared against state-of-the-art pure segmentation models, vision-based catheter force estimation methods, and a multitask catheter segmentation and force estimation approach. It outperforms existing models, setting a new state-of-the-art in both catheter segmentation and force estimation.
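The abstract states that encoder and decoder embeddings are fed into shared segmentation heads. One plausible reading is that a single set of head weights decodes the token embeddings of either view back to per-pixel catheter logits. A hedged NumPy sketch under that assumption, with dummy embeddings and a hypothetical shared weight matrix `W_head` (the paper's actual head may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
embed_dim, patch, grid = 128, 16, 14   # 14x14 patches of a 224x224 image

# Token embeddings for the two views as they would leave the model
# (random stand-ins for real features).
tokens_a = rng.standard_normal((grid * grid, embed_dim))
tokens_b = rng.standard_normal((grid * grid, embed_dim))

# One weight matrix serves both views ("shared head"): each token is
# mapped to per-pixel logits for its patch, then tokens are stitched
# back into a full-resolution segmentation map.
W_head = rng.standard_normal((embed_dim, patch * patch)) * 0.02

def to_mask(tokens):
    logits = tokens @ W_head                           # (196, 256)
    logits = logits.reshape(grid, grid, patch, patch)  # un-flatten patches
    return logits.transpose(0, 2, 1, 3).reshape(grid * patch, grid * patch)

mask_a = to_mask(tokens_a)  # (224, 224) catheter logits, view A
mask_b = to_mask(tokens_b)  # (224, 224) catheter logits, view B
```

Sharing the head ties the segmentation criterion across viewpoints, so the model cannot learn two inconsistent notions of "catheter" for the two cameras.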
Problem

Research questions and friction points this paper is trying to address.

Simultaneously segmenting the catheter in stereo X-ray images
Estimating 3D forces at the catheter tip during procedures
Capturing long-range dependencies in stereo vision without CNNs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stereo Vision Transformer for catheter segmentation
Multitask encoder-decoder with separate X-ray sequences
Fused decoder embeddings for 3D force estimation
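The last bullet says the decoder's fused information drives the 3D force regression. A minimal sketch of one common fusion pattern, pooling each view's decoder tokens, concatenating the two summaries, and regressing three force components with a small MLP; all weights and the pooling choice here are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(2)
embed_dim = 128

# Decoder token embeddings for each view (random stand-ins).
dec_a = rng.standard_normal((196, embed_dim))
dec_b = rng.standard_normal((196, embed_dim))

# Fuse: mean-pool each view's tokens, then concatenate the summaries.
fused = np.concatenate([dec_a.mean(axis=0), dec_b.mean(axis=0)])  # (256,)

# Tiny regression head: one hidden ReLU layer, three outputs.
W1 = rng.standard_normal((2 * embed_dim, 64)) * 0.02
W2 = rng.standard_normal((64, 3)) * 0.02
hidden = np.maximum(fused @ W1, 0.0)   # ReLU
force_xyz = hidden @ W2                # (3,) estimated Fx, Fy, Fz at the tip
```

Regressing from the fused stereo representation rather than from either view alone is what lets the head recover a force vector in 3D, since a single projection is ambiguous about out-of-plane deflection.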