🤖 AI Summary
To address the deployment challenges of 3D medical image segmentation models in resource-constrained clinical settings, this paper introduces the first end-to-end, GPU-native 8-bit post-training quantization (PTQ) framework—breaking away from conventional fake quantization by enabling true 8-bit quantization for both weights and activations, with native TensorRT engine support. The method requires no labeled calibration data and is compatible with mainstream 3D architectures, including nnU-Net, SwinUNETR, TransUNet, and VISTA3D. Evaluated on BTCV, Whole Brain, and TotalSegmentator V2 benchmarks, it achieves zero Dice score degradation while reducing model size by 4× and accelerating GPU inference by 1.8–2.3×. All code and pre-trained quantized models are publicly released.
📝 Abstract
Quantizing deep neural networks, i.e., reducing the precision (bit-width) of their computations, can markedly decrease memory usage and accelerate processing, making these models more suitable for large-scale medical imaging applications with limited computational resources. However, many existing methods study "fake quantization", which simulates lower-precision operations during inference but does not actually reduce model size or improve real-world inference speed. Moreover, the potential of deploying real 3D low-bit quantization on modern GPUs remains unexplored. In this study, we introduce a real post-training quantization (PTQ) framework that successfully implements true 8-bit quantization on state-of-the-art (SOTA) 3D medical segmentation models, i.e., U-Net, SegResNet, SwinUNETR, nnU-Net, UNesT, TransUNet, ST-UNet, and VISTA3D. Our approach involves two main steps. First, we use TensorRT to perform fake quantization for both weights and activations with an unlabeled calibration dataset. Second, we convert this fake quantization into real quantization via the TensorRT engine on real GPUs, resulting in real-world reductions in model size and inference latency. Extensive experiments demonstrate that our framework effectively performs 8-bit quantization on GPUs without sacrificing model performance. This advancement enables the deployment of efficient deep learning models in medical imaging applications where computational resources are constrained. The code and models have been released, including U-Net and TransUNet pretrained on the BTCV dataset for abdominal (13-label) segmentation, UNesT pretrained on the Whole Brain Dataset for whole brain (133-label) segmentation, and nnU-Net, SegResNet, SwinUNETR, and VISTA3D pretrained on TotalSegmentator V2 for full-body (104-label) segmentation. https://github.com/hrlblab/PTQ.
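The fake-vs-real distinction the abstract draws can be illustrated with a minimal NumPy sketch (not the paper's TensorRT pipeline; `fake_quantize` and `real_quantize` are hypothetical helper names, and per-tensor symmetric scaling is one common calibration choice): fake quantization rounds values to an 8-bit grid but stores them back as float32, so nothing shrinks, whereas real quantization stores int8 values plus a scale, giving the 4× size reduction.

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Quantize then immediately dequantize: output is still float32,
    so model size and inference speed are unchanged (only accuracy
    effects of the 8-bit grid are simulated)."""
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for symmetric INT8
    scale = np.abs(x).max() / qmax                 # per-tensor scale (assumed scheme)
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return (q * scale).astype(np.float32)

def real_quantize(x, num_bits=8):
    """Store actual int8 values plus one float scale: this is what
    yields the ~4x memory reduction reported in the paper."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale                                # dequantize with q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)   # stand-in for a weight tensor

w_fake = fake_quantize(w)
q, scale = real_quantize(w)

print(w_fake.dtype, q.dtype)     # fake stays float32; real is int8
print(q.nbytes / w.nbytes)       # 0.25 -> 4x smaller storage
```

Real deployment additionally needs INT8 kernels (here supplied by the TensorRT engine) so the stored int8 tensors are also computed on in low precision, which is where the reported 1.8–2.3× speedup comes from.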