🤖 AI Summary
This work addresses the scarcity of large-scale multimodal datasets that has hindered semantic-driven CAD modeling research. The authors introduce a multimodal CAD dataset of 242,000 industrial parts, featuring precise alignment among 3D models in STEP/SLDPRT formats, parametric modeling scripts, multi-view synthetic images, and human-verified natural language descriptions. To build this alignment, they develop a lossless codec supporting 13 categories of CAD commands and employ the lightweight multimodal model Qwen2.5-VL-7B to generate part descriptions. Experiments show that multimodal inputs combining text and images significantly outperform text-only inputs, confirming the dataset's effectiveness for CAD generation tasks. This study establishes a high-quality benchmark and a scalable toolchain for advancing semantic-driven 3D design.
📄 Abstract
We introduce SldprtNet, a large-scale dataset of over 242,000 industrial parts designed for semantic-driven CAD modeling, geometric deep learning, and the training and fine-tuning of multimodal models for 3D design. The dataset provides 3D models in both .step and .sldprt formats to support diverse training and testing scenarios. To enable parametric modeling and dataset scalability, we developed a pair of supporting tools, an encoder and a decoder, that cover 13 types of CAD commands and perform lossless transformation between 3D models and a structured text representation. Additionally, each sample is paired with a composite image created by merging seven rendered views of the 3D model from different viewpoints, which reduces input token length and accelerates inference. Combining this image with the encoder's parameterized text output, we employ the lightweight multimodal language model Qwen2.5-VL-7B to generate a natural language description of each part's appearance and functionality. To ensure accuracy, we manually verified and aligned the generated descriptions, rendered images, and 3D models. These descriptions, together with the parameterized modeling scripts, rendered images, and 3D model files, are fully aligned to form SldprtNet. To assess its effectiveness, we fine-tuned baseline models on a subset of the dataset, comparing image-plus-text inputs with text-only inputs; the results confirm the necessity and value of multimodal data for CAD generation. With carefully selected real-world industrial parts, tooling for scalable expansion, diverse modalities, and broad coverage of model complexity and geometric features, SldprtNet is a comprehensive multimodal dataset for semantic-driven CAD modeling and cross-modal learning.
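The seven-view merging step can be sketched as a simple grid layout. The abstract only states that seven rendered views are merged into one composite image; the 4-column arrangement, square 256-px views, and the function name below are illustrative assumptions, not the authors' actual pipeline:

```python
def composite_layout(n_views: int = 7, view_px: int = 256, cols: int = 4):
    """Compute the size of a composite image and the paste position of each
    rendered view when tiling n_views square renders row by row.

    Grid shape and per-view resolution are assumptions for illustration.
    Returns (composite_width, composite_height, paste_positions).
    """
    rows = -(-n_views // cols)  # ceiling division: rows needed for n_views
    positions = [((i % cols) * view_px, (i // cols) * view_px)
                 for i in range(n_views)]
    return cols * view_px, rows * view_px, positions

# Seven views in a 4-column grid -> a 2-row composite with one empty cell.
w, h, positions = composite_layout()
print(w, h)          # 1024 512
print(positions[6])  # (512, 256): the seventh view starts the second row
```

Feeding the model one tiled image instead of seven separate ones is what shrinks the visual token count, since most vision-language models tokenize each input image independently.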