🤖 AI Summary
This work addresses the challenge of applying deep learning to boundary-representation (B-rep) CAD models, whose irregular topology and continuous geometric definitions hinder conventional neural modeling. We introduce the first Transformer-based framework for B-rep understanding. Our method employs a continuous geometric embedding that encodes parametric surfaces as Bézier triangles, combined with a topology-aware serialization scheme, enabling end-to-end, discretization-free joint modeling of geometry and topology. By circumventing the meshing, voxelization, or graph construction required by CNN- and GNN-based approaches, our method achieves superior fidelity to the native B-rep structure. Evaluated on part classification and CAD feature recognition, it attains state-of-the-art performance, significantly outperforming mainstream baselines. This establishes a novel paradigm for deep learning on parameterized CAD models.
📝 Abstract
The recent rise of generative artificial intelligence (AI), powered by Transformer networks, has achieved remarkable success in natural language processing, computer vision, and graphics. However, the application of Transformers in computer-aided design (CAD), particularly for processing boundary representation (B-rep) models, remains largely unexplored. To bridge this gap, this paper introduces the Boundary Representation Transformer (BRT), a novel method adapting Transformers to B-rep learning. B-rep models pose unique challenges due to their irregular topology and continuous geometric definitions, which are fundamentally different from the structured, discrete data Transformers are designed for. To address this, BRT proposes a continuous geometric embedding method that encodes B-rep surfaces (trimmed and untrimmed) into Bézier triangles, preserving their shape and continuity without discretization. Additionally, BRT employs a topology-aware embedding method that organizes these geometric embeddings into a sequence of discrete tokens suitable for Transformers, capturing both the geometric and topological characteristics of B-rep models. This enables the Transformer's attention mechanism to effectively learn shape patterns and the contextual semantics of boundary elements in a B-rep model. Extensive experiments demonstrate that BRT achieves state-of-the-art performance in part classification and feature recognition tasks.
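To make the Bézier-triangle representation at the heart of BRT concrete, the sketch below evaluates a degree-n Bézier triangle patch from its control net using the trivariate Bernstein basis. This is a minimal illustration of the standard Bézier triangle formula, not code from the paper; the function name, the dictionary-based control-net layout, and the degree inference are assumptions made for this example.

```python
from math import factorial

def bezier_triangle(control_points, u, v):
    """Evaluate a degree-n Bézier triangle at barycentric (u, v, w = 1-u-v).

    control_points: dict mapping index triples (i, j, k) with i+j+k == n
    to 3D points (illustrative layout, not the paper's data structure).
    """
    w = 1.0 - u - v
    # Infer the degree n from any index triple in the control net.
    n = sum(next(iter(control_points)))
    point = [0.0, 0.0, 0.0]
    for (i, j, k), p in control_points.items():
        # Trivariate Bernstein basis: n!/(i! j! k!) * u^i * v^j * w^k
        basis = factorial(n) / (factorial(i) * factorial(j) * factorial(k))
        basis *= (u ** i) * (v ** j) * (w ** k)
        for d in range(3):
            point[d] += basis * p[d]
    return tuple(point)

# A degree-1 (planar) patch: corners are the control points themselves.
corners = {(1, 0, 0): (1.0, 0.0, 0.0),
           (0, 1, 0): (0.0, 1.0, 0.0),
           (0, 0, 1): (0.0, 0.0, 0.0)}
```

For a degree-1 patch the surface is the plane through its three corners, so `bezier_triangle(corners, 1.0, 0.0)` recovers the corner associated with `u`; higher-degree control nets bend the interior away from that plane while still interpolating the corners.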