🤖 AI Summary
Current image-driven generative AI methods for stylized 3D modeling neglect tactile attributes, yielding models whose surface textures look plausible but feel wrong. This work introduces an end-to-end framework that synthesizes 3D-printable texture heightfields from a single input image, preserving both the target visual style and physically grounded tactile properties. The method has three parts: (1) a diffusion model is fine-tuned on a large-scale tactile texture dataset to generate physically interpretable heightfields; (2) a surface geometry optimization step transfers the generated heightfield onto the 3D model's surface so it embodies the target texture; and (3) psychophysical experiments quantitatively validate tactile fidelity. Evaluations show that the generated textures outperform conventional image-guided approaches in diversity, printability, and user-perceived realism. According to the authors, this is the first method enabling jointly controllable visual and tactile synthesis of 3D textures.
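The fine-tuned heightfield generator itself is not described in code here, but the first stage can be sketched as an image-conditioned diffusion call whose output is read as a normalized heightfield. The snippet below is an illustrative approximation only: the checkpoint path, the prompt text, and the RGB-to-grayscale collapse are assumptions for the sketch, not part of TactStyle, which fine-tunes the generation model to emit heightfields directly.

```python
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Hypothetical fine-tuned checkpoint; TactStyle's actual weights are not referenced here.
MODEL_ID = "path/to/finetuned-heightfield-model"


def image_to_heightfield(style_image_path: str, resolution: int = 512) -> np.ndarray:
    """Run an img2img diffusion pass on a style image and interpret the
    result as a heightfield normalized to [0, 1] (illustrative proxy only)."""
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16
    ).to("cuda")

    style = Image.open(style_image_path).convert("RGB").resize((resolution, resolution))
    out = pipe(
        prompt="surface texture heightfield",  # assumed conditioning text
        image=style,
        strength=0.8,
        guidance_scale=7.5,
    ).images[0]

    # Collapse the RGB output to a single channel as a crude heightfield stand-in.
    return np.asarray(out.convert("L"), dtype=np.float32) / 255.0
```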
📝 Abstract
Recent work in Generative AI enables the stylization of 3D models based on image prompts. However, these methods do not incorporate tactile information, leading to designs that lack the expected tactile properties. We present TactStyle, a system that allows creators to stylize 3D models with images while incorporating the expected tactile properties. TactStyle accomplishes this using a modified image-generation model fine-tuned to generate heightfields for given surface textures. By optimizing 3D model surfaces to embody a generated texture, TactStyle creates models that match the desired style and replicate the tactile experience. We utilize a large-scale dataset of textures to train our texture generation model. In a psychophysical experiment, we evaluate the tactile qualities of a set of 3D-printed original textures and TactStyle's generated textures. Our results show that TactStyle successfully generates a wide range of tactile features from a single image input, enabling a novel approach to haptic design.
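The abstract describes the second stage only at a high level ("optimizing 3D model surfaces to embody a generated texture"). A much simpler stand-in, sketched below, displaces each vertex of a UV-mapped mesh along its normal by the heightfield value sampled at that vertex's UV coordinate; the function name, the trimesh dependency, and the millimeter amplitude are assumptions for illustration, not the paper's optimization procedure.

```python
import numpy as np
import trimesh


def displace_mesh_with_heightfield(mesh: trimesh.Trimesh,
                                   heightfield: np.ndarray,
                                   amplitude_mm: float = 0.6) -> trimesh.Trimesh:
    """Offset each vertex along its normal by the heightfield value sampled
    at the vertex's UV coordinate (nearest-neighbour lookup)."""
    uv = mesh.visual.uv                      # (V, 2) in [0, 1]; requires a UV-mapped mesh
    h, w = heightfield.shape
    cols = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip(((1.0 - uv[:, 1]) * (h - 1)).astype(int), 0, h - 1)

    heights = heightfield[rows, cols]        # per-vertex height in [0, 1]
    offsets = mesh.vertex_normals * (heights * amplitude_mm)[:, None]

    displaced = mesh.copy()
    displaced.vertices = mesh.vertices + offsets
    return displaced


# Example usage (assumes a UV-mapped OBJ and a heightfield from the previous sketch):
# mesh = trimesh.load("model.obj", process=False)
# displaced = displace_mesh_with_heightfield(mesh, heightfield)
# displaced.export("model_textured.stl")
```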