Surformer v1: Transformer-Based Surface Classification Using Tactile and Vision Features

📅 2025-08-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the insufficient efficiency and accuracy of multimodal perception in robotic surface material recognition, this paper proposes Surformer v1—a tactile-visual fusion model based on a pure-encoder Transformer architecture. Methodologically, it employs modality-specific encoders: one processes structured tactile signals, while the other handles visual embeddings extracted by ResNet-50 and compressed via PCA. A cross-modal attention mechanism is introduced to enable fine-grained feature alignment and fusion. The key contribution lies in being the first to jointly integrate structured tactile representations and dimensionality-reduced visual embeddings into a Transformer framework, thereby preserving physical interpretability while enhancing computational efficiency. Experiments on standard benchmarks achieve 99.4% classification accuracy with only 0.77 ms inference latency per sample—significantly outperforming state-of-the-art multimodal CNNs in accuracy, ultra-low latency, and real-time capability.

Technology Category

Application Category

📝 Abstract
Surface material recognition is a key component in robotic perception and physical interaction, particularly when leveraging both tactile and visual sensory inputs. In this work, we propose Surformer v1, a transformer-based architecture designed for surface classification using structured tactile features and PCA-reduced visual embeddings extracted via ResNet-50. The model integrates modality-specific encoders with cross-modal attention layers, enabling rich interactions between vision and touch. Currently, state-of-the-art deep learning models for vision tasks have achieved remarkable performance. With this in mind, our first set of experiments focused exclusively on tactile-only surface classification. Using feature engineering, we trained and evaluated multiple machine learning models, assessing their accuracy and inference time. We then implemented an encoder-only Transformer model tailored for tactile features. This model not only achieved the highest accuracy but also demonstrated significantly faster inference time compared to other evaluated models, highlighting its potential for real-time applications. To extend this investigation, we introduced a multimodal fusion setup by combining vision and tactile inputs. We trained both Surformer v1 (using structured features) and Multimodal CNN (using raw images) to examine the impact of feature-based versus image-based multimodal learning on classification accuracy and computational efficiency. The results showed that Surformer v1 achieved 99.4% accuracy with an inference time of 0.77 ms, while the Multimodal CNN achieved slightly higher accuracy but required significantly more inference time. These findings suggest Surformer v1 offers a compelling balance between accuracy, efficiency, and computational cost for surface material recognition.
Problem

Research questions and friction points this paper is trying to address.

Classify surface materials using tactile and vision inputs
Integrate cross-modal attention for vision-touch interaction
Balance accuracy and efficiency in real-time applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based architecture for surface classification
Integrates tactile and vision features with cross-modal attention
PCA-reduced visual embeddings extracted via ResNet-50
🔎 Similar Papers
No similar papers found.