Bi-LAT: Bilateral Control-Based Imitation Learning via Natural Language and Action Chunking with Transformers

📅 2025-04-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of fine-grained force control and unintuitive human–robot interaction in dexterous robotic manipulation, this paper proposes a semantics-driven bilateral force-modulation framework. Methodologically, it jointly models natural language instructions (e.g., "gently grasp the cup") and bilateral-teleoperation force/motion signals with a multimodal Transformer architecture, integrating a SigLIP language encoder, action tokenization, and fused perception of joint position, velocity, and torque. Key contributions include: (i) an end-to-end mapping from linguistic intent to force-level control, enabling real-time, interpretable force modulation in both unimanual and bimanual settings; and (ii) empirical validation on a unimanual cup-stacking task and a bimanual sponge-twisting task, in which multi-level force instructions are accurately reproduced. Among the tested language encoders, SigLIP yields the best language–force alignment, demonstrating the efficacy of semantics-guided imitation learning for force control.
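The summary describes the architecture but the paper's implementation is not reproduced here. Below is a minimal, hypothetical PyTorch sketch of an ACT-style (Action Chunking with Transformers) policy in the spirit of Bi-LAT: it fuses a language embedding (e.g., from a SigLIP text encoder) with joint position, velocity, and torque tokens and decodes a chunk of future actions. All module names, dimensions, and the omission of the visual stream are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a Bi-LAT-style multimodal policy (not the paper's code).
# A Transformer fuses a language embedding with robot state (joint position,
# velocity, torque) and predicts a chunk of future actions, ACT-style.
import torch
import torch.nn as nn

class BiLATPolicySketch(nn.Module):
    def __init__(self, n_joints=7, lang_dim=768, d_model=256,
                 chunk_size=20, n_layers=4, n_heads=8):
        super().__init__()
        # Project each modality into a shared token space.
        self.state_proj = nn.Linear(3 * n_joints, d_model)  # pos + vel + torque
        self.lang_proj = nn.Linear(lang_dim, d_model)       # e.g., SigLIP text features
        # Learned queries, one per action in the predicted chunk.
        self.query = nn.Parameter(torch.randn(chunk_size, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Each action sets a position and a torque target per joint.
        self.action_head = nn.Linear(d_model, n_joints * 2)

    def forward(self, lang_emb, joint_pos, joint_vel, joint_torque):
        state = torch.cat([joint_pos, joint_vel, joint_torque], dim=-1)
        tokens = torch.stack([self.lang_proj(lang_emb),
                              self.state_proj(state)], dim=1)       # (B, 2, d)
        queries = self.query.unsqueeze(0).expand(tokens.size(0), -1, -1)
        out = self.encoder(torch.cat([tokens, queries], dim=1))
        chunk = out[:, -queries.size(1):]                           # (B, chunk, d)
        return self.action_head(chunk)                              # (B, chunk, 2*n_joints)

# Example: one batch of dummy observations for a 7-DoF arm.
policy = BiLATPolicySketch()
lang = torch.randn(1, 768)           # stand-in for a SigLIP text embedding
pos = vel = tau = torch.randn(1, 7)
actions = policy(lang, pos, vel, tau)
print(actions.shape)                 # torch.Size([1, 20, 14])
```

In the actual system, the language embedding would come from a frozen language encoder (SigLIP performed best among those the authors tested), and image features would presumably be added as extra tokens alongside the state.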

📝 Abstract
We present Bi-LAT, a novel imitation learning framework that unifies bilateral control with natural language processing to achieve precise force modulation in robotic manipulation. Bi-LAT leverages joint position, velocity, and torque data from leader-follower teleoperation while also integrating visual and linguistic cues to dynamically adjust applied force. By encoding human instructions such as "softly grasp the cup" or "strongly twist the sponge" through a multimodal Transformer-based model, Bi-LAT learns to distinguish nuanced force requirements in real-world tasks. We demonstrate Bi-LAT's performance in (1) a unimanual cup-stacking scenario where the robot accurately modulates grasp force based on language commands, and (2) a bimanual sponge-twisting task that requires coordinated force control. Experimental results show that Bi-LAT effectively reproduces the instructed force levels, particularly when incorporating SigLIP among tested language encoders. Our findings demonstrate the potential of integrating natural language cues into imitation learning, paving the way for more intuitive and adaptive human-robot interaction. For additional material, please visit: https://mertcookimg.github.io/bi-lat/
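Since the abstract singles out SigLIP as the best-performing language encoder, here is a minimal sketch of embedding such force-level instructions with the Hugging Face transformers SigLIP implementation. The checkpoint name and pooling behavior are assumptions for illustration; the paper does not state which SigLIP variant Bi-LAT uses.

```python
# Assumed sketch: embedding force-level instructions with SigLIP via
# Hugging Face transformers. The checkpoint below is a common public one;
# it is not necessarily the variant used in Bi-LAT.
import torch
from transformers import AutoProcessor, SiglipModel

model = SiglipModel.from_pretrained("google/siglip-base-patch16-224")
processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224")

instructions = ["softly grasp the cup", "strongly twist the sponge"]
# SigLIP was trained with max-length padding, so pad accordingly.
inputs = processor(text=instructions, padding="max_length", return_tensors="pt")
with torch.no_grad():
    text_emb = model.get_text_features(**inputs)  # (2, hidden_dim)
print(text_emb.shape)
```

Embeddings like these would then condition the policy, letting it separate "softly" from "strongly" variants of otherwise identical motions.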
Problem

Research questions and friction points this paper is trying to address.

Achieving precise force modulation in robotic manipulation
Integrating natural language cues for nuanced force control
Enabling intuitive human-robot interaction via multimodal learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bilateral control with NLP integration
Multimodal Transformer-based force modulation
Language-guided precise robotic manipulation
Takumi Kobayashi
School of Engineering Science, The University of Osaka
Masato Kobayashi
D3 Center, The University of Osaka; Graduate School of Information Science and Technology, The University of Osaka
Thanpimon Buamanee
Graduate School of Information Science and Technology, The University of Osaka
Yuki Uranishi
The University of Osaka
Computer Vision · XR · Human-Computer Interaction