SpeechLess: Micro-utterance with Personalized Spatial Memory-aware Assistant in Everyday Augmented Reality

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the social awkwardness and expressive burden associated with frequent use of full verbal utterances when interacting with wearable AR assistants in public settings. The authors propose a novel paradigm that integrates personalized spatial memory with fine-grained control over speech-based intent expression. By dynamically binding users’ historical interactions to multimodal contextual cues—including spatial location, time, ongoing activity, and referents—the system constructs a dynamic spatial memory model capable of accurately inferring user intent from highly abbreviated or even zero-speech inputs. This approach enables a continuum of interaction modalities, ranging from full sentences to minimal or no speech, significantly reducing expressive effort and improving information retrieval efficiency across diverse real-world scenarios while maintaining high intent inference accuracy and social acceptability.

📝 Abstract
Speaking aloud to a wearable AR assistant in public can be socially awkward, and re-articulating the same requests every day creates unnecessary effort. We present SpeechLess, a wearable AR assistant that introduces a speech-based intent granularity control paradigm grounded in personalized spatial memory. SpeechLess helps users "speak less" while still obtaining the information they need, and supports gradual explicitation of intent when more complex expression is required. SpeechLess binds prior interactions to multimodal personal context (space, time, activity, and referents) to form spatial memories, and leverages them to extrapolate missing intent dimensions from under-specified user queries. This enables users to dynamically adjust how explicitly they express their informational needs, from full-utterance to micro/zero-utterance interaction. We motivate our design through a week-long formative study using a commercial smart glasses platform, revealing discomfort with public voice use, frustration with repetitive speech, and hardware constraints. Building on these insights, we design SpeechLess and evaluate it through controlled lab and in-the-wild studies. Our results indicate that regulated speech-based interaction can improve everyday information access, reduce articulation effort, and support socially acceptable use without substantially degrading perceived usability or intent resolution accuracy across diverse everyday environments.
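To make the core mechanism concrete, the idea of binding past interactions to context and filling in missing intent dimensions can be sketched as a context-keyed memory lookup. This is a minimal illustrative sketch, not the authors' implementation: the `Context` fields, the slot-matching score, and the intent dictionary are all assumptions for illustration; the actual system presumably uses richer multimodal signals and a learned inference model.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Context:
    """Hypothetical multimodal context snapshot: where, when, doing what."""
    location: str
    time_of_day: str
    activity: str

@dataclass
class SpatialMemory:
    """Binds past contexts to the full intents the user expressed there."""
    episodes: dict = field(default_factory=dict)

    def record(self, ctx: Context, intent: dict) -> None:
        self.episodes[ctx] = intent

    def resolve(self, ctx: Context, partial: dict) -> dict:
        # Pick the remembered episode whose context overlaps most with the
        # current one, then overlay whatever the user did say explicitly.
        best, best_score = None, -1
        for past_ctx, intent in self.episodes.items():
            score = sum(getattr(past_ctx, f) == getattr(ctx, f)
                        for f in ("location", "time_of_day", "activity"))
            if score > best_score:
                best, best_score = intent, score
        merged = dict(best or {})
        merged.update({k: v for k, v in partial.items() if v is not None})
        return merged

# Example: a full utterance yesterday lets a micro-utterance (or silence)
# resolve to the same informational need today.
mem = SpatialMemory()
morning_commute = Context("bus_stop", "morning", "commuting")
mem.record(morning_commute, {"query": "next_departure", "referent": "bus_12"})

# Zero-speech query in the same context: every slot is inferred from memory.
print(mem.resolve(morning_commute, {}))
```

The explicit overlay step is what supports "gradual explicitation": any slot the user does articulate overrides the remembered value, so the same mechanism covers the whole continuum from full sentences to silence.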
Problem

Research questions and friction points this paper is trying to address.

augmented reality
speech interaction
social awkwardness
repetitive utterance
wearable assistant
Innovation

Methods, ideas, or system contributions that make the work stand out.

spatial memory
micro-utterance
intent granularity
context-aware AR
speechless interaction