🤖 AI Summary
This work proposes a novel web browsing paradigm that integrates mobile devices with augmented reality (AR) to overcome the limitations imposed by constrained screen real estate. Leveraging the structural properties of web pages, the approach enables users to dynamically project selected content elements into their surrounding physical space, with control over what to project, when to project it, and how to arrange it spatially. This facilitates task-oriented, personalized information organization and interaction. A fully functional prototype is implemented using standard web technologies, supporting an end-to-end workflow from content selection on mobile devices to spatial manipulation in AR. A preliminary user study indicates that the system is easy to learn and use, while also highlighting key design challenges, such as how to activate the projection mode.
📝 Abstract
Browsing the Web on mobile devices is often cumbersome due to their limited screen space. We investigate a phone+AR Web browsing approach, AiRWeb, that leverages the structural properties of Web pages to allow users to seamlessly select and offload arbitrary Web content into the space surrounding them. Focusing on flexibility, AiRWeb lets users decide what to offload, when to do so, and how offloaded content is arranged, enabling personalized organization tailored to the task at hand. We developed a fully functional prototype, built with standard Web technologies, that covers the complete interaction workflow, from selecting elements to offload on the phone to manipulating them in the air. Results from a preliminary study conducted with this prototype suggest that AiRWeb is learnable and usable, while also revealing open design challenges, in particular around how the offload mode is activated.