🤖 AI Summary
Emerging ultra-low-latency applications, such as AR/VR and cloud gaming, face severe mobility-management challenges in multi-access networks (e.g., V2X, LEO satellite, and 6G) due to high terminal dynamics. Method: This paper proposes an end-to-end identifier/locator separation architecture built on a novel tree-embedded coverage structure, enabling autonomous cross-network mobility without centralized anchors or specialized hardware. The design integrates distributed location indexing with multi-access adaptive routing. Contribution/Results: Experimental evaluation shows the design incurs only a 7.42% end-to-end latency overhead over the shortest path, far below LISP's 359% overhead, while significantly reducing location-update cost and handover interruption time. The architecture offers strong multi-access compatibility, low signaling overhead, and scalability at sub-10 ms latencies. Overall, this work establishes a lightweight, scalable network mapping paradigm tailored to ultra-low-latency, highly mobile environments.
📝 Abstract
Low-latency applications like AR/VR and online gaming need fast, stable connections. New network technologies such as V2X, LEO satellites, and 6G bring unique mobility-management challenges. Traditional solutions based on centralized or distributed anchors often fall short under rapid mobility due to inefficient routing, limited versatility, and insufficient multi-access support. In this paper, we design a new end-to-end system that tracks multi-connected mobile devices at scale and optimizes performance for latency-sensitive, highly dynamic applications. Built on the locator/ID separation principle, our system extends to multi-access networks without requiring specialized routers or caching. Using a novel tree-embedding-based overlay, we enable fast session setup while letting endpoints handle mobility directly between themselves. Evaluation with real network data shows our solution limits connection-latency inflation to 7.42% over the shortest path, compared with LISP's 359% caused by cache misses. It also significantly reduces location-update overhead and disruption time during mobility.
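The paper's tree-embedded coverage structure is not detailed in this abstract, but the general idea behind tree embeddings in overlays can be illustrated: embed the overlay graph into a spanning tree so that any endpoint can estimate its distance to another node from compact tree labels (depth and parent pointers), with no centralized anchor involved. The minimal sketch below is an assumption-laden illustration, not the paper's actual algorithm; the topology, BFS-tree construction, and hop-count metric are all hypothetical choices for demonstration.

```python
# Illustrative sketch only: shows the generic tree-embedding idea of
# approximating overlay distances via a spanning tree, NOT the paper's
# specific tree-embedded coverage structure.
from collections import deque

def bfs_tree(adj, root):
    """Build a BFS spanning tree; return parent and depth maps."""
    parent, depth = {root: None}, {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                depth[v] = depth[u] + 1
                q.append(v)
    return parent, depth

def tree_distance(parent, depth, u, v):
    """Hops along the tree path: depth(u) + depth(v) - 2 * depth(LCA)."""
    du, dv = depth[u], depth[v]
    # Walk the deeper node up until both are at the same depth,
    # then climb in lockstep to the lowest common ancestor.
    while depth[u] > depth[v]:
        u = parent[u]
    while depth[v] > depth[u]:
        v = parent[v]
    while u != v:
        u, v = parent[u], parent[v]
    return du + dv - 2 * depth[u]

# Hypothetical small overlay topology (node -> neighbor list).
adj = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
parent, depth = bfs_tree(adj, "A")
print(tree_distance(parent, depth, "E", "C"))  # → 4 (tree path E-D-B-A-C)
```

Note the inherent trade-off such embeddings accept: the true shortest path E-D-C is 2 hops, but the tree route costs 4. The abstract's 7.42% latency-inflation result suggests the paper's structure keeps this stretch small in practice.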