SkyMemory: A LEO Edge Cache for Transformer Inference Optimization and Scale Out

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address narrow LLM inference cache coverage, high cross-domain access latency, and low hit rates in LEO satellite constellations, this paper proposes the first key-value caching (KVC) protocol designed for global coverage across a LEO constellation. It leverages inter-satellite free-space optical (FSO) links to build a distributed edge caching network reachable within a single hop from any point on Earth. Methodologically, the approach integrates FSO inter-satellite link (ISL) communication, Transformer key-value (KV) cache compression, and lightweight deployment on a Linux-based embedded cluster (Intel NUC + Jetson Nano). Its key innovation is cross-satellite collaborative cache management, which overcomes the geographical and latency constraints inherent in terrestrial edge caching. Simulation results show a 37% improvement in cache hit rate and a 42% reduction in end-to-end inference latency, and a five-node prototype delivers real-time LLM caching services at a virtual constellation scale of 19×5 satellites.
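The single-hop idea above can be made concrete with a small sketch: each cached KV blob is hashed to an owning satellite in a 19×5 grid, and a query checks how many ISL hops separate it from the owner. All names here (`SatCache`, `owner_of`, `isl_hops`) are illustrative assumptions, not the paper's actual protocol:

```python
# Hypothetical sketch of single-hop KV-cache placement on a 19x5 grid of
# satellites linked by ISLs; not the paper's actual protocol.
import hashlib

PLANES, SATS_PER_PLANE = 19, 5  # constellation scale from the paper's prototype

def owner_of(key: str) -> tuple[int, int]:
    """Hash a KV-cache key (e.g. a prompt-prefix id) to an owning satellite."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return (h % PLANES, (h // PLANES) % SATS_PER_PLANE)

def isl_hops(a: tuple[int, int], b: tuple[int, int]) -> int:
    """Hop count between satellites on a torus-shaped grid of ISLs."""
    dp = min(abs(a[0] - b[0]), PLANES - abs(a[0] - b[0]))
    ds = min(abs(a[1] - b[1]), SATS_PER_PLANE - abs(a[1] - b[1]))
    return dp + ds

class SatCache:
    """Distributed KV cache keyed by owning satellite."""
    def __init__(self) -> None:
        self.store: dict[tuple[int, int], dict[str, bytes]] = {}

    def put(self, key: str, kv_blob: bytes) -> None:
        self.store.setdefault(owner_of(key), {})[key] = kv_blob

    def get(self, key: str, querying_sat: tuple[int, int]):
        owner = owner_of(key)
        blob = self.store.get(owner, {}).get(key)
        # A hit reachable over a short ISL path avoids any ground round-trip.
        return blob, isl_hops(querying_sat, owner)
```

The torus hop count stands in for whatever routing metric the real protocol uses; the point is only that ownership is deterministic, so any satellite can locate a cached entry without a directory lookup.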

📝 Abstract
We expand the scope of cache memory to include LEO constellations: highly distributed systems of thousands of satellites connected by free-space optical inter-satellite links (ISLs), always only one hop from any point on Earth. We show how to increase the number of cache hits and improve inference speed for the important use case of LLMs. These benefits apply not only to LLMs, whether hosted terrestrially or on satellites, but generalize to any cache distributed over multiple locations that must be accessed in a timely manner. We show the benefit of our key-value cache (KVC) protocol in simulations and present a proof-of-concept implementation on a testbed comprising five Intel NUC Linux mini PCs hosting a 19x5 constellation, with an NVIDIA Jetson Nano 8GB GPU hosting the LLM.
Problem

Research questions and friction points this paper is trying to address.

Optimizing transformer inference in LEO satellite caches
Increasing cache hits for distributed LLM systems
Enhancing timely access for multi-location key-value caches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends cache memory to LEO satellite constellations
Introduces a key-value cache (KVC) protocol for faster inference
Simulates and tests on a 5-node mini PC testbed
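The summary also mentions Transformer KV cache compression as part of the method. The paper does not specify the scheme, so the following is only an illustrative per-tensor int8 quantization sketch, one common way to shrink KV blobs before shipping them over bandwidth-limited links:

```python
# Illustrative int8 quantization for KV-cache compression; the paper's actual
# compression scheme is not specified, so this is an assumed stand-in.
def quantize(values: list[float]) -> tuple[list[int], float]:
    """Map floats to int8 range [-127, 127] with a per-tensor scale."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # avoid zero scale
    return [round(v / scale) for v in values], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate floats from quantized values and their scale."""
    return [x * scale for x in q]
```

Each entry is stored alongside its scale, roughly quartering the size of fp32 KV tensors at the cost of bounded rounding error.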