LUMION: Fast Fault Recovery for ML Jobs Using Programmable Optical Fabrics

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address resource waste caused by rack-level migration upon accelerator failures in ML data centers, this paper proposes a dynamic fault-tolerant architecture leveraging programmable optical interconnects. The approach introduces the first intra-rack, fine-grained optical interconnect reconfiguration mechanism tailored for ML workloads, enabling GPU hot-swapping and seamless task recovery within seconds. The architecture integrates a silicon photonics switch array, a low-overhead optical-path reconfiguration protocol, GPU failure detection, and a coordinated restart framework, culminating in an end-to-end hardware prototype. Experimental evaluation on Llama 3.2 fine-tuning shows a fault response and recovery time of roughly one second; after replacement, GPU-to-GPU bandwidth exceeds that of conventional electrical interconnects, yielding nearly 2× higher fine-tuning throughput and significantly reducing fault-tolerance redundancy overhead.

📝 Abstract
When accelerators fail in modern ML datacenters, operators migrate the affected ML training or inference jobs to entirely new racks. This approach, while preserving network performance, is highly inefficient, requiring datacenters to reserve full racks of idle accelerators for fault tolerance. In this paper, we address this resource inefficiency by introducing LUMION, a novel reconfigurable optical fabric for connecting accelerators within a datacenter rack. Instead of migrating entire ML jobs, LUMION dynamically integrates spare accelerators into ongoing workloads as failures occur, thereby maintaining consistent performance without costly migrations. We show the benefits of LUMION by building an end-to-end hardware prototype. Our experiments fine-tune Llama 3.2 and show that LUMION swaps a failed GPU with a healthy one and restarts the ML job within ~1 second of the failure. LUMION achieves higher inter-GPU bandwidth compared to traditional electrical racks after replacing failed accelerators with spare ones, leading to nearly 2× improvement in fine-tuning throughput.
Problem

Research questions and friction points this paper is trying to address.

Inefficient resource usage in ML datacenters during accelerator failures
Costly job migrations due to lack of dynamic spare integration
Performance degradation after GPU failures in traditional electrical racks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reconfigurable optical fabric for accelerators
Dynamic integration of spare accelerators
Fast GPU replacement within ~1 second of failure
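The recovery flow implied by the points above (detect a failed GPU, reprogram the optical switch to splice in a spare, restart the job) can be sketched as below. This is a minimal illustrative model, not the paper's implementation: the `OpticalSwitch` class, `reroute` method, and GPU/spare names are all hypothetical, and real hardware reconfiguration of the silicon photonics switch array is reduced to a mapping update.

```python
# Hypothetical sketch of a fail-and-swap recovery loop, assuming a
# port-to-GPU mapping abstraction over the optical fabric. All names
# here are illustrative, not from the paper.
from dataclasses import dataclass, field


@dataclass
class OpticalSwitch:
    # Maps each logical fabric port to the GPU currently attached to it.
    port_to_gpu: dict[int, str] = field(default_factory=dict)

    def reroute(self, port: int, new_gpu: str) -> None:
        # In hardware this would reprogram the silicon photonics switch
        # array; in this sketch we just update the mapping.
        self.port_to_gpu[port] = new_gpu


def recover(switch: OpticalSwitch, failed_gpu: str, spares: list[str]) -> str:
    """Swap a failed GPU for a spare and return the replacement's name."""
    if not spares:
        raise RuntimeError("no spare accelerators available")
    spare = spares.pop(0)
    # Find the fabric port the failed GPU occupied and splice in the spare,
    # so the rest of the job's topology is unchanged.
    port = next(p for p, g in switch.port_to_gpu.items() if g == failed_gpu)
    switch.reroute(port, spare)
    return spare


# Usage: "gpu2" fails on a 4-GPU rack that holds one spare.
switch = OpticalSwitch({0: "gpu0", 1: "gpu1", 2: "gpu2", 3: "gpu3"})
replacement = recover(switch, "gpu2", ["spare0"])
print(replacement)            # spare0
print(switch.port_to_gpu[2])  # spare0
```

In the paper's full system, this swap would be followed by a coordinated restart of the ML job from its latest state, which is what keeps end-to-end recovery near one second.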
🔎 Similar Papers
No similar papers found.