Hogwild! Inference: Parallel LLM Generation via Concurrent Attention

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the low parallel efficiency and limited task applicability of existing parallel frameworks for long-sequence inference in large language models (LLMs). We propose a fine-tuning-free parallel inference paradigm that enables multi-instance sharing of key-value (KV) caches. Methodologically, we design a concurrently updatable attention cache based on Rotary Position Embedding (RoPE), supporting real-time token visibility and dynamic, self-coordinated collaboration across instances; memory access follows a lock-free Hogwild!-style scheme to eliminate synchronization overhead. Our core contribution is the first realization of cross-instance KV cache sharing with autonomous, strategy-driven coordination—bypassing rigid, predefined collaboration mechanisms (e.g., voting or divide-and-conquer). Experiments demonstrate substantial improvements in hardware utilization and inference throughput, while maintaining full compatibility with mainstream inference-optimized LLMs—requiring no architectural modifications or additional training.
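The lock-free Hogwild!-style access pattern described above can be illustrated with a toy sketch: several workers append to one shared token log without any locks, while each worker can read everything the others have produced so far. This is only an illustration of the coordination idea (under CPython, `list.append` is atomic thanks to the GIL), not the paper's inference engine; all names here are our own.

```python
import threading

def worker(worker_id, shared_cache, steps):
    """Toy stand-in for one LLM instance: append tokens to a shared,
    concurrently updated cache, Hogwild!-style (no locks)."""
    for step in range(steps):
        # "Instant" visibility: the worker sees every token generated
        # so far by all workers before emitting its next one.
        visible = len(shared_cache)
        shared_cache.append((worker_id, step, visible))

# Shared cache stand-in: a single token log all workers write into.
cache = []
threads = [threading.Thread(target=worker, args=(w, cache, 100))
           for w in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All 4 workers' tokens end up interleaved in one shared cache.
assert len(cache) == 400
```

In the real system the shared structure is an attention KV cache rather than a token list, but the access discipline is the same: concurrent, unsynchronized updates that each instance reads as-is.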

📝 Abstract
Large Language Models (LLMs) have demonstrated the ability to tackle increasingly complex tasks through advanced reasoning, long-form content generation, and tool use. Solving these tasks often involves long inference-time computations. In human problem solving, a common strategy to expedite work is collaboration: by dividing the problem into sub-tasks, exploring different strategies concurrently, etc. Recent research has shown that LLMs can also operate in parallel by implementing explicit cooperation frameworks, such as voting mechanisms or the explicit creation of independent sub-tasks that can be executed in parallel. However, each of these frameworks may not be suitable for all types of tasks, which can hinder their applicability. In this work, we propose a different design approach: we run LLM "workers" in parallel, allowing them to synchronize via a concurrently-updated attention cache and prompt these workers to decide how best to collaborate. Our approach allows the instances to come up with their own collaboration strategy for the problem at hand, all the while "seeing" each other's partial progress in the concurrent cache. We implement this approach via Hogwild! Inference: a parallel LLM inference engine where multiple instances of the same LLM run in parallel with the same attention cache, with "instant" access to each other's generated tokens. Hogwild! Inference takes advantage of Rotary Position Embeddings (RoPE) to avoid recomputation while improving parallel hardware utilization. We find that modern reasoning-capable LLMs can perform inference with shared Key-Value cache out of the box, without additional fine-tuning.
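The RoPE property the abstract relies on is that rotations compose: applying RoPE at offset `a` and then at offset `b` equals applying it once at position `a + b`. This is what lets a worker re-position another worker's cached keys by a cheap extra rotation instead of recomputing the key projections. A minimal pure-Python sketch of this property (the function name and layout are our own, not the paper's code):

```python
import math

def rope(vec, pos, theta=10000.0):
    """Apply Rotary Position Embedding to a vector of even length.
    Each consecutive pair (x0, x1) is rotated by angle pos * freq_i."""
    out = []
    d = len(vec)
    for i in range(0, d, 2):
        freq = theta ** (-i / d)
        angle = pos * freq
        c, s = math.cos(angle), math.sin(angle)
        x0, x1 = vec[i], vec[i + 1]
        out.extend([x0 * c - x1 * s, x0 * s + x1 * c])
    return out

# Rotations compose: a key cached at position 3 can be shifted to
# position 7 by rotating it 4 more steps, with no recomputation of
# the underlying projection.
key = [1.0, 0.0, 0.5, -0.5]
cached = rope(key, 3)
shifted = rope(cached, 4)
direct = rope(key, 7)
assert all(abs(u - v) < 1e-9 for u, v in zip(shifted, direct))
```

In the shared-cache setting this means each instance can splice the others' KV entries into its own position layout at the cost of one rotation per cached key.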
Problem

Research questions and friction points this paper is trying to address.

Parallel LLM generation via concurrent attention synchronization
Improving inference speed through collaborative worker strategies
Shared Key-Value cache utilization without fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel LLM workers synchronize via shared attention cache
Hogwild! Inference uses concurrent cache for instant token access
Rotary Position Embeddings (RoPE) avoid recomputation and improve parallel hardware utilization
👥 Authors
Gleb Rodionov (Yandex)
Roman Garipov (HSE University, Yandex)
Alina Shutova (HSE University, Yandex)
George Yakushev (HSE University, Yandex)
Vage Egiazarian (ISTA)
Anton Sinitsin (Yandex)
Denis Kuznedelev (Yandex)
Dan Alistarh (IST Austria)