RoMe: Row Granularity Access Memory System for Large Language Models

📅 2025-12-01
🤖 AI Summary
To address the mismatch between large language models' (LLMs) sequential access to large data blocks (KB–MB scale) and conventional high-bandwidth memory's (HBM) 32 B cache-line access granularity, this paper proposes the first row-granularity HBM architecture tailored for LLM workloads. It eliminates column addressing, bank groups, and pseudo channels, raising the memory access unit from cache lines to full DRAM rows and thereby substantially simplifying memory controller scheduling logic. Through pin multiplexing and multi-channel aggregation, the design achieves a 12.5% bandwidth improvement with near-zero hardware overhead, while significantly reducing timing complexity and control overhead. The core contribution is a co-optimization paradigm that links LLM memory-access characteristics to the physical memory hierarchy, enabling systematic, architecture-aware memory system design for AI accelerators.

📝 Abstract
Modern HBM-based memory systems have evolved over generations while retaining cache line granularity accesses. Preserving this fine granularity necessitated the introduction of bank groups and pseudo channels. These structures expand timing parameters and control overhead, significantly increasing memory controller scheduling complexity. Large language models (LLMs) now dominate deep learning workloads, streaming contiguous data blocks ranging from several kilobytes to megabytes per operation. In a conventional HBM-based memory system, these transfers are fragmented into hundreds of 32B cache line transactions. This forces the memory controller to employ unnecessarily intricate scheduling, leading to growing inefficiency. To address this problem, we propose RoMe. RoMe accesses DRAM at row granularity and removes columns, bank groups, and pseudo channels from the memory interface. This design simplifies memory scheduling, thereby requiring fewer pins per channel. The freed pins are aggregated to form additional channels, increasing overall bandwidth by 12.5% with minimal extra pins. RoMe demonstrates how memory scheduling logic can be significantly simplified for representative LLM workloads, and presents an alternative approach for next-generation HBM-based memory systems achieving increased bandwidth with minimal hardware overhead.
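The fragmentation the abstract describes can be made concrete with a little arithmetic. A minimal sketch follows; the 1 KB row size and the block sizes are illustrative assumptions, not figures from the paper:

```python
# Transaction-count sketch for streaming one contiguous block through HBM.
# CACHE_LINE matches the 32 B granularity named in the abstract; the 1 KB
# DRAM row size is an assumption for illustration.
CACHE_LINE = 32          # bytes per cache-line transaction
ROW = 1024               # assumed DRAM row size in bytes

def transactions(block_bytes: int, granularity: int) -> int:
    """Number of memory transactions needed to stream one contiguous block."""
    return -(-block_bytes // granularity)  # ceiling division

for block in (4 * 1024, 1024 * 1024):     # a 4 KB and a 1 MB streaming block
    fine = transactions(block, CACHE_LINE)
    coarse = transactions(block, ROW)
    print(f"{block:>8} B block: {fine:>6} cache-line txns "
          f"vs {coarse:>5} row txns ({fine // coarse}x fewer requests)")
```

Under these assumptions a 1 MB transfer fragments into 32,768 cache-line transactions but only 1,024 row-granularity ones, which is the scheduling pressure RoMe removes.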
Problem

Research questions and friction points this paper is trying to address.

Inefficiency of HBM for LLM workloads: large sequential transfers are fragmented into fine-grained 32 B cache-line transactions
Scheduling complexity imposed on the memory controller by bank groups and pseudo channels
Need for higher bandwidth on large data transfers without significant hardware overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Row-granularity access replaces cache-line transfers
Removes columns, bank groups, and pseudo channels to simplify the memory interface
Aggregates freed pins into additional channels
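The reported 12.5% bandwidth gain is consistent with reclaiming enough pins to form one extra channel per eight. The pin counts below are illustrative assumptions used only to make that arithmetic concrete, not values from the paper:

```python
# Back-of-the-envelope pin aggregation. All pin counts are assumed for
# illustration; only the 12.5% result comes from the paper.
BASELINE_CHANNELS = 8
PINS_PER_CHANNEL = 72        # assumed baseline pins per channel
FREED_PER_CHANNEL = 8        # assumed pins freed by dropping column/BG/PC signals

total_pins = BASELINE_CHANNELS * PINS_PER_CHANNEL      # 576 pins overall
slim_channel_pins = PINS_PER_CHANNEL - FREED_PER_CHANNEL
new_channels = total_pins // slim_channel_pins         # 576 // 64 = 9 channels
gain = new_channels / BASELINE_CHANNELS - 1            # one extra channel per eight
print(f"{new_channels} channels from the same pin budget: +{gain:.1%} bandwidth")
```

With these numbers the same pin budget yields nine slimmer channels instead of eight, a 12.5% bandwidth increase; the abstract notes that in practice a minimal number of extra pins is also needed.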
Hwayong Nam
Seoul National University
Computer Architecture · DRAM · Memory system
Seungmin Baek
Seoul National University
Jumin Kim
Seoul National University
Michael Jaemin Kim
Meta
Jung Ho Ahn
Seoul National University
Computer Architecture