MemOS: A Memory OS for AI System

📅 2025-07-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) lack a unified memory management framework, hindering long-context reasoning, continual personalization, and knowledge consistency. Existing approaches—relying on static parameters, transient context windows, or stateless retrieval-augmented generation (RAG)—fail to support heterogeneous knowledge co-evolution across temporal scales and sources. Method: We propose MemOS, a memory operating system for LLMs, introducing the MemCube as a foundational memory abstraction that unifies explicit, activation-level, and parameter-level memory representations, enabling composition, migration, and fusion. MemOS employs hierarchical modeling integrating retrieval, activation control, and parameter updates to realize a schedulable, evolvable, lifecycle-aware memory management architecture. Results: Experiments demonstrate that MemOS significantly reduces training and inference overhead, improves knowledge update efficiency and context management capability, and provides a scalable, controllable memory infrastructure for continual learning and personalized modeling.

📝 Abstract
Large Language Models (LLMs) have become an essential infrastructure for Artificial General Intelligence (AGI), yet their lack of well-defined memory management systems hinders the development of long-context reasoning, continual personalization, and knowledge consistency. Existing models mainly rely on static parameters and short-lived contextual states, limiting their ability to track user preferences or update knowledge over extended periods. While Retrieval-Augmented Generation (RAG) introduces external knowledge in plain text, it remains a stateless workaround without lifecycle control or integration with persistent representations. Recent work has modeled the training and inference cost of LLMs from a memory hierarchy perspective, showing that introducing an explicit memory layer between parameter memory and external retrieval can substantially reduce these costs by externalizing specific knowledge. Beyond computational efficiency, LLMs face broader challenges arising from how information is distributed over time and context, requiring systems capable of managing heterogeneous knowledge spanning different temporal scales and sources. To address this challenge, we propose MemOS, a memory operating system that treats memory as a manageable system resource. It unifies the representation, scheduling, and evolution of plaintext, activation-based, and parameter-level memories, enabling cost-efficient storage and retrieval. As the basic unit, a MemCube encapsulates both memory content and metadata such as provenance and versioning. MemCubes can be composed, migrated, and fused over time, enabling flexible transitions between memory types and bridging retrieval with parameter-based learning. MemOS establishes a memory-centric system framework that brings controllability, plasticity, and evolvability to LLMs, laying the foundation for continual learning and personalized modeling.
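The MemCube abstraction described in the abstract can be pictured as a small data structure: memory content paired with lifecycle metadata (provenance, versioning) and a type tag spanning the three memory forms MemOS unifies. The sketch below is a minimal illustration under that reading; the class, field, and method names (`MemCube`, `mem_type`, `migrate`) are assumptions for exposition, not the paper's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class MemType(Enum):
    """The three memory representations MemOS unifies."""
    PLAINTEXT = "plaintext"    # explicit, retrievable text memory
    ACTIVATION = "activation"  # activation-level state (e.g. cached KV)
    PARAMETER = "parameter"    # parameter-level memory (e.g. weight deltas)


@dataclass
class MemCube:
    """Illustrative sketch: memory content plus lifecycle metadata."""
    content: bytes             # serialized memory payload
    mem_type: MemType
    provenance: str            # where this memory originated
    version: int = 1
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def migrate(self, target: MemType) -> "MemCube":
        """Transition to another memory type (e.g. plaintext -> parameter),
        bumping the version so the lifecycle stays auditable."""
        return MemCube(self.content, target, self.provenance,
                       version=self.version + 1,
                       created_at=self.created_at)


# A plaintext memory promoted to parameter-level memory.
cube = MemCube(b"user prefers metric units", MemType.PLAINTEXT,
               provenance="chat:2025-07-01")
promoted = cube.migrate(MemType.PARAMETER)
```

The versioned, provenance-tagged record is what would let a scheduler decide when a memory is worth composing, migrating, or fusing, which is the lifecycle control the abstract contrasts with stateless RAG.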
Problem

Research questions and friction points this paper is trying to address.

Lack of memory management in LLMs hinders long-context reasoning.
Static parameters limit tracking user preferences over time.
Current systems fail to manage heterogeneous knowledge efficiently.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MemOS as a memory operating system for LLMs.
Unifies the representation and scheduling of plaintext, activation-based, and parameter-level memory.
Uses MemCubes as composable, migratable units for flexible memory management.
👥 Authors
Zhiyu Li
Tianjin University
Robust control, Attitude control
Shichao Song
MemTensor (Shanghai) Technology Co., Ltd.
Chenyang Xi
Beijing Institute of Technology
Reinforcement Learning
Hanyu Wang
MemTensor (Shanghai) Technology Co., Ltd.
Chen Tang
MemTensor (Shanghai) Technology Co., Ltd.
Simin Niu
MemTensor (Shanghai) Technology Co., Ltd.
Ding Chen
Postdoctoral Scholar, University of Texas Southwestern Medical Center
Jiawei Yang
MemTensor (Shanghai) Technology Co., Ltd.
Chunyu Li
MemTensor (Shanghai) Technology Co., Ltd.
Qingchen Yu
MemTensor (Shanghai) Technology Co., Ltd.
Jihao Zhao
MemTensor (Shanghai) Technology Co., Ltd.
Yezhaohui Wang
MemTensor (Shanghai) Technology Co., Ltd.
Peng Liu
Renmin University of China
Zehao Lin
MemTensor (Shanghai) Technology Co., Ltd.
Pengyuan Wang
MemTensor (Shanghai) Technology Co., Ltd.
Jiahao Huo
Tongji University
Multimodal AI, Interpretability, Natural Language Processing
Tianyi Chen
MemTensor (Shanghai) Technology Co., Ltd.
Kai Chen
MemTensor (Shanghai) Technology Co., Ltd.
Kehang Li
MemTensor (Shanghai) Technology Co., Ltd.
Zhen Tao
Technical University of Munich
Usable Privacy and Security, Software Engineering
Junpeng Ren
MemTensor (Shanghai) Technology Co., Ltd.
Huayi Lai
MemTensor (Shanghai) Technology Co., Ltd.
Hao Wu
MemTensor (Shanghai) Technology Co., Ltd.
Bo Tang
MemTensor (Shanghai) Technology Co., Ltd.
Zhengren Wang
Peking University
Generative AI, Retrieval-Augmented Generation, Combinatorial optimization