AI Summary
To address the high development barrier, poor cross-language portability, inefficient symmetry handling, and suboptimal contraction ordering common in tensor network software, this paper introduces Cytnx, an open-source library. Cytnx features a unified dual-language interface with nearly identical C++ and Python syntax, modeled on NumPy and PyTorch. Built around a *Network* abstraction, it provides an automatic contraction scheduler that determines efficient contraction orders, and it natively supports tensors carrying multiple global Abelian symmetries. It also integrates NVIDIA cuQuantum for GPU acceleration. Benchmark results show that Cytnx matches or outperforms manually ordered contraction schemes on both CPU and GPU while substantially reducing implementation effort. As a result, Cytnx provides a high-performance, user-friendly, and extensible tensor network computing infrastructure for classical and quantum many-body physics simulations.
Abstract
We introduce Cytnx (pronounced "sci-tens"), a tensor network library designed for classical and quantum physics simulations. The library provides an almost identical interface and syntax in both C++ and Python, allowing users to switch between the two languages effortlessly. To shorten the learning curve for newcomers to tensor network algorithms, the interfaces resemble popular Python scientific libraries such as NumPy, SciPy, and PyTorch. Not only can multiple global Abelian symmetries be easily defined and implemented, but Cytnx also provides a new tool called Network that lets users store large tensor networks and perform tensor network contractions in an automatically determined optimal order. Through the integration of cuQuantum, tensor calculations can also be executed efficiently on GPUs. We present benchmark results for tensor operations on both CPU and GPU, and discuss features and higher-level interfaces to be added in the future.
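To illustrate why an automatic contraction order matters (the service the Network tool provides), here is a minimal sketch using NumPy's `einsum_path` rather than Cytnx's actual API: for a chain of matrices of unequal sizes, a well-chosen pairwise order avoids ever forming the largest intermediate, while yielding the same result as naive left-to-right evaluation.

```python
# Concept sketch: automatic search for an efficient contraction order.
# This uses NumPy's einsum machinery, NOT the Cytnx Network API.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 64))   # small x large
B = rng.standard_normal((64, 64))  # large x large
C = rng.standard_normal((64, 8))   # large x small

# Ask NumPy to search for an efficient pairwise contraction order.
path, info = np.einsum_path("ij,jk,kl->il", A, B, C, optimize="optimal")

# The optimized order produces the same result as naive evaluation.
opt = np.einsum("ij,jk,kl->il", A, B, C, optimize=path)
naive = (A @ B) @ C
assert np.allclose(opt, naive)
print(opt.shape)  # (8, 8)
```

In Cytnx, the analogous step is handled for the user: the tensor network is described once, and the library schedules the contraction order automatically.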