Larger-than-memory image processing

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses memory exhaustion and I/O bottlenecks in processing petabyte-scale image datasets—such as 1.4 PB electron microscopy volumes or 150 TB organ atlases—by introducing a streaming single-pass architecture based on a sweep execution model. The approach aligns disk reads with a one-dimensional sweep order and combines windowed operations with overlap-aware tiling to enable efficient processing under tight memory constraints. A domain-specific language (DSL) is designed to automatically optimize window sizes, fuse pipeline stages, and schedule multi-pass sweeps at compile time and runtime. The system supports Zarr, HDF5, and slice-based formats without requiring full-image residency in memory, achieving significantly higher throughput, near-linear I/O scaling, and predictable memory usage while seamlessly integrating with existing segmentation and morphological analysis toolchains.

📝 Abstract
This report addresses larger-than-memory image analysis for petascale datasets such as 1.4 PB electron-microscopy volumes and 150 TB human-organ atlases. We argue that performance is fundamentally I/O-bound and that structuring analysis as streaming passes over the data is therefore crucial. For 3D volumes, two on-disk representations are popular: stacks of 2D slices (e.g., directories of images or multi-page TIFF) and 3D chunked layouts (e.g., Zarr/HDF5). While a few algorithms require a chunked on-disk layout to keep disk I/O at a minimum, we show how a slice-based streaming architecture can be built on top of either representation in a manner that minimizes disk I/O. This is particularly advantageous for algorithms relying on neighbouring values, since the slice-based streaming architecture is 1D, which implies that there are only two possible sweep orders, both aligned with the order in which slices are read from disk. This is in contrast to 3D chunks, where no sweep can be performed without accessing each chunk at least 9 times. We formalize this with sweep-based execution (natural 2D/3D orders), windowed operations, and overlap-aware tiling to minimize redundant access. Building on these principles, we introduce a domain-specific language (DSL) that encodes algorithms together with intrinsic knowledge of their optimal streaming and memory use; the DSL performs compile-time and run-time pipeline analyses to automatically select window sizes, fuse stages, tee and zip streams, and schedule passes for limited-RAM machines, yielding near-linear I/O scans and predictable memory footprints. The approach integrates with existing tooling for segmentation and morphology but reframes pre/post-processing as pipelines that privilege sequential read/write patterns, delivering substantial throughput gains for extremely large images without requiring full-volume residency in memory.
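To make the sweep idea concrete, the following is a minimal sketch (not the paper's DSL) of slice-aligned streaming with a windowed operation: slices are consumed in the single 1D sweep order in which they sit on disk, and only a 3-slice window is resident at any time. The function names (`stream_slices`, `sweep_mean3`) and the use of an in-memory NumPy array as a stand-in for a Zarr/HDF5/TIFF source are assumptions for illustration only.

```python
import numpy as np

def stream_slices(volume):
    """Yield 2D slices in disk order. Here they come from an in-memory
    array; in a real pipeline each slice would be read lazily from a
    Zarr/HDF5/slice-based store, so memory use stays O(window size)."""
    for z in range(volume.shape[0]):
        yield volume[z]

def sweep_mean3(slices):
    """Single-pass 3-slice windowed mean along z. Only three slices are
    held in memory at a time; only valid (full) windows are emitted,
    so a Z-slice input yields Z-2 output slices."""
    window = []
    for s in slices:
        window.append(s.astype(np.float64))
        if len(window) == 3:
            yield (window[0] + window[1] + window[2]) / 3.0
            window.pop(0)  # slide the window forward by one slice

# Toy 5x2x2 volume standing in for an out-of-core dataset.
vol = np.arange(5 * 2 * 2, dtype=np.float64).reshape(5, 2, 2)
out = np.stack(list(sweep_mean3(stream_slices(vol))))
# out[k] is the mean of slices k, k+1, k+2
```

Because both the read order and the sweep order are the same 1D traversal, each slice is fetched from disk exactly once; the same filter over a 3D-chunked layout would touch every chunk once per overlapping window position.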
Problem

Research questions and friction points this paper is trying to address.

larger-than-memory
image processing
I/O-bound
petascale datasets
streaming architecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

streaming architecture
out-of-core processing
domain-specific language
I/O optimization
sweep-based execution
Jon Sporring
University of Copenhagen
Image processing · Machine learning · Microscopy
David Stansby
Department of Mechanical Engineering, University College London, London, United Kingdom