🤖 AI Summary
Constructing RLSLP-based indexes for large-scale repetitive texts suffers from high time and memory overhead during grammar compression. Method: This paper proposes the first practical RLSLP construction algorithm with improved compression-time complexity. Its core innovation is an efficient recompression procedure that leverages an LZ77 approximation, integrated with syntax-directed optimization and computation carried out directly in the compressed domain, eliminating costly decompression. Contribution/Results: Compared to state-of-the-art uncompressed indexing methods, our approach achieves up to 46× speedup and 17× memory reduction on large repetitive corpora. It is the first method to translate the theoretically efficient RLSLP indexing framework into a scalable practical solution at the terabyte level, enabling real-world deployment on massive repetitive datasets.
📝 Abstract
Compressed indexing enables powerful queries over massive, repetitive textual datasets using space proportional to the compressed input. While theoretical advances have led to highly efficient index structures, their practical construction remains a bottleneck, especially for complex components like the recompression RLSLP, a grammar-based representation crucial for building powerful text indexes that support widely used suffix array queries. In this work, we present the first implementation of recompression RLSLP construction that runs in compressed time, operating on an LZ77-like approximation of the input. Compared to state-of-the-art uncompressed-time methods, our approach achieves up to 46× speedup and 17× lower RAM usage on large, repetitive inputs. These gains unlock scalability to larger datasets and affirm compressed computation as a practical path forward for fast index construction.
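To make the central object concrete: an RLSLP (run-length straight-line program) is a grammar in which every rule is a terminal character, a concatenation of two symbols, or a run-length repetition of one symbol, and expanding the start symbol reproduces the original text. The sketch below is an illustrative toy example, not the paper's construction algorithm; the grammar, symbol names, and rule encoding are invented for demonstration.

```python
# Toy RLSLP for the repetitive string "abababab$".
# Rule kinds: ("pair", X, Y) concatenates X and Y;
# ("run", X, k) repeats the expansion of X k times;
# symbols absent from the rule table are terminals.
# (Hypothetical example grammar, not from the paper.)
rules = {
    "A": ("pair", "a", "b"),  # A -> ab
    "R": ("run", "A", 4),     # R -> A^4 = abababab
    "S": ("pair", "R", "$"),  # S -> R$   (start symbol)
}

def expand(sym: str) -> str:
    """Decompress a symbol by recursively expanding its rule."""
    rule = rules.get(sym)
    if rule is None:  # terminal character
        return sym
    if rule[0] == "pair":
        return expand(rule[1]) + expand(rule[2])
    return expand(rule[1]) * rule[2]  # run-length rule

print(expand("S"))  # -> abababab$
```

The run-length rules are what let such grammars stay small on highly repetitive inputs: a run of n copies costs one rule regardless of n, which is why RLSLP size can track the compressed (rather than raw) input size.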