🤖 AI Summary
This paper studies the Longest Palindromic Substring (LPS) problem in the Massively Parallel Computation (MPC) model. We propose a simple, randomized O(1)-round algorithm that, with high probability, computes all maximal palindromic substrings, not just a single longest one. The algorithm achieves optimal total time and memory of O(n), improving on the Õ(n) total memory of Gilbert, Hajiaghayi, Saleh, and Seddighin [SPAA 2023], while using only O(n^{1−ε}) memory per machine for any ε ∈ (0, 0.5]. Moreover, our new insights allow the constraint ε ∈ (0, 0.5] to be bypassed in the Adaptive MPC model. Technically, we combine rolling hash-based string comparison, parallel center expansion, and load balancing to ensure correctness within tight asymptotic bounds. These results establish new state-of-the-art efficiency for LPS computation in massively parallel settings.
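Rolling hash-based string comparison here refers to the standard Karp–Rabin fingerprinting idea: after linear-time preprocessing, any substring can be tested for being a palindrome in constant time by comparing its forward hash against the hash of the same range read in the reversed string. The sketch below is a sequential illustration of that primitive only, not the paper's MPC algorithm; the base and modulus are illustrative choices (in practice the base is drawn at random to make collisions unlikely).

```python
MOD = (1 << 61) - 1   # large prime modulus
BASE = 131            # polynomial hash base (randomized in practice)

def build_hashes(s: str):
    """Prefix hashes of s and of reversed(s), plus powers of BASE."""
    n = len(s)
    fwd = [0] * (n + 1)   # fwd[i] = hash of s[0:i]
    rev = [0] * (n + 1)   # rev[i] = hash of reversed(s)[0:i]
    pw = [1] * (n + 1)    # pw[i] = BASE**i mod MOD
    for i, (a, b) in enumerate(zip(s, reversed(s))):
        fwd[i + 1] = (fwd[i] * BASE + ord(a)) % MOD
        rev[i + 1] = (rev[i] * BASE + ord(b)) % MOD
        pw[i + 1] = pw[i] * BASE % MOD
    return fwd, rev, pw

def is_palindrome(s, fwd, rev, pw, l, r):
    """True (w.h.p.) iff s[l..r], inclusive, is a palindrome.

    A hash mismatch is a definitive "no"; a match is correct with
    high probability over the choice of hash parameters.
    """
    n = len(s)
    length = r - l + 1
    h_fwd = (fwd[r + 1] - fwd[l] * pw[length]) % MOD
    # s[l..r] reversed occupies positions n-1-r .. n-1-l of reversed(s).
    h_rev = (rev[n - l] - rev[n - 1 - r] * pw[length]) % MOD
    return h_fwd == h_rev
```

Constant-time palindrome tests of this kind are what make it feasible for each machine to check many candidate centers without re-reading whole substrings.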
📝 Abstract
In the classical longest palindromic substring (LPS) problem, we are given a string $S$ of length $n$, and the task is to output a longest palindromic substring of $S$. Gilbert, Hajiaghayi, Saleh, and Seddighin [SPAA 2023] showed how to solve the LPS problem in the Massively Parallel Computation (MPC) model in $\mathcal{O}(1)$ rounds using $\widetilde{\mathcal{O}}(n)$ total memory, with $\widetilde{\mathcal{O}}(n^{1-\varepsilon})$ memory per machine, for any $\varepsilon \in (0,0.5]$.
We present a simple and optimal algorithm that solves the LPS problem in the MPC model in $\mathcal{O}(1)$ rounds. The total time and memory are $\mathcal{O}(n)$, with $\mathcal{O}(n^{1-\varepsilon})$ memory per machine, for any $\varepsilon \in (0,0.5]$. A key attribute of our algorithm is that it computes all maximal palindromes within the same complexity bounds. Furthermore, our new insights allow us to bypass the constraint $\varepsilon \in (0,0.5]$ in the Adaptive MPC model. Both our algorithms and the one proposed by Gilbert et al. for the LPS problem are randomized and succeed with high probability.
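For intuition about the objects being computed: a string of length $n$ has $2n-1$ palindromic centers (each character, and each gap between adjacent characters), and the maximal palindrome at a center is the longest palindrome centered there; a longest palindromic substring is simply a longest maximal palindrome. A minimal sequential sketch by naive center expansion ($\mathcal{O}(n^2)$ worst case, purely illustrative and unrelated to the paper's MPC techniques) could look like:

```python
def all_maximal_palindromes(s: str) -> list[tuple[int, int]]:
    """(start, length) of the maximal palindrome at each of the 2n-1 centers."""
    def expand(left: int, right: int) -> tuple[int, int]:
        # Grow outward while the window remains a palindrome.
        while left >= 0 and right < len(s) and s[left] == s[right]:
            left -= 1
            right += 1
        return left + 1, right - left - 1

    out = []
    for i in range(len(s)):
        out.append(expand(i, i))          # odd length, centered on s[i]
        if i + 1 < len(s):
            out.append(expand(i, i + 1))  # even length, centered between i, i+1
    return out

def longest_palindromic_substring(s: str) -> str:
    start, length = max(all_maximal_palindromes(s), key=lambda p: p[1])
    return s[start:start + length]
```

Computing all maximal palindromes is strictly more informative than reporting one LPS, which is why matching the same round, time, and memory bounds for the harder task is notable.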