AI Summary
Early Chinese opera videos suffer from low resolution, low frame rates, and motion blur due to the limitations of historical recording hardware; existing space-time video super-resolution (STVSR) methods struggle to model the large-scale, expressive motions of opera performance and lack domain-specific training data. To address this, we propose a Mamba-based multiscale STVSR framework: (1) we introduce the Chinese Opera Video Clip (COVC) dataset, the first large-scale opera video dataset; (2) we design a Global Fusion Module (GFM) that employs multiscale alternating scanning to capture long-range motion dependencies; and (3) we integrate a Multiscale Synergistic Mamba Module (MSMM) with a MambaVR feature-rectification block to improve inter-frame alignment accuracy and high-frequency detail recovery. Evaluated on COVC, our method achieves an average 1.86 dB PSNR gain over state-of-the-art STVSR approaches, markedly improving visual quality in large-motion scenarios. This work establishes a new paradigm for the digital archiving of traditional Chinese opera.
Abstract
Chinese opera is celebrated for preserving classical art. However, the limitations of early filming equipment (e.g., low frame rates and low resolution) degraded last-century recordings of performances by renowned artists, hindering archival efforts. Although space-time video super-resolution (STVSR) has advanced significantly, applying it directly to opera videos remains challenging. The scarcity of datasets impedes the recovery of high-frequency details, and existing STVSR methods lack global modeling capabilities, compromising visual quality on opera's characteristic large motions. To address these challenges, we pioneer a large-scale Chinese Opera Video Clip (COVC) dataset and propose a Mamba-based multiscale fusion network for space-time Opera Video Super-Resolution (MambaOVSR). Specifically, MambaOVSR comprises three novel components: a Global Fusion Module (GFM) that models motion through a multiscale alternating scanning mechanism, a Multiscale Synergistic Mamba Module (MSMM) that aligns features across different sequence lengths, and a MambaVR block that resolves feature artifacts and positional information loss during alignment. Experimental results on the COVC dataset show that MambaOVSR outperforms the state-of-the-art STVSR method by an average of 1.86 dB in PSNR. The dataset and code will be publicly released.
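To make the "multiscale alternating scanning" idea concrete, the following is a minimal sketch of how such scan orders could be generated: spatiotemporal features are flattened into 1-D token sequences at several scales, with the scan direction reversed at alternating scales so a selective-scan (Mamba-style) model sees both forward and backward long-range context. All names, the scale choices, and the direction-alternation rule here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def alternating_scan_orders(T, H, W, scales=(1, 2)):
    """Illustrative multiscale alternating scan orders (not the paper's API).

    For each scale s, the T x (H//s) x (W//s) spatiotemporal grid is
    flattened into a 1-D index sequence; consecutive scales reverse the
    scan direction, approximating bidirectional long-range scanning.
    """
    orders = []
    for i, s in enumerate(scales):
        idx = np.arange(T * (H // s) * (W // s))
        if i % 2 == 1:  # alternate the scan direction at every other scale
            idx = idx[::-1]
        orders.append(idx)
    return orders

# Example: 4 frames of 8x8 features, scanned at scales 1 and 2
orders = alternating_scan_orders(4, 8, 8)
print([(o[0], o[-1], len(o)) for o in orders])
```

A model would gather tokens in each of these orders, run a state-space scan over every sequence, and fuse the results, which is one simple way to inject global (cross-frame, cross-scale) context that sliding-window attention or local convolution cannot capture.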