Efficient Model Agnostic Approach for Implicit Neural Representation Based Arbitrary-Scale Image Super-Resolution

📅 2023-11-20
🏛️ arXiv.org
🤖 AI Summary
Existing implicit neural methods for arbitrary-scale image super-resolution (SR) struggle to balance reconstruction fidelity and computational efficiency. To address this, we propose MoEISR—a Mixture-of-Experts-based implicit SR framework. First, it employs implicit neural representations to bypass fixed scaling-factor constraints, enabling continuous-scale SR. Second, it introduces a lightweight mapper-driven pixel-wise dynamic routing mechanism that adaptively assigns heterogeneous expert networks based on local region complexity, facilitating collaborative decoding. Third, it incorporates multi-scale feature disentanglement to enhance representational capacity. Compared to state-of-the-art implicit approaches, MoEISR reduces FLOPs by up to 73% while maintaining or surpassing their PSNR performance. Crucially, it achieves consistent high-fidelity reconstruction and computational efficiency across arbitrary scaling factors—unifying quality and speed without scale-specific design.
📝 Abstract
Single image super-resolution (SISR) has experienced significant advancements, primarily driven by deep convolutional networks. Traditional networks, however, are limited to upscaling images to a fixed scale, leading to the utilization of implicit neural functions for generating arbitrarily scaled images. Nevertheless, these methodologies have imposed substantial computational demands as they involve querying every target pixel to a single resource-intensive decoder. In this paper, we introduce a novel and efficient framework, the Mixture of Experts Implicit Super-Resolution (MoEISR), which enables super-resolution at arbitrary scales with significantly increased computational efficiency without sacrificing reconstruction quality. MoEISR dynamically allocates the most suitable decoding expert to each pixel using a lightweight mapper module, allowing experts with varying capacities to reconstruct pixels across regions with diverse complexities. Our experiments demonstrate that MoEISR reduces floating point operations (FLOPs) by up to 73% while delivering comparable or superior peak signal-to-noise ratio (PSNR).
Problem

Research questions and friction points this paper is trying to address.

Achieving arbitrary-scale image super-resolution with the efficiency of a single model
Reducing the computational demands of implicit neural super-resolution methods
Maintaining reconstruction quality while significantly reducing FLOPs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic expert allocation for pixel decoding
Lightweight mapper module for efficient routing
Mixture-of-experts framework reducing computational costs
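The routing idea above — a lightweight mapper assigning each queried pixel to an expert decoder of matching capacity — can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the network sizes, random weights, and argmax routing are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    # Toy MLP with random weights (hypothetical; stands in for a trained decoder).
    return [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def forward(params, x):
    # Plain feed-forward pass with ReLU on all but the last layer.
    for k, (W, b) in enumerate(params):
        x = x @ W + b
        if k < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

feat_dim = 16
# Experts of varying capacity: a light, a medium, and a heavy decoder.
experts = [mlp([feat_dim + 2, 8, 3]),
           mlp([feat_dim + 2, 32, 3]),
           mlp([feat_dim + 2, 64, 64, 3])]
mapper = mlp([feat_dim, 8, len(experts)])  # lightweight routing module

# Query N target pixels: a local feature vector plus a continuous (x, y)
# coordinate, as in implicit-neural-representation SR.
N = 5
feats = rng.standard_normal((N, feat_dim))
coords = rng.uniform(-1.0, 1.0, (N, 2))

# The mapper assigns one expert per pixel (hard argmax routing here).
route = forward(mapper, feats).argmax(axis=1)

# Each expert decodes only the pixels routed to it, so heavy decoders
# run on a subset of pixels rather than every query.
inp = np.concatenate([feats, coords], axis=1)
out = np.zeros((N, 3))
for e, params in enumerate(experts):
    mask = route == e
    if mask.any():
        out[mask] = forward(params, inp[mask])

print(out.shape)  # one RGB value per queried pixel
```

The FLOPs saving in the paper comes from exactly this masking: flat regions go through small experts, and only complex regions pay for the large decoder.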
Young Jae Oh
Hanyang University, Seoul, South Korea
Jihun Kim
Hanyang University, Seoul, South Korea
Tae Hyun Kim
Dept. of Computer Science, Hanyang University
Computational Imaging · Computer Vision · Machine Learning