🤖 AI Summary
Object removal requires high-fidelity reconstruction of occluded background regions without regenerating the removed object. Existing dataset-free methods that redirect self-attention inside the mask suffer from foreground-background confusion, mistaking non-target foregrounds for background, and from loss of detail caused by direct attention manipulation. This paper proposes EraseLoRA, a dataset-free framework that replaces attention surgery with background-aware reasoning: a multimodal large language model (MLLM) parses a single image-mask pair to discriminate the target foreground, non-target foregrounds, and clean background (Background-aware Foreground Exclusion), and a test-time optimization integrates the inferred background subtypes as complementary cues through reconstruction and alignment objectives (Background-aware Reconstruction with Subtype Aggregation). The method plugs into pretrained diffusion models, consistently outperforms dataset-free baselines across multiple benchmarks, and is competitive with dataset-driven methods in both local texture fidelity and global structural coherence.
📝 Abstract
Object removal differs from generic inpainting: it must prevent the masked target from reappearing and reconstruct the occluded background with structural and contextual fidelity, rather than merely filling the hole plausibly. Recent dataset-free approaches that redirect self-attention inside the mask fail in two ways: non-target foregrounds are often misinterpreted as background, which regenerates unwanted objects, and direct attention manipulation disrupts fine details and hinders coherent integration of background cues. We propose EraseLoRA, a novel dataset-free framework that replaces attention surgery with background-aware reasoning and test-time adaptation. First, Background-aware Foreground Exclusion (BFE) uses a multimodal large language model to separate the target foreground, non-target foregrounds, and clean background from a single image-mask pair without paired supervision, producing reliable background cues while excluding distractors. Second, Background-aware Reconstruction with Subtype Aggregation (BRSA) performs test-time optimization that treats the inferred background subtypes as complementary pieces and enforces their consistent integration through reconstruction and alignment objectives, preserving local detail and global structure without explicit attention intervention. We validate EraseLoRA as a plug-in to pretrained diffusion models across object-removal benchmarks, demonstrating consistent improvements over dataset-free baselines and competitive results against dataset-driven methods. The code will be made available upon publication.
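To make the subtype-aggregation idea concrete, here is a minimal toy sketch in plain NumPy. It is **not** the paper's method: EraseLoRA optimizes adapter weights inside a pretrained diffusion model at test time, whereas this sketch fits scalar mixing weights over per-subtype pixel statistics. The function name `subtype_aggregated_fill`, the single-cue-per-subtype design, and the use of the visible background mean as the consistency target are all illustrative assumptions; only the overall shape (background subtypes as complementary cues, combined by a test-time reconstruction/alignment objective) reflects the abstract.

```python
import numpy as np

def subtype_aggregated_fill(image, hole, subtypes, steps=200, lr=0.5):
    """Toy test-time optimization: fill `hole` from background-subtype cues.

    image    : (H, W) float array (grayscale for simplicity)
    hole     : (H, W) bool mask of the removed target object
    subtypes : list of (H, W) bool masks over clean-background regions
               (assumed to come from an upstream MLLM mask parse)

    Each subtype contributes its mean intensity as a cue; gradient descent
    fits mixing weights w so the aggregated fill agrees with the visible
    background -- a scalar stand-in for the paper's reconstruction and
    alignment objectives.
    """
    cues = np.array([image[m].mean() for m in subtypes])   # one cue per subtype
    target = image[~hole].mean()                           # global-consistency target
    w = np.full(len(cues), 1.0 / len(cues))                # uniform init
    for _ in range(steps):
        err = w @ cues - target                            # scalar residual
        w -= lr * 2.0 * err * cues                         # GD on (w @ cues - target)^2
    filled = image.copy()
    filled[hole] = w @ cues                                # aggregate subtype cues
    return filled, w
```

Because the loss is a one-dimensional quadratic in `w @ cues`, the residual shrinks geometrically and the hole converges to the visible-background statistic; in the real framework the same role is played by optimizing low-rank adapters of the diffusion backbone rather than pixel means.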