🤖 AI Summary
This study addresses the challenge of accurately estimating the weight of commercial and industrial waste from images, where visual similarity coupled with substantial density variations and distance-dependent scale distortions hinders reliable prediction. To overcome this, the authors propose a physics-informed multimodal fusion model that integrates RGB images with metadata—including object dimensions, camera distance, and height. The architecture employs a Vision Transformer for visual feature extraction, a dedicated encoder for geometric and categorical information, and a Stacked Mutual Attention Fusion mechanism to effectively combine modalities, trained with a Mean Squared Logarithmic Error (MSLE) loss. Key contributions include Waste-Weight-10K, the first large-scale real-world waste weight dataset; an interpretable prediction module; and state-of-the-art performance, achieving 88.06 kg MAE, 6.39% MAPE, and 0.9548 R² on the test set, with a notably low MAE of 2.38 kg for lightweight waste (0–100 kg).
📝 Abstract
Accurate weight estimation of commercial and industrial waste is important for efficient operations, yet image-based estimation remains difficult because similar-looking objects may have different densities, and the visible size changes with camera distance. To address this problem, we propose the Multimodal Weight Predictor (MWP) framework, which estimates waste weight by combining RGB images with physics-informed metadata, including object dimensions, camera distance, and camera height. We also introduce Waste-Weight-10K, a real-world dataset containing 10,421 synchronized image–metadata pairs collected from logistics and recycling sites. The dataset covers 11 waste categories and a wide weight range from 3.5 to 3,450 kg. Our model uses a Vision Transformer for visual features and a dedicated metadata encoder for geometric and category information, combining them with Stacked Mutual Attention Fusion, which allows visual and physical cues to guide each other. This helps the model manage perspective effects and link objects to material properties. To ensure stable performance across the wide weight range, we train the model using a Mean Squared Logarithmic Error (MSLE) loss. On the test set, the proposed method achieves 88.06 kg Mean Absolute Error (MAE), 6.39% Mean Absolute Percentage Error (MAPE), and an R² of 0.9548. The model shows strong accuracy for light objects in the 0–100 kg range with 2.38 kg MAE and 3.1% MAPE, and maintains reliable performance for heavy waste in the 1,000–2,000 kg range with 11.1% MAPE. Finally, we incorporate a physically grounded explanation module using Shapley Additive Explanations (SHAP) and a large language model to provide clear, human-readable explanations for each prediction.
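The MSLE loss and the MAPE metric mentioned in the abstract can be sketched as follows. This is a minimal illustration of the standard definitions, not the authors' implementation; the function names are ours:

```python
import math

def msle(y_true, y_pred):
    # MSLE = mean over samples of (log(1 + pred) - log(1 + true))^2.
    # The log compresses the scale, so a given *relative* error on a
    # 2,000 kg load and on a 20 kg load contribute comparable loss
    # terms -- useful for a dataset spanning 3.5 to 3,450 kg.
    return sum((math.log1p(p) - math.log1p(t)) ** 2
               for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    # Mean Absolute Percentage Error (%), one of the reported metrics.
    return 100.0 * sum(abs(p - t) / t
                       for t, p in zip(y_true, y_pred)) / len(y_true)
```

Unlike plain MSE, which would let the heaviest loads dominate the gradient, MSLE keeps the objective balanced across the full weight range, which is consistent with the low 2.38 kg MAE reported for the 0–100 kg bucket.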