HydroVision: Predicting Optically Active Parameters in Surface Water Using Computer Vision

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the high cost and multispectral-instrumentation dependence of real-time monitoring for six optically active surface water parameters: chlorophyll-a, chlorophylls, colored dissolved organic matter (CDOM), phycocyanin, suspended sediment, and turbidity. We propose HydroVision, a lightweight, deployable deep learning framework trained on a large-scale, seasonally diverse RGB image dataset. Through systematic transfer-learning evaluation of five architectures (VGG-16, ResNet-50, MobileNetV2, DenseNet-121, and a Vision Transformer (ViT)), DenseNet-121 is identified as the optimal backbone. HydroVision achieves, for the first time, simultaneous non-contact retrieval of all six parameters from a single RGB image, attaining an R² of 0.89 for CDOM estimation and substantially outperforming conventional empirical models. The framework enables low-cost, high-temporal-resolution monitoring, establishing a scalable vision-based paradigm for early pollution detection and rapid regulatory response.

📝 Abstract
Ongoing advancements in computer vision, particularly in pattern recognition and scene classification, have enabled new applications in environmental monitoring. Deep learning now offers non-contact methods for assessing water quality and detecting contamination, both critical for disaster response and public health protection. This work introduces HydroVision, a deep learning-based scene classification framework that estimates optically active water quality parameters including Chlorophyll-Alpha, Chlorophylls, Colored Dissolved Organic Matter (CDOM), Phycocyanins, Suspended Sediments, and Turbidity from standard Red-Green-Blue (RGB) images of surface water. HydroVision supports early detection of contamination trends and strengthens monitoring by regulatory agencies during external environmental stressors, industrial activities, and force majeure events. The model is trained on more than 500,000 seasonally varied images collected from the United States Geological Survey Hydrologic Imagery Visualization and Information System between 2022 and 2024. This approach leverages widely available RGB imagery as a scalable, cost-effective alternative to traditional multispectral and hyperspectral remote sensing. Four state-of-the-art convolutional neural networks (VGG-16, ResNet50, MobileNetV2, DenseNet121) and a Vision Transformer are evaluated through transfer learning to identify the best-performing architecture. DenseNet121 achieves the highest validation performance, with an R² score of 0.89 in predicting CDOM, demonstrating the framework's promise for real-world water quality monitoring across diverse conditions. While the current model is optimized for well-lit imagery, future work will focus on improving robustness under low-light and obstructed scenarios to expand its operational utility.
Problem

Research questions and friction points this paper is trying to address.

Estimating water quality parameters from RGB images
Detecting contamination trends using computer vision
Providing cost-effective alternative to traditional sensing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep learning framework for water quality parameters
Uses RGB images with convolutional neural networks
Leverages transfer learning for best architecture performance
Shubham Laxmikant Deshmukh
Department of Computer Science, Virginia Tech, Arlington, VA, USA
Matthew Wilchek
Department of Computer Science, Virginia Tech, Arlington, VA, USA
Feras A. Batarseh
Virginia Tech
Research areas: AI Assurance, AI for Agricultural Policy, Cyberbiosecurity, Intelligent Water Systems, Context and Causality