🤖 AI Summary
To address weak cross-regional generalization and constrained computational resources in edge-based wildlife monitoring, this paper proposes a geography-aware, conditional-computation lightweight Vision Transformer (ViT). The method introduces, for the first time, a geographic-embedding-guided conditional subnetwork scheduling mechanism for edge ViTs, combining structured expert pruning with location-adaptive activation to dynamically invoke subnetworks tailored to local ecological characteristics. Trained jointly on multi-source, geographically diverse datasets (iNaturalist and iWildcam), the model matches full-model accuracy at only 30% of the original computational cost, attains a 2.1× speedup in edge inference latency, and improves mean Average Precision (mAP) by 4.7%, significantly enhancing both cross-regional recognition robustness and deployment efficiency.
📝 Abstract
Efficient on-device models have become attractive for near-sensor insight generation, and are of particular interest to the ecological conservation community. For this reason, deep learning researchers are proposing ever more approaches for building lower-compute models. However, since vision transformers are relatively new to the edge use case, several approaches remain unexplored, most notably conditional execution of subnetworks based on input data. In this work, we explore the training of a single species detector that uses conditional computation to bias structured subnetworks in a geographically aware manner. We propose a method for pruning the expert model per location and demonstrate conditional-computation performance on two geographically distributed datasets: iNaturalist and iWildcam.
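The core idea the abstract describes, a geographic embedding gating which structured subnetworks (experts) execute, can be sketched in a few lines. The sketch below is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the gating matrix `W_gate`, the per-expert MLPs, and the dimensions are all hypothetical, and real ViT experts would be transformer blocks rather than single matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 4   # number of structured sub-networks (hypothetical)
D_GEO = 8       # geographic-embedding dimension (hypothetical)
D_FEAT = 16     # token feature dimension (hypothetical)
TOP_K = 2       # experts kept per location, i.e. the pruning budget

# Hypothetical learned parameters: a gate mapping the location
# embedding to per-expert scores, plus one small expert matrix each.
W_gate = rng.normal(size=(D_GEO, N_EXPERTS))
experts = [rng.normal(size=(D_FEAT, D_FEAT)) / np.sqrt(D_FEAT)
           for _ in range(N_EXPERTS)]

def geo_gate(geo_emb, top_k=TOP_K):
    """Score experts from the geographic embedding and keep top_k.

    Returns (indices of active experts, softmax weights over them).
    """
    scores = geo_emb @ W_gate
    active = np.argsort(scores)[-top_k:]      # location-adaptive pruning
    w = np.exp(scores[active] - scores[active].max())
    return active, w / w.sum()

def conditional_forward(x, geo_emb):
    """Run only the experts selected for this location."""
    active, weights = geo_gate(geo_emb)
    out = np.zeros_like(x)
    for idx, w in zip(active, weights):
        out += w * np.tanh(x @ experts[idx])  # only TOP_K of N_EXPERTS run
    return out, active

# Usage: different camera-trap locations may activate different experts.
x = rng.normal(size=(D_FEAT,))
out_a, active_a = conditional_forward(x, rng.normal(size=(D_GEO,)))
out_b, active_b = conditional_forward(x, rng.normal(size=(D_GEO,)))
```

The compute saving comes from the loop touching only `TOP_K` of the `N_EXPERTS` experts per input; pruning per location, as the abstract proposes, amounts to fixing the active set once per deployment site so the unused experts can be removed from the on-device model entirely.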