🤖 AI Summary
This work proposes an interpretable multi-task learning framework for parasite detection that addresses a key limitation of current deep learning approaches: their lack of explainability with respect to the morphological features clinicians actually use. By integrating fine-grained morphological supervision—shape, curvature, visible dot count, flagellum presence, and developmental stage—directly into the detection pipeline, the method jointly localizes parasites and predicts structured, biologically meaningful attributes. Evaluated on a newly curated clinical dataset covering three parasite species, the approach not only improves detection performance but also produces explanations that align with medical reasoning, going beyond post-hoc interpretability techniques such as saliency maps.
📝 Abstract
Parasitic infections remain a pressing global health challenge, particularly in low-resource settings where diagnosis still depends on labor-intensive manual inspection of blood smears and the availability of expert domain knowledge. While deep learning models have shown strong performance in automating parasite detection, their clinical usefulness is constrained by limited interpretability. Existing explainability methods are largely restricted to visual heatmaps or attention maps, which highlight regions of interest but fail to capture the morphological traits that clinicians rely on for diagnosis. In this work, we present MorphXAI, an explainable framework that unifies parasite detection with fine-grained morphological analysis. MorphXAI integrates morphological supervision directly into the prediction pipeline, enabling the model to localize parasites while simultaneously characterizing clinically relevant attributes such as shape, curvature, visible dot count, flagellum presence, and developmental stage. To support this task, we curate a clinician-annotated dataset of three parasite species (Leishmania, Trypanosoma brucei, and Trypanosoma cruzi) with detailed morphological labels, establishing a new benchmark for interpretable parasite analysis. Experimental results show that MorphXAI not only improves detection performance over the baseline but also provides structured, biologically meaningful explanations.
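The abstract does not detail MorphXAI's internal architecture, but the core idea—a shared backbone whose features feed both a detection head and several morphological attribute heads—can be sketched minimally. The following numpy sketch is illustrative only: all layer sizes, head names, and the random-projection "backbone" are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared backbone: a single random projection standing in for a real
# feature extractor (hypothetical 128-d input, 64-d shared features).
W_shared = rng.normal(size=(128, 64))

# Task-specific heads; output sizes are illustrative assumptions.
W_box = rng.normal(size=(64, 4))    # bounding-box regression (x, y, w, h)
W_shape = rng.normal(size=(64, 3))  # shape-class logits
W_flag = rng.normal(size=(64, 1))   # flagellum-presence logit
W_stage = rng.normal(size=(64, 4))  # developmental-stage logits

def forward(x):
    """Shared features feed every head, so detection and morphological
    attributes are predicted jointly from one representation."""
    h = np.maximum(x @ W_shared, 0.0)  # shared representation (ReLU)
    return {
        "box": h @ W_box,
        "shape": h @ W_shape,
        "flagellum": h @ W_flag,
        "stage": h @ W_stage,
    }

# A joint training objective would combine the per-head losses, e.g.
# L_total = L_box + sum_a lambda_a * L_a, with hypothetical weights lambda_a.

x = rng.normal(size=(2, 128))  # two image crops (placeholder features)
out = forward(x)
```

The point of the shared backbone is that gradients from the morphological heads shape the same features used for localization, which is how attribute supervision can improve detection rather than merely annotate it.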