🤖 AI Summary
To address the challenge of text provenance attribution arising from LLM misuse, this paper introduces fine-grained text origin identification—distinguishing among human-written, LLM-generated, LLM-polished, and machine-translated texts. We propose HERO, a hierarchical, length-robust framework featuring: (i) a subcategory-guided mechanism that enhances fine-grained discriminability; (ii) a length-adaptive ensemble of expert models that improves generalization across varying text lengths; and (iii) multi-model prediction coupled with fine-grained supervised training. Experiments across five mainstream LLMs and six domain-specific datasets show that HERO achieves an average mAP 2.5–3.0 percentage points higher than state-of-the-art methods. By moving beyond conventional binary detection paradigms, HERO delivers both high accuracy and model interpretability, offering a principled technical foundation for responsible LLM content governance.
📝 Abstract
Large Language Models (LLMs) can be used to write or modify documents, presenting a challenge for understanding the intent behind their use. For example, a benign use may involve applying an LLM to a human-written document to improve its grammar or to translate it into another language. However, a document produced entirely by an LLM may be more likely to spread misinformation than a simple translation (e.g., through use by malicious actors or simply through hallucination). Prior work in Machine-Generated Text (MGT) detection mostly focuses on identifying whether a document was human- or machine-written, ignoring these fine-grained uses. In this paper, we introduce a HiErarchical, length-RObust machine-influenced text detector (HERO), which learns to separate text samples of varying lengths into four primary types: human-written, machine-generated, machine-polished, and machine-translated. HERO accomplishes this by combining predictions from length-specialist models trained with Subcategory Guidance. Specifically, for categories that are easily confused (e.g., different source languages), our Subcategory Guidance module encourages separation of the fine-grained subcategories, boosting performance. Extensive experiments across five LLMs and six domains demonstrate the benefits of HERO, which outperforms the state of the art by 2.5–3 mAP on average.
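The length-adaptive ensemble described above can be sketched in miniature. This is a hedged illustration only: the expert stubs, the sigmoid gate, and the pivot length are assumptions introduced for clarity, not HERO's actual architecture, which combines length-specialist models trained with Subcategory Guidance.

```python
import math

# Illustrative sketch of a length-adaptive ensemble (all numbers below are
# hypothetical stand-ins, not the paper's trained models).
CLASSES = ["human-written", "machine-generated",
           "machine-polished", "machine-translated"]

def expert_short(text):
    # Stand-in for a classifier specialized on short texts (assumed output).
    return [0.55, 0.25, 0.15, 0.05]

def expert_long(text):
    # Stand-in for a classifier specialized on long texts (assumed output).
    return [0.20, 0.50, 0.20, 0.10]

def gate(n_tokens, pivot=128, scale=32):
    """Sigmoid gate: the weight on the long-text expert grows with length."""
    return 1.0 / (1.0 + math.exp(-(n_tokens - pivot) / scale))

def predict_origin(text):
    """Blend expert probabilities according to the text's length."""
    w_long = gate(len(text.split()))
    probs = [(1 - w_long) * s + w_long * l
             for s, l in zip(expert_short(text), expert_long(text))]
    return CLASSES[probs.index(max(probs))], probs
```

For a very short input the gate stays near zero, so the short-text specialist dominates the blended prediction; in the paper, the experts themselves are trained with Subcategory Guidance rather than stubbed with fixed distributions.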