Language Models to Support Multi-Label Classification of Industrial Data

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the multi-label classification of industrial requirement documents, tackling two key challenges: scarce labeled data and a complex hierarchical category structure spanning six abstraction levels. We propose a hierarchy-aware label distance metric $D_n$, enabling an embedding-similarity-based zero-shot classification framework. We systematically evaluate nine lightweight (≤3B) and five large-scale (≤70B) language models. Results show that medium- and small-scale models—particularly T5-xl and BERT-base—outperform larger models: T5-xl achieves the highest $F_\beta$ = 0.78 ($D_n$ = 0.04) across five of six label spaces, while BERT-base attains $F_\beta$ = 0.83 in one space. The metric $D_n$ effectively captures hierarchical consistency and informs model selection. Our core contribution is the empirical validation of lightweight models’ superiority in industrial zero-shot multi-label classification, and the first integration of a hierarchy-aware distance metric into the evaluation methodology for this task.
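The embedding-similarity zero-shot approach described above can be sketched in miniature. The vectors, label names, and threshold below are hypothetical stand-ins for illustration, not the paper's actual models or data; a real system would obtain the embeddings from one of the evaluated language models:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def zero_shot_multilabel(req_vec, label_vecs, threshold=0.5):
    """Assign every label whose embedding is similar enough to the requirement.

    No training data is needed: labels are matched purely by embedding
    similarity, which is the essence of zero-shot classification.
    """
    return [label for label, vec in label_vecs.items()
            if cosine(req_vec, vec) >= threshold]

# Toy 3-dimensional "embeddings" (hypothetical; a real pipeline would
# encode label descriptions and requirement text with an LM encoder).
label_vecs = {
    "safety":   [0.9, 0.1, 0.0],
    "power":    [0.1, 0.9, 0.1],
    "software": [0.0, 0.2, 0.9],
}
req_vec = [0.8, 0.3, 0.1]  # embedding of one requirement text
print(zero_shot_multilabel(req_vec, label_vecs))  # → ['safety']
```

Because the threshold decides how many labels each requirement receives, it directly trades precision against recall, which is why the paper reports $F_\beta$ rather than accuracy alone.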

📝 Abstract
Multi-label requirements classification is a challenging task, especially when dealing with numerous classes at varying levels of abstraction. The difficulty increases when only a limited number of requirements is available to train a supervised classifier. Zero-shot learning (ZSL) does not require training data and can potentially address this problem. This paper investigates the performance of zero-shot classifiers (ZSCs) on a multi-label industrial dataset. We focus on classifying requirements according to a taxonomy designed to support requirements tracing. We compare multiple variants of ZSCs using different embeddings, including 9 language models (LMs) with a reduced number of parameters (up to 3B), e.g., BERT, and 5 large LMs (LLMs) with a large number of parameters (up to 70B), e.g., Llama. Our ground truth includes 377 requirements and 1968 labels from 6 output spaces. For the evaluation, we adopt traditional metrics, i.e., precision, recall, F1, and $F_\beta$, as well as a novel label distance metric $D_n$. This aims to better capture the classification's hierarchical nature and to provide a more nuanced evaluation of how far the results are from the ground truth. 1) The top-performing model on 5 out of 6 output spaces is T5-xl, with maximum $F_\beta$ = 0.78 and $D_n$ = 0.04, while BERT-base outperformed the other models in one case, with maximum $F_\beta$ = 0.83 and $D_n$ = 0.04. 2) LMs with smaller parameter size produce the best classification results compared to LLMs. Thus, addressing the problem in practice is feasible, as limited computing power is needed. 3) The model architecture (autoencoding, autoregressive, and sequence-to-sequence) significantly affects the classifier's performance. We conclude that using ZSL for multi-label requirements classification offers promising results. We also present a novel metric that can be used to select the top-performing model for this problem.
Problem

Research questions and friction points this paper is trying to address.

Evaluating zero-shot classifiers for multi-label industrial data classification
Comparing performance of small vs. large language models on limited training data
Introducing a novel metric for hierarchical multi-label classification evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero-shot learning for multi-label classification
Comparison of 9 small and 5 large language models
Novel hierarchical metric $D_n$ evaluates classification accuracy