Evaluating Large Language Models for Workload Mapping and Scheduling in Heterogeneous HPC Systems

📅 2025-11-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses task mapping and scheduling on heterogeneous high-performance computing (HPC) systems as a constraint optimization problem formulated from natural language descriptions, a novel challenge for large language models (LLMs). Method: 21 state-of-the-art LLMs are systematically evaluated using natural language prompt engineering, explicit constraint modeling, manually derived optimal-solution benchmarks, executable code validation, and chain-of-reasoning analysis. Contribution/Results: three models achieve the theoretical optimum; twelve produce solutions within 5% of optimal; all generate feasible schedules; and eighteen exhibit logically coherent reasoning traces. These results demonstrate that leading LLMs can produce interpretable, near-optimal scheduling policies, establishing their feasibility as human-AI collaborative decision-support tools for system-level HPC optimization and filling a gap in empirical LLM evaluation for infrastructure-aware scheduling tasks.

📝 Abstract
Large language models (LLMs) are increasingly explored for their reasoning capabilities, yet their ability to perform structured, constraint-based optimization from natural language remains insufficiently understood. This study evaluates twenty-one publicly available LLMs on a representative heterogeneous high-performance computing (HPC) workload mapping and scheduling problem. Each model received the same textual description of system nodes, task requirements, and scheduling constraints, and was required to assign tasks to nodes, compute the total makespan, and explain its reasoning. A manually derived analytical optimum of nine hours and twenty seconds served as the ground truth reference. Three models exactly reproduced the analytical optimum while satisfying all constraints, twelve achieved near-optimal results within two minutes of the reference, and six produced suboptimal schedules with arithmetic or dependency errors. All models generated feasible task-to-node mappings, though only about half maintained strict constraint adherence. Nineteen models produced partially executable verification code, and eighteen provided coherent step-by-step reasoning, demonstrating strong interpretability even when logical errors occurred. Overall, the results define the current capability boundary of LLM reasoning in combinatorial optimization: leading models can reconstruct optimal schedules directly from natural language, but most still struggle with precise timing, data transfer arithmetic, and dependency enforcement. These findings highlight the potential of LLMs as explainable co-pilots for optimization and decision-support tasks rather than autonomous solvers.
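The scheduling task the abstract describes, assigning tasks to heterogeneous nodes and computing the resulting makespan, can be sketched in miniature. The node speeds, task costs, and greedy longest-task-first policy below are invented for illustration and do not reproduce the paper's actual problem instance or its analytical optimum.

```python
# Hypothetical toy instance: heterogeneous nodes with different speeds,
# tasks with compute costs, and a greedy mapping that minimizes each
# task's finish time. All values are illustrative, not from the paper.

# Per-node speed factors (work units processed per hour).
nodes = {"cpu_node": 1.0, "gpu_node": 4.0}

# Per-task compute cost (work units).
tasks = {"t1": 8.0, "t2": 4.0, "t3": 2.0}

def greedy_schedule(tasks, nodes):
    """Assign each task (largest first) to the node where it finishes earliest."""
    finish = {n: 0.0 for n in nodes}   # accumulated busy time per node (hours)
    mapping = {}
    for name, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        best = min(nodes, key=lambda n: finish[n] + cost / nodes[n])
        finish[best] += cost / nodes[best]
        mapping[name] = best
    return mapping, max(finish.values())   # makespan = latest node finish time

mapping, makespan = greedy_schedule(tasks, nodes)
```

In this toy instance the greedy policy places the two largest tasks on the fast GPU node and the smallest on the CPU node; the paper's problem additionally involves dependencies and data-transfer costs, which this sketch omits.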
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' capability for structured optimization from natural language descriptions
Assessing LLMs on HPC workload mapping and scheduling with multiple constraints
Determining LLMs' boundary in combinatorial optimization and constraint adherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs perform workload mapping from natural language descriptions
Models generate executable verification code for scheduling solutions
LLMs provide explainable reasoning for optimization decisions
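The "executable verification code" mentioned above checks a proposed schedule against the stated constraints. A minimal sketch of that idea, with hypothetical memory limits standing in for whatever constraints the paper's prompt specified:

```python
# Sketch of a schedule verifier: confirm every task is assigned exactly
# once and that no node's memory capacity is oversubscribed. The node
# capacities and task requirements are hypothetical examples.

node_mem = {"cpu_node": 64, "gpu_node": 32}   # available memory per node (GB)
task_mem = {"t1": 16, "t2": 16, "t3": 8}      # required memory per task (GB)

def verify(mapping):
    """Return True iff the task-to-node mapping satisfies all constraints."""
    if set(mapping) != set(task_mem):         # every task mapped, no extras
        return False
    used = {n: 0 for n in node_mem}
    for task, node in mapping.items():
        used[node] += task_mem[task]          # sum memory demand per node
    return all(used[n] <= node_mem[n] for n in node_mem)
```

A real verifier for the paper's problem would also check task dependencies and recompute the makespan independently of the model's arithmetic.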
Aasish Kumar Sharma
University of Göttingen
High Performance Computing
Julian Kunkel
Faculty of Mathematics and Computer Science, Georg-August-Universität Göttingen, Germany