LLM-Supported Natural Language to Bash Translation

📅 2025-02-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing NL2SH (natural language to Bash command) evaluation is hindered by low-quality test data and unreliable heuristics for judging the functional equivalence of Bash commands. To address these limitations, this work contributes: (1) a functional equivalence heuristic that combines command execution with LLM evaluation of command outputs, determining equivalence with 95% confidence, a 16% increase over previous heuristics; (2) the largest human-annotated NL2SH datasets to date, comprising 600 manually verified test pairs and 40,939 training pairs; and (3) an evaluation of popular LLMs showing that parsing, in-context learning, in-weight learning, and constrained decoding can improve NL2SH translation accuracy by up to 32%. All data and code are publicly released.

📝 Abstract
The Bourne-Again Shell (Bash) command-line interface for Linux systems has complex syntax and requires extensive specialized knowledge. Using the natural language to Bash command (NL2SH) translation capabilities of large language models (LLMs) for command composition circumvents these issues. However, the NL2SH performance of LLMs is difficult to assess due to inaccurate test data and unreliable heuristics for determining the functional equivalence of Bash commands. We present a manually verified test dataset of 600 instruction-command pairs and a training dataset of 40,939 pairs, increasing the size of previous datasets by 441% and 135%, respectively. Further, we present a novel functional equivalence heuristic that combines command execution with LLM evaluation of command outputs. Our heuristic can determine the functional equivalence of two Bash commands with 95% confidence, a 16% increase over previous heuristics. Evaluation of popular LLMs using our test dataset and heuristic demonstrates that parsing, in-context learning, in-weight learning, and constrained decoding can improve NL2SH accuracy by up to 32%. Our findings emphasize the importance of dataset quality, execution-based evaluation, and translation method for advancing NL2SH translation. Our code is available at https://github.com/westenfelder/NL2SH
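The functional equivalence heuristic described in the abstract — executing both commands and falling back to LLM judgment of their outputs — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sandboxing here is just a fresh temporary directory, and `llm_judge_equivalent` is a hypothetical stub standing in for an LLM API call.

```python
import subprocess
import tempfile

def run_in_sandbox(cmd: str, timeout: int = 5):
    """Execute a Bash command in a fresh temporary directory,
    returning its exit code and captured stdout."""
    with tempfile.TemporaryDirectory() as workdir:
        proc = subprocess.run(
            ["bash", "-c", cmd],
            cwd=workdir, capture_output=True, text=True, timeout=timeout,
        )
        return proc.returncode, proc.stdout

def llm_judge_equivalent(cmd_a, out_a, cmd_b, out_b) -> bool:
    """Hypothetical placeholder: prompt an LLM with both commands and
    their outputs, and ask whether they are functionally equivalent."""
    raise NotImplementedError("wire up an LLM API of your choice here")

def functionally_equivalent(cmd_a: str, cmd_b: str) -> bool:
    rc_a, out_a = run_in_sandbox(cmd_a)
    rc_b, out_b = run_in_sandbox(cmd_b)
    if rc_a == rc_b and out_a == out_b:
        # Identical exit codes and outputs: equivalent without the LLM.
        return True
    # Outputs differ textually; defer to LLM semantic judgment.
    return llm_judge_equivalent(cmd_a, out_a, cmd_b, out_b)
```

The execution step catches commands that are trivially equivalent or trivially different, so the (more expensive, less reliable) LLM judge is only consulted for the ambiguous cases.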
Problem

Research questions and friction points this paper is trying to address.

Improving NL2SH translation accuracy
Validating Bash command equivalence
Enhancing dataset quality for training
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based NL2SH translation
Manually verified dataset
Functional equivalence heuristic
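One of the translation methods evaluated, in-context learning, amounts to prepending a few instruction–command demonstrations to the prompt before the target instruction. A minimal sketch of such a prompt builder is shown below; the demonstration pairs and prompt wording are illustrative, not taken from the paper.

```python
# Illustrative few-shot demonstrations (not from the paper's datasets).
FEW_SHOT = [
    ("list all files including hidden ones", "ls -a"),
    ("count the lines in data.txt", "wc -l data.txt"),
]

def build_nl2sh_prompt(instruction: str) -> str:
    """Assemble a few-shot NL2SH prompt: demonstrations first,
    then the target instruction with an open 'Command:' slot
    for the LLM to complete."""
    parts = ["Translate the instruction into a single Bash command.\n"]
    for nl, sh in FEW_SHOT:
        parts.append(f"Instruction: {nl}\nCommand: {sh}\n")
    parts.append(f"Instruction: {instruction}\nCommand:")
    return "\n".join(parts)
```

The resulting string would be sent to an LLM completion endpoint; constrained decoding (also evaluated in the paper) would additionally restrict generation to valid Bash syntax.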
Finnian Westenfelder
ALFA Group MIT-CSAIL, Draper Scholar
Erik Hemberg
Research Scientist, MIT CSAIL
Artificial Intelligence, Machine Learning, Evolutionary Computation
Miguel Tulla
ALFA Group MIT-CSAIL
Stephen Moskal
Massachusetts Institute Of Technology
Computer Engineering
Una-May O’Reilly
ALFA Group MIT-CSAIL
Silviu Chiricescu
Charles Stark Draper Laboratory