🤖 AI Summary
This work investigates the zero-shot capability of lightweight local large language models (LLMs) to generate correct Python code for LoRaWAN engineering tasks, specifically optimal UAV deployment and received-power computation.
Method: We systematically evaluate 16 LLMs (including Phi-4, LLaMA-3.3, GPT-4, and DeepSeek-V3) using zero-shot prompting, automated code extraction, execution-based validation, and multi-dimensional scoring (0–5) across correctness, robustness, and domain fidelity.
Contribution/Results: Our study provides the first empirical evidence that domain adaptability outweighs model scale in wireless communication code generation. Phi-4 and LLaMA-3.3 achieve accuracy comparable to GPT-4 and DeepSeek-V3 (error rate below 15%) and greater robustness than most mid- and small-scale models. We demonstrate the practical feasibility of lightweight LLMs for engineering code synthesis in resource-constrained and edge-deployed scenarios, establishing a new paradigm for edge intelligence in low-resource wireless systems.
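The received-power task the models are asked to solve can be illustrated with a minimal free-space link-budget sketch. This is a generic Friis-equation implementation, not the paper's reference solution; the default 868 MHz carrier is an assumption based on a common LoRaWAN EU band.

```python
import math

def received_power_dbm(p_tx_dbm: float, g_tx_dbi: float, g_rx_dbi: float,
                       d_m: float, f_hz: float = 868e6) -> float:
    """Received power in dBm under free-space path loss (Friis link budget).

    FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c), where the
    last term evaluates to about -147.55 dB for d in metres and f in Hz.
    """
    fspl_db = 20 * math.log10(d_m) + 20 * math.log10(f_hz) - 147.55
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db
```

For example, a 14 dBm transmitter with unity-gain antennas at 1 km yields roughly -77.2 dBm, a workable level for typical LoRaWAN receiver sensitivities.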
📝 Abstract
This paper investigates the performance of 16 Large Language Models (LLMs) in automating LoRaWAN-related engineering tasks, namely optimal drone placement and received-power calculation, under progressively complex zero-shot natural-language prompts. The primary research question is whether lightweight, locally executed LLMs can generate correct Python code for these tasks. To assess this, we compared locally run models against state-of-the-art alternatives, such as GPT-4 and DeepSeek-V3, which served as reference points. By extracting and executing the Python functions generated by each model, we evaluated their outputs on a zero-to-five scale. Results show that while DeepSeek-V3 and GPT-4 consistently provided accurate solutions, certain smaller models, particularly Phi-4 and LLaMA-3.3, also demonstrated strong performance, underscoring the viability of lightweight alternatives. Other models exhibited errors stemming from incomplete understanding or syntactic issues. These findings illustrate the potential of LLM-based approaches for specialized engineering applications while highlighting the need for careful model selection, rigorous prompt design, and targeted domain fine-tuning to achieve reliable outcomes.
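The extract-execute-score loop described in the abstract can be sketched as follows. The regex, the `score_reply` function name, and the exact rubric steps are illustrative assumptions, not the paper's actual harness; the 0–5 scale here simply climbs from "no code" through "runs" to "passes all test cases".

```python
import math
import re

def extract_python(reply: str):
    """Pull the first fenced Python block out of an LLM reply, or None."""
    m = re.search(r"```(?:python)?\n(.*?)```", reply, re.DOTALL)
    return m.group(1) if m else None

def score_reply(reply: str, fn_name: str, cases) -> int:
    """Illustrative 0-5 rubric (not the paper's exact scoring scheme)."""
    code = extract_python(reply)
    if code is None:
        return 0                 # no extractable code
    ns = {}
    try:
        exec(code, ns)           # NOTE: real harnesses should sandbox this
    except Exception:
        return 1                 # code extracted but fails to execute
    fn = ns.get(fn_name)
    if not callable(fn):
        return 2                 # runs, but expected function is missing
    passed = 0
    for args, expected in cases:
        try:
            if math.isclose(fn(*args), expected, rel_tol=1e-3):
                passed += 1
        except Exception:
            pass                 # runtime error on this case counts as a fail
    return 2 + round(3 * passed / len(cases))   # 2..5 by pass rate
```

Executing untrusted model output with `exec` is only acceptable in an isolated environment; a production harness would run each candidate in a subprocess with resource limits.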