📝 Abstract
This paper introduces a domain-specific large language model (LLM) for nuclear applications, built from the publicly accessible Essential CANDU textbook. Using a compact Transformer-based architecture, the model is trained on a single GPU so that the sensitive data inherent in nuclear operations never leaves the local environment. Despite the relatively small dataset, the model shows encouraging signs of capturing specialized nuclear vocabulary, though its generated text sometimes lacks syntactic coherence. By focusing exclusively on nuclear content, this approach demonstrates the feasibility of in-house LLM solutions that meet the rigorous cybersecurity and data confidentiality standards of the sector. Early successes in text generation underscore the model's utility for specialized tasks, while also revealing the need for richer corpora, more sophisticated preprocessing, and instruction fine-tuning to improve domain accuracy. Future directions include extending the dataset to cover diverse nuclear subtopics, refining tokenization to reduce noise, and systematically evaluating the model's readiness for real-world applications in the nuclear domain.
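The tokenization refinement mentioned above can be illustrated with a minimal sketch. This is not the paper's actual pipeline, and the vocabularies and function below are hypothetical: the idea is simply that extending a tokenizer's vocabulary with multi-word nuclear terms (e.g. "heavy water", "pressure tube") lets them survive as single tokens instead of being fragmented into noisy subword pieces.

```python
# Illustrative sketch only (assumed names, not the paper's implementation):
# a greedy longest-match tokenizer whose vocabulary is extended with
# nuclear-domain terms so they are emitted as single tokens.

BASE_VOCAB = {"the", "of", "in", "a", "is", "water", "heavy", "reactor"}
DOMAIN_TERMS = {"candu", "calandria", "moderator", "pressure tube", "heavy water"}

def tokenize(text, vocab):
    """Greedy longest-match tokenization over whitespace-split, lowercased words."""
    words = text.lower().split()
    tokens, i = [], 0
    while i < len(words):
        # Try the longest multi-word span first (up to 3 words here);
        # fall back to the single word if no span is in the vocabulary.
        for span in range(min(3, len(words) - i), 0, -1):
            candidate = " ".join(words[i:i + span])
            if candidate in vocab or span == 1:
                tokens.append(candidate)
                i += span
                break
    return tokens

vocab = BASE_VOCAB | DOMAIN_TERMS
print(tokenize("The heavy water moderator of a CANDU reactor", vocab))
# ['the', 'heavy water', 'moderator', 'of', 'a', 'candu', 'reactor']
```

In practice the same effect is achieved by adding domain terms to a subword tokenizer's vocabulary before training, so that frequent technical collocations are not split into uninformative fragments.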