TypePilot: Leveraging the Scala Type System for Secure LLM-generated Code

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate code containing latent security vulnerabilities, posing severe risks in high-assurance settings. This paper introduces TypePilot—a novel intelligent agent framework that deeply integrates static type systems into the LLM code generation pipeline. Grounded in Scala’s expressive type system and augmented with the Stainless formal verification tool, TypePilot establishes a type-driven, feedback-based generate-and-correct loop. Its core innovation lies in the first realization of *co-verification*: jointly enforcing type constraints and LLM generation, replacing heuristic prompt engineering with type-guided agent decisions to intrinsically mitigate injection and input validation vulnerabilities. Empirical evaluation across multiple state-of-the-art LLMs demonstrates that TypePilot significantly reduces security defect rates compared to both naive generation and safety-aware prompting baselines. Crucially, it enforces formal safety specifications, substantially improving correctness and trustworthiness of generated code for safety-critical systems.
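The contract style that Stainless checks can be sketched in plain Scala (an illustrative example, not code from the paper): preconditions via `require` and postconditions via `ensuring`. In standard Scala these contracts are enforced at runtime; under Stainless they become verification conditions that must be proved statically, which is what a TypePilot-style loop would feed back to the LLM on failure.

```scala
// Illustrative sketch, not code from the paper: require/ensuring contracts
// in the style Stainless verifies. A type-guided pipeline would ask the
// LLM for code in this form, run the verifier, and return any failed
// verification condition as a correction prompt.
object SafeDiv {
  def safeDiv(a: Int, b: Int): Int = {
    require(b != 0)                         // precondition: reject divide-by-zero
    a / b
  } ensuring (res => res * b + a % b == a)  // JVM integer-division invariant
}
```

Calling `SafeDiv.safeDiv(7, 0)` fails the precondition immediately rather than misbehaving downstream; under Stainless, a caller that cannot prove `b != 0` is rejected at verification time.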

📝 Abstract
Large Language Models (LLMs) have shown remarkable proficiency in code generation tasks across various programming languages. However, their outputs often contain subtle but critical vulnerabilities, posing significant risks when deployed in security-sensitive or mission-critical systems. This paper introduces TypePilot, an agentic AI framework designed to enhance the security and robustness of LLM-generated code by leveraging strongly typed and verifiable languages, using Scala as a representative example. We evaluate the effectiveness of our approach in two settings: formal verification with the Stainless framework and general-purpose secure code generation. Our experiments with leading open-source LLMs reveal that direct code generation often fails to enforce safety constraints, as does naive prompting for more secure code, whereas our type-focused agentic pipeline substantially mitigates input validation and injection vulnerabilities. The results demonstrate the potential of structured, type-guided LLM workflows to advance the state of the art in the trustworthiness of automated code generation in high-assurance domains.
Problem

Research questions and friction points this paper is trying to address.

Enhancing security of LLM-generated code using type systems
Mitigating vulnerabilities in AI-generated code through verification
Improving trustworthiness of automated code generation in critical systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages Scala type system for secure code
Uses agentic AI framework for robustness
Applies type-focused pipeline to mitigate vulnerabilities
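The feedback-based pipeline these bullets describe can be sketched as a small loop (a minimal Scala sketch under stated assumptions: `generate` stands in for the LLM call and `verify` for a verifier run such as Stainless; neither name is the paper's API):

```scala
// Hypothetical sketch of a type-guided generate-and-correct loop.
// `generate` and `verify` are illustrative stand-ins, not the paper's API.
def generateVerified(
    task: String,
    generate: String => String,              // prompt -> candidate code
    verify: String => Either[String, Unit],  // Left(error) or Right(ok)
    maxRounds: Int = 3
): Option[String] = {
  var prompt = task
  for (_ <- 1 to maxRounds) {
    val candidate = generate(prompt)
    verify(candidate) match {
      case Right(_)  => return Some(candidate)  // verifier accepted
      case Left(err) =>                         // feed the failure back
        prompt = s"$task\nThe verifier rejected the previous attempt:\n$err"
    }
  }
  None  // retry budget exhausted without a verified candidate
}
```

The design choice this illustrates: the verifier's error message, not a heuristic prompt tweak, drives each correction round, so the loop terminates either with code that passes verification or with an explicit failure.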
Alexander Sternfeld
Institute of Entrepreneurship & Management, HES-SO
Andrei Kucharavy
Assistant Professor, HES-SO Valais-Wallis
Machine Learning · Evolution · Distributed Computation · Computational Biology
Ljiljana Dolamic
Cyber-Defence Campus, armasuisse, Science and Technology