Exploring the Challenges and Opportunities of AI-assisted Codebase Generation

📅 2025-08-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study investigates real-world developer interactions with Codebase AI Assistants (CBAs) and identifies critical capability gaps. Through a counterbalanced user study, in-depth interviews, and a comparative analysis of leading commercial CBAs, we uncover six core usage challenges and five workflow integration barriers. Results show incomplete functionality as the predominant pain point (77% of participants), with low overall satisfaction (mean rating: 2.8/5). Empirically grounded, our work reveals a structural misalignment between current CBA capabilities and authentic development needs. We propose a next-generation CBA design framework centered on context awareness, progressive integration, and task adaptation. This is the first work to establish a validated taxonomy of human-CBA collaboration challenges and an actionable design opportunity map, providing both empirical foundations and methodological guidance for enhancing the usability and practical utility of AI-powered programming assistants.

๐Ÿ“ Abstract
Recent AI code assistants have significantly improved their ability to process more complex contexts and generate entire codebases from a textual description, compared to the popular snippet-level generation. These codebase AI assistants (CBAs) can also extend or adapt codebases, allowing users to focus on higher-level design and deployment decisions. While prior work has extensively studied the impact of snippet-level code generation, this new class of codebase generation models is relatively unexplored. Despite initial anecdotal reports of excitement about these agents, they remain less frequently adopted than snippet-level code assistants. To utilize CBAs better, we need to understand how developers interact with CBAs, and how and why CBAs fall short of developers' needs. In this paper, we explored these gaps through a counterbalanced user study and interviews with 16 students and developers working on coding tasks with CBAs. We found that participants varied the information in their prompts, such as problem description (48% of prompts), required functionality (98% of prompts), and code structure (48% of prompts), as well as their prompt writing process. Despite these varied strategies, overall satisfaction with generated codebases remained low (mean = 2.8, median = 3, on a scale of one to five). Participants mentioned missing functionality as the most common factor for dissatisfaction (77% of instances), alongside poor code quality (42% of instances) and communication issues (25% of instances). We delve deeper into participants' dissatisfaction to identify six underlying challenges that participants faced when using CBAs, and extract five barriers to incorporating CBAs into their workflows. Finally, we surveyed 21 commercial CBAs to compare their capabilities with participant challenges and present design opportunities for more efficient and useful CBAs.
Problem

Research questions and friction points this paper is trying to address.

Exploring challenges in AI-assisted codebase generation adoption
Understanding developer interactions with codebase AI assistants
Identifying gaps between CBA capabilities and developer needs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Counterbalanced user study (n = 16) of how developers prompt and evaluate CBAs
Taxonomy of six usage challenges and five workflow integration barriers
Survey of 21 commercial CBAs mapped against participant challenges to surface design opportunities
Philipp Eibl, University of Southern California
Sadra Sabouri, University of Southern California
Souti Chattopadhyay, Department of Computer Science, University of Southern California, Los Angeles, California