🤖 AI Summary
AI code completion research is hindered by the industrial sector’s monopoly on real-world interaction data, leaving academia without reproducible, scalable experimental platforms. Method: We introduce the first open-source code completion platform designed specifically for human–computer interaction (HCI) research, supporting JetBrains IDEs and integrating both inline code completion and context-aware conversational assistants. The platform employs a modular client–server architecture with a transparent, fine-grained telemetry framework that synchronously captures user actions and multimodal contextual signals—including source code, cursor position, and chat history. Results: It achieves end-to-end latency of 200 ms—meeting industrial-grade performance—and has been validated through expert evaluation and an 8-participant user study, confirming both practical utility and research suitability. Our core contribution is breaking down data barriers by providing a standardized, reproducible infrastructure to enable empirical investigation of human–AI collaboration mechanisms in code completion.
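To make the telemetry framework concrete: a platform that synchronously captures user actions alongside contextual signals (source code, cursor position, chat history) will typically serialize each interaction as a structured event. The sketch below is purely illustrative — the field names and `CompletionEvent` class are hypothetical, not taken from Code4MeV2's actual schema.

```python
# Hypothetical sketch of a fine-grained telemetry record, assuming a
# JSON-over-HTTP client-server design; all names are illustrative and
# do NOT reflect Code4MeV2's real data model.
import json
import time
from dataclasses import dataclass, field, asdict


@dataclass
class CompletionEvent:
    event_type: str           # e.g. "completion_shown", "completion_accepted"
    timestamp_ms: int         # client-side capture time
    file_language: str        # language of the active editor
    cursor_line: int          # caret position when the event fired
    cursor_column: int
    prefix_context: str       # code preceding the caret (context signal)
    suggestion: str           # text proposed by the model
    accepted: bool            # whether the user took the suggestion
    chat_history: list = field(default_factory=list)  # prior chat turns

    def to_json(self) -> str:
        # Flatten to JSON so the server can log it as-is.
        return json.dumps(asdict(self))


event = CompletionEvent(
    event_type="completion_accepted",
    timestamp_ms=int(time.time() * 1000),
    file_language="python",
    cursor_line=42,
    cursor_column=8,
    prefix_context="def fib(n):\n    ",
    suggestion="return n if n < 2 else fib(n - 1) + fib(n - 2)",
    accepted=True,
)
print(event.to_json())
```

Capturing the action (`accepted`) and its context (`prefix_context`, `cursor_*`, `chat_history`) in one synchronized record is what enables the fine-grained HCI analyses the summary describes.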
📝 Abstract
The adoption of AI-powered code completion tools in software development has increased substantially, yet the user interaction data produced by these systems remain proprietary within large corporations. This creates a barrier for the academic community, as researchers must often develop dedicated platforms to conduct studies on human–AI interaction, making reproducible research and large-scale data analysis impractical. In this work, we introduce Code4MeV2, a research-oriented, open-source code completion plugin for JetBrains IDEs, as a solution to this limitation. Code4MeV2 is designed using a client–server architecture and features inline code completion and a context-aware chat assistant. Its core contribution is a modular and transparent data collection framework that gives researchers fine-grained control over telemetry and context gathering. Code4MeV2 achieves industry-comparable code completion performance, with an average latency of 200 ms. We assess our tool through a combination of an expert evaluation and a user study with eight participants. Feedback from both researchers and daily users highlights its informativeness and usefulness. We invite the community to adopt and contribute to this tool. More information about the tool can be found at https://app.code4me.me.