Abstract
Incorporating natural language input has the potential to improve the capabilities of biomedical data discovery interfaces. However, user interface elements and visualizations remain powerful tools for interacting with data, even in the new world of generative AI. Our prototype system, YAC (Yet Another Chatbot), bridges the gap between natural language and interactive visualizations by generating structured declarative output with a multi-agent system and interpreting that output to render linked interactive visualizations and apply data filters. Furthermore, we include widgets, which allow users to adjust the values of that structured output through user interface elements. We reflect on the capabilities and design of this system with an analysis of its technical dimensions and illustrate its capabilities through four usage scenarios.
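To make the closed loop concrete, the sketch below shows one way the pattern described above could work: a structured declarative output (here a plain dictionary) is interpreted to filter data, and a widget object stands in for a UI element that lets the user adjust values inside that same output before the loop re-runs. All names (`apply_filter`, `RangeWidget`, the dictionary schema) are hypothetical illustrations, not YAC's actual API.

```python
# Hypothetical sketch of the declarative-output loop: an agent emits a
# structured response; the frontend interprets it to filter data; a widget
# mutates values in the response so the user can refine the result.

def apply_filter(rows, spec):
    """Keep rows whose value in spec['column'] lies in [spec['min'], spec['max']]."""
    col, lo, hi = spec["column"], spec["min"], spec["max"]
    return [r for r in rows if lo <= r[col] <= hi]

class RangeWidget:
    """Stand-in for a UI slider bound to one filter inside the declarative output."""
    def __init__(self, output, filter_key="filter"):
        self.output = output
        self.filter_key = filter_key

    def set_range(self, lo, hi):
        # Editing the structured output directly keeps the chart spec,
        # the filter, and the UI control in sync.
        self.output[self.filter_key]["min"] = lo
        self.output[self.filter_key]["max"] = hi

# Example declarative output an agent might produce (schema is illustrative).
rows = [{"age": 25}, {"age": 45}, {"age": 70}]
output = {
    "chart": {"type": "histogram", "x": "age"},
    "filter": {"column": "age", "min": 30, "max": 60},
}

print(apply_filter(rows, output["filter"]))  # only the age-45 row passes

widget = RangeWidget(output)
widget.set_range(20, 80)                     # user widens the range via the widget
print(apply_filter(rows, output["filter"]))  # all three rows now pass
```

The key design point this illustrates is that the widget does not act on the chart directly; it edits the shared declarative output, so re-interpreting that output keeps the rendered visualization and the applied filters consistent.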