Probing the Design Space: Parallel Versions for Exploratory Programming

📅 2025-02-15
🏛️ The Art, Science, and Engineering of Programming
📈 Citations: 0
Influential: 0
🤖 AI Summary
In exploratory programming, fragmented feedback and inefficient comparison hinder iterative development; current informal practices, such as relying on memory, code comments, or manual screenshots, introduce errors and impede reproducibility. To address this, the authors propose Exploriants, an extension to example-based live programming. It introduces a "variation point" mechanism that lets programmers fork alternative versions of a program, collects the probe outputs each version produces, and presents them in a comparison view that programmers can customize to their program domain, turning unstructured experimentation into a traceable, structured iteration process. The authors demonstrate Exploriants' capabilities in three case studies on image processing, data processing, and game development, finding that variation points and the comparison view encourage a structured approach to exploring program variations and remove the need for error-prone ad-hoc comparison methods.

📝 Abstract
Exploratory programming involves open-ended tasks. To evaluate their progress on these, programmers require frequent feedback and means to tell if the feedback they observe is bringing them in the right direction. Collecting, comparing, and sharing feedback is typically done through ad-hoc means: relying on memory to compare outputs, code comments, or manual screenshots. To approach this issue, we designed Exploriants: an extension to example-based live programming. Exploriants allows programmers to place variation points. It collects outputs captured in probes and presents them in a comparison view that programmers can customize to suit their program domain. We find that the addition of variation points and the comparison view encourages a structured approach to exploring variations of a program. We demonstrate Exploriants' capabilities and applicability in three case studies on image processing, data processing, and game development. With Exploriants, exploratory programmers have a straightforward means to evaluate their progress and do not have to rely on ad-hoc methods that may introduce errors.
Problem

Research questions and friction points this paper is trying to address.

Exploratory programmers lack structured means to explore and compare program variations
Open-ended tasks demand frequent, real-time feedback on progress
Ad-hoc comparison methods (memory, code comments, screenshots) are error-prone and hard to reproduce
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extension to example-based live programming
Introduces variation points for program exploration
Customizable comparison view for outputs
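To make the idea concrete, here is a minimal, hypothetical sketch of a variation-point mechanism in the spirit described above: register named variants of a piece of code, run them all on the same example input, and collect their outputs side by side for comparison. The names `VariationPoint`, `variant`, and `compare` are illustrative assumptions, not Exploriants' actual API, and this sketch omits the live-programming and probe-visualization aspects of the real system.

```python
# Hypothetical sketch of a "variation point": names and API are
# illustrative only, not taken from Exploriants itself.

class VariationPoint:
    """Holds named variants of a computation and compares their outputs."""

    def __init__(self, name):
        self.name = name
        self.variants = {}  # label -> callable

    def variant(self, label):
        """Decorator that registers a function as one variant."""
        def register(fn):
            self.variants[label] = fn
            return fn
        return register

    def compare(self, *args):
        """Run every variant on the same example input, side by side."""
        return {label: fn(*args) for label, fn in self.variants.items()}


# Example: explore two brightness adjustments on the same pixel values.
brighten = VariationPoint("brighten")

@brighten.variant("add")
def _(pixels):
    # Variant 1: add a fixed offset, clamped to 255.
    return [min(p + 40, 255) for p in pixels]

@brighten.variant("scale")
def _(pixels):
    # Variant 2: scale by 1.2 (integer arithmetic), clamped to 255.
    return [min(p * 6 // 5, 255) for p in pixels]

comparison = brighten.compare([10, 100, 250])
```

A comparison view in this spirit would render `comparison` with one column per variant, so the effect of each alternative on the same example is visible at a glance instead of being reconstructed from memory or screenshots.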