🤖 AI Summary
Large language models (LLMs) often produce generic, suboptimal outputs, such as templated emails, when no explicit objective is provided, limiting their ability to satisfy individual users' needs. To address this, the authors propose inducing just-in-time (JIT) objectives: the system passively infers a user's in-the-moment objective from implicit interaction behavior, then steers both the LLM's generation process and its output evaluation against that inferred objective. The approach combines passive behavioral observation, objective induction, objective-steered generation, and objective-based evaluation, enabling on-the-fly creation of personalized tools. Experiments on participants' own tasks show that the method achieves 66-86% user-preference win rates over typical LLM baselines across multiple tasks. Crucially, it generates functionally and semantically distinct, domain-specific tools for each individual user, improving the precision and responsiveness of LLM outputs.
📝 Abstract
Large language models promise a broad set of functions, but when not given a specific objective, they default to milquetoast results such as drafting emails littered with clichés. We demonstrate that inferring the user's in-the-moment objective, then rapidly optimizing for that singular objective, enables LLMs to produce tools, interfaces, and responses that are more responsive and desirable. We contribute an architecture for automatically inducing just-in-time objectives by passively observing user behavior, then steering downstream AI systems through generation and evaluation against this objective. Inducing just-in-time objectives (e.g., "Clarify the abstract's research contribution") enables automatic generation of tools, e.g., those that critique a draft based on relevant HCI methodologies, anticipate related researchers' reactions, or surface ambiguous terminology. In a series of experiments (N=14, N=205) on participants' own tasks, JIT objectives enable LLM outputs that achieve 66-86% win rates over typical LLMs, and in-person use sessions (N=17) confirm that JIT objectives produce specialized tools unique to each participant.