🤖 AI Summary
This paper studies how to sustain incentive compatibility in dynamic environments where agents learn the underlying state from allocation outcomes generated by repeated use of a fixed mechanism. It proposes "calibrated mechanism design," a framework that decouples information disclosure from allocation by splitting the mechanism into two stages: first, a signal structure reveals information about the state; second, a state-independent static allocation rule is applied. The paper establishes the theoretical foundation of this framework and proves that, in the single-agent case, its implementable set coincides exactly with the set of all incentive-compatible mechanisms. By integrating information design, Bayesian mechanism design, and convex optimization, it derives necessary and sufficient conditions characterizing calibrated mechanisms. Under private values, full transparency is optimal while standard surplus extraction fails. The framework is given a rigorous microfoundation through infinite-horizon repeated interactions. Finally, the paper demonstrates that history-dependent mechanisms expand feasibility only in non-quasilinear settings.
📝 Abstract
We study mechanism design when a designer repeatedly uses a fixed mechanism to interact with strategic agents who learn from observing their allocations. We introduce a static framework, calibrated mechanism design, which requires mechanisms to remain incentive compatible given the information they reveal about an underlying state through repeated use. In single-agent settings, we prove that implementable outcomes correspond to two-stage mechanisms: the designer first discloses information about the state, then commits to a state-independent allocation rule. This yields a tractable procedure, combining information design and mechanism design, for characterizing calibrated mechanisms. In private-values environments, full transparency is optimal and correlation-based surplus extraction fails. We provide a microfoundation by showing that calibrated mechanisms characterize exactly what is implementable when an infinitely patient agent repeatedly interacts with the same mechanism. Dynamic mechanisms that condition on histories expand the set of implementable outcomes only by weakening incentive compatibility and individual rationality, a distinction that vanishes in transferable-utility settings.
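To make the two-stage structure concrete, here is a minimal sketch with hypothetical payoffs (all numbers are illustrative assumptions, not from the paper): the state is fully disclosed in stage one, and a state-independent allocation rule is applied in stage two. Under private values the agent's payoff does not depend on the state, so incentive compatibility must simply hold state-by-state once the state is revealed.

```python
# Toy sketch (hypothetical payoffs): a two-stage "calibrated" mechanism
# with full disclosure of a binary state, followed by a state-independent
# allocation rule. We check that truthful reporting remains a best
# response under every revealed state.

from itertools import product

states = [0, 1]      # underlying state theta, fully revealed in stage one
types = ["L", "H"]   # agent's private type

# Stage two: state-independent rule mapping a report to
# (allocation probability, transfer). Numbers are illustrative.
rule = {"L": (0.3, 0.1), "H": (0.9, 0.5)}

def utility(alloc_prob, transfer, t, theta):
    # Private values: the agent's value depends only on its own type,
    # so full transparency about theta costs the designer nothing here.
    value = {"L": 0.4, "H": 1.0}[t]
    return alloc_prob * value - transfer

def incentive_compatible(rule):
    # With the state fully disclosed, IC must hold state-by-state:
    # no type gains by misreporting in any revealed state.
    for theta, t, report in product(states, types, types):
        truth = utility(*rule[t], t, theta)
        deviation = utility(*rule[report], t, theta)
        if deviation > truth + 1e-12:
            return False
    return True

print(incentive_compatible(rule))  # True for these payoffs
```

A rule that over-rewards the low report, e.g. `{"L": (0.9, 0.0), "H": (0.3, 0.5)}`, fails the same check, since the high type prefers to misreport.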