🤖 AI Summary
This work addresses the performance bottleneck of type normalization in dependently typed checking. We conduct the first direct, comparable empirical evaluation of syntax-directed versus type-directed normalisation-by-evaluation (NbE) strategies in a real-world system. Using a unified benchmarking platform, we quantify runtime overheads across representative dependent typing tasks and find that the type-directed approach is 1.8–3.2× slower on average, primarily due to redundant type inference and context reconstruction. A fine-grained analysis identifies the root causes of this overhead and proposes three optimization avenues: lazy type checking, context caching, and fusion of normalization phases. Experimental evaluation confirms that two of these optimizations accelerate the type-directed method by up to 40%. Our study establishes the first systematic performance analysis framework for dependent type checkers and provides empirically grounded, actionable guidance for optimizing NbE-based normalization.
📝 Abstract
A key part of any dependent type-checker is the method for checking whether two types are equal. A common claim is that syntax-directed equality is more performant, although type-directed equality is more expressive. However, this claim is difficult to make precise, since implementations choose only one or the other approach, making a direct comparison impossible. We present some work-in-progress developing a realistic platform for direct, apples-to-apples, comparison of the two approaches, quantifying how much slower type-directed equality checking is, and analyzing why and how it can be improved.