🤖 AI Summary
This work investigates whether dynamic binary modification (DBM) can enhance the performance of ahead-of-time (AOT) compilers for dynamic languages—specifically to narrow the execution-speed gap between AOT and just-in-time (JIT) compilation. Method: We systematically integrate DBM into the JavaScript AOT compiler Hopc, focusing on runtime optimization of inline caches (ICs), and adapt the approach to the x86_64 architecture. Contribution/Results: Contrary to expectations, DBM-driven IC optimization yields no performance improvement. Our experiments reveal that modern processors efficiently hide memory-access latency, challenging the conventional intuition that reducing memory accesses necessarily accelerates execution. This negative result provides critical empirical insight for dynamic-language runtime design, underscoring the necessity of reevaluating classical optimization strategies—particularly those predicated on memory-access reduction—in light of contemporary hardware capabilities.
📝 Abstract
Context: Just-in-Time (JIT) compilers are able to specialize the code they generate according to continuous profiling of the running program. This gives them an advantage over Ahead-of-Time (AoT) compilers, which must choose the code to generate once and for all. Inquiry: Is it possible to improve the performance of AoT compilers by adding Dynamic Binary Modification (DBM) to the execution? Approach: We added to the Hopc AoT JavaScript compiler a new DBM-based optimization of inline caches (ICs), a classical technique dynamic languages use to implement object property accesses efficiently. Knowledge: Reducing the number of memory accesses, as the new optimization does, does not shorten execution times on contemporary architectures. Grounding: The DBM optimization we have implemented is fully operational on x86_64 architectures. We have conducted several experiments to evaluate its impact on performance and to study the reasons for the lack of acceleration. Importance: The (negative) result we present in this paper sheds new light on the best strategy for implementing dynamic languages. It shows that the days when removing instructions or memory reads always yielded a speedup are over. Nowadays, implementing a sophisticated compiler optimization is only worth the effort if the processor cannot accelerate the code by itself. This result applies to AoT compilers as well as JIT compilers.
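To make the inline-cache idea concrete, here is a minimal sketch in JavaScript of a monomorphic IC, assuming a runtime that tags each object layout with a hidden class ("shape"). The names (`Shape`, `makePropAccessor`) are illustrative only and do not reflect Hopc's actual implementation; real compilers emit this logic as machine code at each property-access site rather than as a closure.

```javascript
// Each distinct set of property names shares one Shape mapping
// property names to slot offsets (a "hidden class").
class Shape {
  constructor(offsets) { this.offsets = offsets; }
}

const pointShape = new Shape({ x: 0, y: 1 });

function makePoint(x, y) {
  // Objects carry their shape plus a flat slot array.
  return { shape: pointShape, slots: [x, y] };
}

// The IC remembers the last shape (and slot offset) seen at this
// particular access site.
function makePropAccessor(name) {
  let cachedShape = null;
  let cachedOffset = -1;
  return function (obj) {
    if (obj.shape === cachedShape) {
      // Fast path: one comparison, one indexed load.
      return obj.slots[cachedOffset];
    }
    // Slow path: look up the offset, then cache it for next time.
    cachedOffset = obj.shape.offsets[name];
    cachedShape = obj.shape;
    return obj.slots[cachedOffset];
  };
}

const getX = makePropAccessor("x");
console.log(getX(makePoint(3, 4))); // slow path: caches pointShape → 3
console.log(getX(makePoint(7, 8))); // fast path: shape matches → 7
```

The DBM optimization studied in the paper targets exactly this kind of cached access: patching the generated binary can remove some of the memory reads the fast path performs, which is the memory-access reduction the experiments show no longer translates into speedups.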