🤖 AI Summary
This paper addresses the challenge of quantitatively assessing software covert information leakage, which remains difficult to measure accurately. We propose a scalable dynamic quantification method capable of precisely modeling leakage magnitude under modern security mechanisms—including ASLR and PAC. Our key contributions are threefold: (1) we derive, for the first time, a closed-form conditional mutual information formula tailored for dynamic execution; (2) we design three complementary metrics that collectively characterize risk levels across diverse threat models; and (3) we build an integrated analysis framework combining coverage-guided fuzzing, path-sensitive symbolic execution, and leakage-aware mutation strategies. Evaluated on 14 programs—including 8 real-world CVEs—our approach achieves 100% detection of known leaks, scales to binaries comprising up to 278 KLOC, and maintains estimation error below 12%.
📝 Abstract
This paper presents a scalable, practical approach to quantifying information leaks in software; such leaks are often overlooked or downplayed, yet they can seriously compromise security mechanisms such as address space layout randomisation (ASLR) and Pointer Authentication (PAC). We introduce approaches for three different metrics to estimate the size of information leaks, including a new derivation for the calculation of conditional mutual information. Together, these metrics indicate the relative safety of the target program against different threat models and provide useful details for locating the source of any leaks. We provide an implementation of a fuzzer, NIFuzz, which is capable of dynamically computing these metrics with little overhead and has several strategies to optimise for the detection and quantification of information leaks. We evaluate NIFuzz on a set of 14 programs -- including 8 real-world CVEs and ranging up to 278k lines of code -- where we find that it detects and provides good estimates for all of the known information leaks.
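To give a feel for the kind of quantity the metrics above measure, here is a minimal sketch of a plug-in (maximum-likelihood) estimate of the mutual information I(secret; output) in bits, computed from observed (secret, output) pairs of a program's executions. This is a generic textbook estimator, not the paper's derivation or NIFuzz's implementation; the function name and the toy "leaky" program are illustrative assumptions.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(S; O) in bits from (secret, output) samples.

    Note: this is a simple empirical-frequency estimator for illustration,
    not the conditional-mutual-information derivation from the paper.
    """
    n = len(pairs)
    p_s = Counter(s for s, _ in pairs)   # marginal counts of secrets
    p_o = Counter(o for _, o in pairs)   # marginal counts of outputs
    p_so = Counter(pairs)                # joint counts
    mi = 0.0
    for (s, o), c in p_so.items():
        # p(s,o) * log2( p(s,o) / (p(s) * p(o)) ), with counts over n samples
        mi += (c / n) * log2((c * n) / (p_s[s] * p_o[o]))
    return mi

# Toy example: a program whose output reveals the secret's low bit,
# so exactly 1 bit of the secret leaks per observation.
samples = [(s, s & 1) for s in range(8) for _ in range(4)]
print(mutual_information(samples))  # -> 1.0
```

In practice a fuzzer only sees a sample of executions, so estimators like this are biased for small sample sizes; part of what makes dynamic quantification hard is keeping that estimation error bounded, as reflected in the paper's reported error figures.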