🤖 AI Summary
This study addresses the computational intractability of evaluating nonlinearity—a fundamental cryptographic property of Boolean functions—by proposing an end-to-end deep neural network regression framework: the truth table serves as input, and nonlinearity is predicted as a continuous scalar output. An encoder-based architecture achieves over 95% prediction accuracy on 4- and 5-variable Boolean functions, constituting the first systematic demonstration that data-driven learning can viably substitute for traditional combinatorial algorithms on this task. The primary contribution is the establishment of a novel, empirically validated paradigm for approximating cryptographic combinatorial properties via machine learning, exhibiting both high accuracy and efficiency in low-dimensional settings. However, the experimental results also reveal a pronounced degradation in generalization beyond five variables, indicating limited scalability to functions in 6 or more variables.
📝 Abstract
This paper investigates the learnability of the nonlinearity property of Boolean functions using neural networks. We train encoder-style deep neural networks to predict the nonlinearity of Boolean functions from examples of functions, given as truth tables, together with their corresponding nonlinearity values. We report empirical results showing that deep neural networks can learn to predict the property for functions in 4 and 5 variables with an accuracy above 95%. While these results are positive and, to our knowledge, this is the first disciplined analysis of its kind, we must also caution that extending the idea to a larger number of variables appears quite challenging, and it is not clear whether one can gain any advantage in time or space complexity over the existing combinatorial algorithms.
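For context, the combinatorial baseline that such a network would have to compete with is the standard computation of nonlinearity via the fast Walsh–Hadamard transform, which runs in O(N log N) for a truth table of length N = 2^n. The sketch below is illustrative only and not taken from the paper:

```python
def nonlinearity(truth_table):
    """Nonlinearity of a Boolean function given as a 0/1 truth table
    of length 2**n, computed via the fast Walsh-Hadamard transform.

    nl(f) = 2^(n-1) - (1/2) * max_a |W_f(a)|
    """
    n_points = len(truth_table)
    # Signed representation: bit 0 -> +1, bit 1 -> -1
    w = [1 - 2 * b for b in truth_table]
    # In-place fast Walsh-Hadamard transform, O(N log N)
    h = 1
    while h < n_points:
        for i in range(0, n_points, 2 * h):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y
        h *= 2
    # Maximum absolute Walsh coefficient measures distance
    # to the closest affine function.
    return n_points // 2 - max(abs(v) for v in w) // 2


# Sanity checks: AND on 2 variables has nonlinearity 1 (bent for n=2);
# a linear function such as f(x1, x2) = x1 has nonlinearity 0.
print(nonlinearity([0, 0, 0, 1]))  # -> 1
print(nonlinearity([0, 1, 0, 1]))  # -> 0
```

A learned predictor replaces the exact transform with a single forward pass, which is where any potential time/space advantage would have to come from.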