AI Summary
This paper addresses the challenge of accurately identifying parallelization opportunities in complex loops using static analysis. To tackle this, we propose a deep learning-based code parallelism prediction framework. Methodologically, we design a genetic algorithm to automatically generate diverse loop code samples, covering both clearly parallelizable cases and those with ambiguous data dependencies, and construct a manually annotated training dataset. We then employ both deep neural networks (DNNs) and convolutional neural networks (CNNs) to model and classify tokenized code sequences. Experimental results show that CNNs achieve marginally higher average accuracy, while both models demonstrate robust performance. Our key contributions are threefold: (1) the first integration of generative genetic algorithms with deep learning for parallelism prediction; (2) effective mitigation of data scarcity and ambiguity in dependency analysis; and (3) empirical validation that training data diversity critically enhances model generalization, establishing a novel paradigm for automated parallel optimization.
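The tokenization step mentioned above can be illustrated with a minimal sketch. The paper does not specify its tokenizer, so the regular expression and the integer-encoding scheme below are assumptions chosen for illustration only; `tokenize` and `encode` are hypothetical helper names.

```python
import re

def tokenize(snippet):
    """Split a code snippet into identifiers, integer literals, and
    single-character punctuation/operator tokens."""
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", snippet)

def encode(tokens, vocab):
    """Map each token to an integer id, assigning a fresh id the first
    time a token is seen, so repeated tokens share one id."""
    return [vocab.setdefault(t, len(vocab)) for t in tokens]

# Example: a loop snippet becomes a token sequence, then an id sequence
# suitable as input to a DNN or CNN classifier.
tokens = tokenize("for i in range(n): a[i] = b[i]")
ids = encode(tokens, {})
```

Note that every occurrence of `i` receives the same id, which is the property that lets a sequence model pick up on repeated index variables across reads and writes.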
Abstract
This study proposes a deep learning-based approach for classifying loops in program code according to their potential for parallelization. Two genetic algorithm-based code generators were developed to produce two distinct types of code: (i) independent loops, which are parallelizable, and (ii) ambiguous loops, whose unclear dependencies make it impossible to determine statically whether the loop is parallelizable. The generated code snippets were tokenized and preprocessed to ensure a robust dataset. Two deep learning models - a Deep Neural Network (DNN) and a Convolutional Neural Network (CNN) - were implemented to perform the classification. A statistical analysis over 30 independent runs was employed to assess the expected performance of both models. The CNN showed a slightly higher mean performance, while the two models exhibited similar variability. Experiments with varying dataset sizes highlighted the importance of data diversity for model performance. These results demonstrate the feasibility of using deep learning to automate the identification of parallelizable structures in code, offering a promising tool for software optimization and performance improvement.
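The two loop classes the generators target can be sketched as follows. These are hypothetical examples written for illustration (the paper does not give its generated snippets); the function names and array contents are assumptions.

```python
# (i) Independent loop: each iteration writes a distinct element a[i]
# and reads only b[i], so iterations carry no dependence on one another
# and the loop is safely parallelizable.
def independent(a, b):
    for i in range(len(a)):
        a[i] = b[i] * 2
    return a

# (ii) Ambiguous loop: the write target a[idx[i]] may or may not alias
# the read a[i], depending on the runtime contents of idx. A static
# analyzer cannot decide parallelizability without knowing idx.
def ambiguous(a, idx):
    for i in range(len(idx)):
        a[idx[i]] = a[i] + 1
    return a
```

In the second loop, running iterations in parallel changes the result whenever `idx` permutes indices so that one iteration reads an element another iteration writes, which is exactly the ambiguity the classifier must flag.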