Automatic Identification of Parallelizable Loops Using Transformer-Based Source Code Representations

2026-03-31

Software Engineering · Artificial Intelligence
AI summary

The authors tackle the difficult problem of figuring out which parts of computer code can be run simultaneously to speed things up on multi-core processors. They use a compact AI model called DistilBERT that reads source code the way language models read text, without needing extra handcrafted rules, to tell whether loops are safe to run in parallel. Testing on both synthetic and real-world code shows their method is very accurate and reliable. Their work suggests that lightweight Transformer models can help automatically find loops that are good candidates for parallel execution.

Keywords
automatic parallelization, multi-core architectures, loop parallelism, static analysis, dependence analysis, Transformer models, DistilBERT, source code tokenization, syntactic patterns, semantic patterns
Authors
Izavan dos S. Correia, Henrique C. T. Santos, Tiago A. E. Ferreira
Abstract
Automatic parallelization remains a challenging problem in software engineering, particularly in identifying code regions where loops can be safely executed in parallel on modern multi-core architectures. Traditional static analysis techniques, such as dependence analysis and polyhedral models, often struggle with irregular or dynamically structured code. In this work, we propose a Transformer-based approach to classify the parallelization potential of source code, focusing on distinguishing independent (parallelizable) loops from undefined ones. We adopt DistilBERT to process source code sequences using subword tokenization, enabling the model to capture contextual syntactic and semantic patterns without handcrafted features. The approach is evaluated on a balanced dataset combining synthetically generated loops and manually annotated real-world code, using 10-fold cross-validation and multiple performance metrics. Results show consistently high performance, with mean accuracy above 99% and low false positive rates, demonstrating robustness and reliability. Compared to prior token-based methods, the proposed approach simplifies preprocessing while improving generalization and maintaining computational efficiency. These findings highlight the potential of lightweight Transformer models for practical identification of parallelization opportunities at the loop level.
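The evaluation protocol described in the abstract can be sketched as follows. This is an illustrative outline only, not the authors' implementation: a trivial keyword heuristic stands in for the fine-tuned DistilBERT classifier, and the two-loop toy dataset, the `predict_parallelizable` helper, and all labels are assumptions made for the example. It shows how per-fold accuracy and false positive rate would be aggregated over 10 folds.

```python
# Sketch of 10-fold cross-validation with accuracy and false-positive-rate
# per fold, as in the abstract's evaluation. The classifier below is a toy
# stand-in for the fine-tuned DistilBERT model; the dataset is illustrative.
import random
from statistics import mean

def predict_parallelizable(loop_src: str) -> int:
    """Toy stand-in classifier: flag a loop as parallelizable (1) unless it
    shows an obvious loop-carried dependence pattern like 'a[i-1]'."""
    return 0 if "[i-1]" in loop_src.replace(" ", "") else 1

# Illustrative labeled dataset: (loop source, label), label 1 = independent.
data = [("for (i=0;i<n;i++) a[i] = b[i] + c[i];", 1),
        ("for (i=1;i<n;i++) a[i] = a[i-1] + b[i];", 0)] * 20

random.seed(0)
random.shuffle(data)

k = 10
fold_size = len(data) // k
accs, fprs = [], []
for f in range(k):
    test = data[f * fold_size:(f + 1) * fold_size]  # held-out fold
    # (A real run would fine-tune the model on the remaining k-1 folds here;
    #  the fixed heuristic needs no training.)
    tp = tn = fp = fn = 0
    for src, label in test:
        pred = predict_parallelizable(src)
        if pred == 1 and label == 1: tp += 1
        elif pred == 0 and label == 0: tn += 1
        elif pred == 1 and label == 0: fp += 1  # false positive: unsafe loop flagged parallel
        else: fn += 1
    accs.append((tp + tn) / len(test))
    fprs.append(fp / (fp + tn) if (fp + tn) else 0.0)

print(f"mean accuracy: {mean(accs):.3f}, mean FPR: {mean(fprs):.3f}")
```

The false positive rate is reported separately because it is the safety-critical metric here: a false positive means marking a dependence-carrying loop as parallelizable, which would produce incorrect parallel code.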