
Automatic Identification of Parallelizable Loops Using Transformer-Based Source Code Representations

arXiv cs.SE · Izavan dos S. Correia, Henrique C. T. Santos, Tiago A. E. Ferreira · April 1, 2026



Abstract: Automatic parallelization remains a challenging problem in software engineering, particularly in identifying code regions where loops can be safely executed in parallel on modern multi-core architectures. Traditional static analysis techniques, such as dependence analysis and polyhedral models, often struggle with irregular or dynamically structured code. In this work, we propose a Transformer-based approach to classify the parallelization potential of source code, focusing on distinguishing independent (parallelizable) loops from undefined ones. We adopt DistilBERT to process source code sequences using subword tokenization, enabling the model to capture contextual syntactic and semantic patterns without handcrafted features. The approach is evaluated on a balanced dataset combining synthetically generated loops and manually annotated real-world code, using 10-fold cross-validation and multiple performance metrics. Results show consistently high performance, with mean accuracy above 99% and low false positive rates, demonstrating robustness and reliability. Compared to prior token-based methods, the proposed approach simplifies preprocessing while improving generalization and maintaining computational efficiency. These findings highlight the potential of lightweight Transformer models for practical identification of parallelization opportunities at the loop level.
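The distinction the classifier targets can be illustrated with a minimal sketch. The two loops below are hypothetical examples (not taken from the paper's dataset): the first has no cross-iteration dependence and could be run in parallel, while the second carries a dependence from each iteration to the next, so it cannot be safely parallelized as written.

```python
def independent_loop(a):
    # Each iteration writes only its own output element and reads only
    # its own input element: no loop-carried dependence, so the
    # iterations could execute in parallel.
    out = [0] * len(a)
    for i in range(len(a)):
        out[i] = a[i] * 2
    return out

def dependent_loop(a):
    # Iteration i reads the value written in iteration i - 1 (a prefix
    # sum): a loop-carried dependence, so the iterations cannot be
    # naively distributed across cores.
    out = [0] * len(a)
    if a:
        out[0] = a[0]
    for i in range(1, len(a)):
        out[i] = out[i - 1] + a[i]
    return out
```

A dependence-aware static analyzer distinguishes these by tracing reads and writes across iterations; the paper's approach instead feeds the raw token sequence to DistilBERT and lets the model learn such patterns from labeled examples.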

Comments: 28 pages, 12 figures

Subjects: Software Engineering (cs.SE); Artificial Intelligence (cs.AI)

Cite as: arXiv:2603.30040 [cs.SE]

(or arXiv:2603.30040v1 [cs.SE] for this version)

https://doi.org/10.48550/arXiv.2603.30040

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Izavan Dos Santos Correia [v1] Tue, 31 Mar 2026 17:54:27 UTC (1,440 KB)
