
An Initial Exploration of Contrastive Prompt Tuning to Generate Energy-Efficient Code

arXiv cs.LG · by Sophie Weidmann, Fernando Castor · April 6, 2026


Abstract: Although LLMs are capable of generating functionally correct code, they also tend to produce less energy-efficient code in comparison to human-written solutions. As these inefficiencies lead to higher computational overhead, they are in direct conflict with Green Software Development (GSD) efforts, which aim to reduce the energy consumption of code. To support these efforts, this study aims to investigate whether and how LLMs can be optimized to promote the generation of energy-efficient code. To this end, we employ Contrastive Prompt Tuning (CPT). CPT combines Contrastive Learning techniques, which help the model to distinguish between efficient and inefficient code, and Prompt Tuning, a Parameter-Efficient Fine-Tuning (PEFT) approach that requires only a fraction of the cost of traditional fine tuning. This study evaluates CPT on Python, Java, and C++ coding problems across three different models to provide a comprehensive evaluation. The method achieves consistent improvements in code accuracy for two models, but efficiency gains vary by model, language, and task complexity, indicating that improvements are not uniformly reliable.
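The abstract does not spell out the training objective, but contrastive learning of this kind is typically implemented with an InfoNCE-style loss: an anchor embedding is pulled toward a positive example (here, an energy-efficient solution) and pushed away from negatives (inefficient variants). The sketch below is a hypothetical, dependency-free illustration of that loss over toy embedding vectors; the function names and vectors are assumptions for illustration, not the paper's implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors (lists of floats)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss.

    Pulls the anchor toward the positive (e.g. an energy-efficient
    solution) and away from the negatives (inefficient variants).
    Lower loss means the anchor already sits closer to the positive
    than to any negative.
    """
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    # Numerically stable softmax: subtract the max logit before exp.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

# Anchor near the efficient example and far from the inefficient one
# yields a small loss; swapping the roles yields a large loss.
good = contrastive_loss([1.0, 0.0], [1.0, 0.1], [[-1.0, 0.0]])
bad = contrastive_loss([1.0, 0.0], [-1.0, 0.0], [[1.0, 0.1]])
```

In CPT as described, a loss of this shape would be backpropagated only into a small set of trainable soft-prompt embeddings while the base model stays frozen, which is what makes the approach parameter-efficient.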

Comments: Published at the Third International Workshop on Large Language Models for Code (LLM4Code 2026)

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Software Engineering (cs.SE)

Cite as: arXiv:2604.02352 [cs.LG]

(or arXiv:2604.02352v1 [cs.LG] for this version)

https://doi.org/10.48550/arXiv.2604.02352

arXiv-issued DOI via DataCite

Submission history

From: Fernando Castor [view email] [v1] Tue, 3 Mar 2026 12:36:15 UTC (659 KB)
