
LaSM: Layer-wise Scaling Mechanism for Defending Pop-up Attack on GUI Agents

arXiv cs.CR · by Zihe Yan, Zhuosheng Zhang, Jiaping Gui, Gongshen Liu · April 1, 2026

Abstract: Graphical user interface (GUI) agents built on multimodal large language models (MLLMs) have recently demonstrated strong decision-making abilities in screen-based interaction tasks. However, they remain highly vulnerable to pop-up-based environmental injection attacks, where malicious visual elements divert model attention and lead to unsafe or incorrect actions. Existing defense methods either require costly retraining or perform poorly under inductive interference. In this work, we systematically study how such attacks alter the attention behavior of GUI agents and uncover a layer-wise attention divergence pattern between correct and incorrect outputs. Based on this insight, we propose LaSM, a Layer-wise Scaling Mechanism that selectively amplifies attention and MLP modules in critical layers. LaSM improves the alignment between model saliency and task-relevant regions without additional training. Extensive experiments across multiple datasets demonstrate that our method significantly improves the defense success rate and exhibits strong robustness, while having negligible impact on the model's general capabilities. Our findings reveal that attention misalignment is a core vulnerability in MLLM agents and can be effectively addressed through selective layer-wise modulation. Our code can be found in this https URL.
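As a rough illustration of what training-free, selective layer-wise modulation could look like in practice, here is a minimal PyTorch sketch that amplifies the outputs of the attention and MLP submodules in a chosen set of layers via forward hooks. The layer indices, scaling factor, and module paths (`model.model.layers[i].self_attn` / `.mlp`, typical of Hugging Face-style decoders) are illustrative assumptions, not the paper's actual layer selection or coefficients; refer to the linked repository for the authors' implementation.

```python
# Hedged sketch of layer-wise output scaling via forward hooks.
# Assumptions: a Hugging Face-style decoder exposing
# model.model.layers[i].self_attn and .mlp; CRITICAL_LAYERS and
# SCALE are hypothetical values, not the paper's configuration.
import torch

CRITICAL_LAYERS = [14, 15, 16]   # hypothetical "critical" layers
SCALE = 1.2                      # hypothetical amplification factor

def make_scaling_hook(scale):
    def hook(module, inputs, output):
        # Attention modules often return a tuple; scale only the
        # hidden-state tensor and pass auxiliary outputs through.
        if isinstance(output, tuple):
            return (output[0] * scale, *output[1:])
        return output * scale
    return hook

def attach_layerwise_scaling(model):
    # Register hooks on attention and MLP modules of the chosen layers;
    # returning a value from a forward hook replaces the module output.
    handles = []
    for idx in CRITICAL_LAYERS:
        layer = model.model.layers[idx]
        handles.append(layer.self_attn.register_forward_hook(make_scaling_hook(SCALE)))
        handles.append(layer.mlp.register_forward_hook(make_scaling_hook(SCALE)))
    return handles  # call h.remove() on each handle to restore the model
```

Because the modulation lives entirely in inference-time hooks, the underlying weights are untouched, which matches the abstract's claim that the defense requires no additional training and can be removed without side effects.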

Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI)

Cite as: arXiv:2507.10610 [cs.CR] (or arXiv:2507.10610v2 [cs.CR] for this version)

DOI: https://doi.org/10.48550/arXiv.2507.10610 (arXiv-issued DOI via DataCite)

Submission history

From: Zihe Yan
[v1] Sun, 13 Jul 2025 08:36:09 UTC (3,121 KB)
[v2] Tue, 31 Mar 2026 08:10:46 UTC (19,066 KB)
