
\texttt{ReproMIA}: A Comprehensive Analysis of Model Reprogramming for Proactive Membership Inference Attacks

arXiv cs.LG · by Chihan Huang, Huaijin Wang, Shuai Wang · April 1, 2026

arXiv:2603.28942v1 Announce Type: new


Abstract: The pervasive deployment of deep learning models across critical domains has concurrently intensified privacy concerns due to their inherent propensity for data memorization. While Membership Inference Attacks (MIAs) serve as the gold standard for auditing these privacy vulnerabilities, conventional MIA paradigms are increasingly constrained by the prohibitive computational costs of shadow model training and a precipitous performance degradation under low False Positive Rate constraints. To overcome these challenges, we introduce a novel perspective by leveraging the principles of model reprogramming as an active signal amplifier for privacy leakage. Building upon this insight, we present \texttt{ReproMIA}, a unified and efficient proactive framework for membership inference. We rigorously substantiate, both theoretically and empirically, how our methodology proactively induces and magnifies latent privacy footprints embedded within the model's representations. We provide specialized instantiations of \texttt{ReproMIA} across diverse architectural paradigms, including LLMs, Diffusion Models, and Classification Models. Comprehensive experimental evaluations across more than ten benchmarks and a variety of model architectures demonstrate that \texttt{ReproMIA} consistently and substantially outperforms existing state-of-the-art baselines, achieving a transformative leap in performance specifically within low-FPR regimes, such as an average 5.25% AUC and 10.68% TPR@1%FPR increase over the runner-up for LLMs, as well as 3.70% and 12.40% respectively for Diffusion Models.
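The abstract reports gains in the low-FPR regime via TPR@1%FPR, the standard way of scoring an MIA when false alarms must be rare: the detection threshold is fixed so that at most 1% of non-members are falsely flagged, and the attack is judged by how many true members it still catches. As a minimal illustrative sketch (not the paper's code; the `tpr_at_fpr` helper and the synthetic scores below are this note's own assumptions), the metric can be computed by thresholding at the (1 − FPR) quantile of the non-member score distribution:

```python
def tpr_at_fpr(member_scores, nonmember_scores, fpr=0.01):
    """True Positive Rate when the threshold is chosen so that only
    `fpr` of non-members are (falsely) flagged as members.
    Convention: higher score = 'more likely a training member'."""
    s = sorted(nonmember_scores)
    # Threshold at the (1 - fpr) empirical quantile of non-member scores.
    idx = min(len(s) - 1, int((1.0 - fpr) * len(s)))
    thresh = s[idx]
    # Fraction of true members whose score exceeds that threshold.
    return sum(m > thresh for m in member_scores) / len(member_scores)

# Illustrative: a perfectly separating attack catches every member
# while flagging (almost) no non-members.
print(tpr_at_fpr([10.0] * 100, [0.0] * 100))  # → 1.0
```

Reporting TPR at a fixed low FPR, rather than AUC alone, is what makes the regime hard: an attack can have a decent AUC yet catch almost no members once false alarms are capped at 1%, which is the degradation the abstract says conventional MIAs suffer.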

Subjects:

Machine Learning (cs.LG); Cryptography and Security (cs.CR)

Cite as: arXiv:2603.28942 [cs.LG]

(or arXiv:2603.28942v1 [cs.LG] for this version)

https://doi.org/10.48550/arXiv.2603.28942

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Chihan Huang [v1] Mon, 30 Mar 2026 19:35:10 UTC (1,572 KB)
