
Beyond Latency: A System-Level Characterization of MPC and FHE for PPML

arXiv cs.CR · by Pengzhi Huang, Kiwan Maeng, G. Edward Suh · April 2, 2026



Abstract: Privacy protection has become an increasing concern in modern machine learning applications. Privacy-preserving machine learning (PPML) has attracted growing research attention, with approaches such as secure multiparty computation (MPC) and fully homomorphic encryption (FHE) being actively explored. However, existing evaluations of these approaches have frequently been performed on narrow, fragmented setups and focused only on a single performance metric, such as online inference latency at a specific batch size. From the existing reports, it is hard to compare different approaches, especially when considering other metrics like energy/cost or broader system setups (various hyperparameters, offline overheads, future hardware/network configurations, etc.). We present a unified characterization of three popular approaches -- two variants of MPC, based on arithmetic/binary sharing conversion and function secret sharing, and FHE -- on their performance and cost in performing privacy-preserving inference on multiple CNN and Transformer models. We study a range of LAN and WAN environments, model sizes, batch sizes, and input sequence lengths. We evaluate not only the performance but also the energy consumption and monetary cost of deployment under a realistic scenario, taking into account offline and online computation/communication overheads. We provide empirical guidance for selecting, optimizing, and deploying these privacy-preserving compute paradigms, and outline how evolving hardware and network trends are likely to shift the trade-offs between the two MPC schemes and FHE. This work provides system-level insights for researchers and practitioners who seek to understand or accelerate PPML workloads.
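To make the MPC side of the comparison concrete, below is a minimal sketch of additive (arithmetic) secret sharing, the building block behind the arithmetic-sharing MPC variant the paper characterizes. This is an illustrative toy under assumed parameters (a 2^64 share ring, two parties), not the implementation the paper evaluates; all function names here are hypothetical.

```python
# Toy additive (arithmetic) secret sharing over the ring Z_{2^64}.
# Illustrative sketch only -- not the MPC implementation evaluated in the paper.
import secrets

MODULUS = 2**64  # a common share ring in MPC frameworks (an assumption here)

def share(x: int) -> tuple[int, int]:
    """Split secret x into two additive shares: x = s0 + s1 (mod 2^64)."""
    s0 = secrets.randbelow(MODULUS)   # uniformly random share
    s1 = (x - s0) % MODULUS           # complementary share; either alone reveals nothing
    return s0, s1

def reconstruct(s0: int, s1: int) -> int:
    """Recombine both shares to recover the secret."""
    return (s0 + s1) % MODULUS

# Linear operations (additions, and multiplications by public constants)
# can be computed locally on each party's shares, with no communication:
a0, a1 = share(20)
b0, b1 = share(22)
c0, c1 = (a0 + b0) % MODULUS, (a1 + b1) % MODULUS  # each party adds its own shares
assert reconstruct(c0, c1) == 42
```

Nonlinear operations (comparisons, activation functions) are what force communication rounds between parties, which is one reason the paper's LAN-versus-WAN dimension matters so much for the MPC variants, whereas FHE instead pays in heavy local computation on ciphertexts. The paper also folds one-time offline (preprocessing) overheads into its monetary-cost comparison; a simple amortization model, again an assumption for illustration rather than the paper's formula, looks like:

```python
def amortized_cost(offline: float, online: float, n_inferences: int) -> float:
    # One-time offline cost is spread across all inferences it enables;
    # online cost is paid once per inference.
    return offline / n_inferences + online
```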

Comments: Accepted at ISPASS 2026
Subjects: Cryptography and Security (cs.CR)
Cite as: arXiv:2604.00169 [cs.CR] (or arXiv:2604.00169v1 [cs.CR] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.00169 (arXiv-issued DOI via DataCite, pending registration)

Submission history

From: Pengzhi Huang [v1] Tue, 31 Mar 2026 19:18:52 UTC (3,046 KB)
