
The Surprising Effectiveness of Noise Pretraining for Implicit Neural Representations

arXiv eess.IV · by Kushal Vyas, Alper Kayabasi, Daniel Kim, Vishwanath Saragadam, Ashok Veeraraghavan, Guha Balakrishnan · April 1, 2026 · 2 min read

arXiv:2603.29034v1 Announce Type: cross


Abstract: The approximation and convergence properties of implicit neural representations (INRs) are known to be highly sensitive to parameter initialization strategies. While several data-driven initialization methods demonstrate significant improvements over standard random sampling, the reasons for their success -- specifically, whether they encode classical statistical signal priors or more complex features -- remain poorly understood. In this study, we explore this phenomenon through a series of experimental analyses leveraging noise pretraining. We pretrain INRs on diverse noise classes (e.g., Gaussian, Dead Leaves, Spectral) and measure their ability both to fit unseen signals and to encode priors for an inverse imaging task (denoising). Our analyses on image and video data reveal a surprising finding: simply pretraining on unstructured noise (Uniform, Gaussian) dramatically improves signal fitting capacity compared to all other baselines. However, unstructured noise also yields poor deep image priors for denoising. In contrast, we also find that noise with the classic $1/|f^\alpha|$ spectral structure of natural images achieves an excellent balance of signal fitting and inverse imaging capabilities, performing on par with the best data-driven initialization methods. This finding enables more efficient INR training in applications lacking sufficient prior domain-specific data. For more details, visit the project page at this https URL
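As a rough illustration of the recipe the abstract describes, the sketch below synthesizes noise with a $1/|f^\alpha|$ power spectrum and pretrains a small SIREN-style INR on fresh noise samples; the resulting weights then serve as an initialization for fitting a real signal. This is a minimal sketch, not the authors' code: every name, the network size, the choice of $\alpha = 1$, and the per-step resampling of noise targets are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Synthesize noise whose amplitude spectrum falls off as 1/|f|^alpha
# (assumption: alpha = 1.0 approximates the natural-image spectral prior).
def spectral_noise(size: int, alpha: float = 1.0) -> torch.Tensor:
    fy = torch.fft.fftfreq(size).reshape(-1, 1)
    fx = torch.fft.fftfreq(size).reshape(1, -1)
    freq = torch.sqrt(fx**2 + fy**2)
    freq[0, 0] = 1.0  # avoid division by zero at the DC component
    amplitude = 1.0 / freq**alpha
    phase = torch.exp(2j * torch.pi * torch.rand(size, size))  # random phases
    img = torch.fft.ifft2(amplitude * phase).real
    return (img - img.mean()) / img.std()  # zero mean, unit variance

# A small SIREN-style INR: (x, y) coordinates -> intensity, sine activations.
# (Simplified: the SIREN paper's special weight-scaling init is omitted here.)
class Siren(nn.Module):
    def __init__(self, hidden: int = 256, layers: int = 4, omega: float = 30.0):
        super().__init__()
        dims = [2] + [hidden] * layers + [1]
        self.linears = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:])
        )
        self.omega = omega

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        x = coords
        for lin in self.linears[:-1]:
            x = torch.sin(self.omega * lin(x))
        return self.linears[-1](x)

# Noise pretraining: fit a freshly sampled spectral-noise image at each step,
# so the network learns the noise class rather than any single image.
def pretrain_on_noise(model, size=64, steps=500, alpha=1.0, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, size), torch.linspace(-1, 1, size), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    for _ in range(steps):
        target = spectral_noise(size, alpha).reshape(-1, 1)
        loss = ((model(coords) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model  # pretrained weights act as the INR initialization

model = pretrain_on_noise(Siren())
# From here, fine-tune `model` on a real image or use it as a deep image prior.
```

Swapping `spectral_noise` for plain `torch.rand` or `torch.randn` targets would give the unstructured-noise variant the abstract contrasts against, which it reports fits signals well but yields poor denoising priors.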

Comments: Accepted to CVPR 2026. Project page: this https URL

Subjects: Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)

Cite as: arXiv:2603.29034 [cs.CV]

(or arXiv:2603.29034v1 [cs.CV] for this version)

https://doi.org/10.48550/arXiv.2603.29034

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Kushal Vyas [v1] Mon, 30 Mar 2026 22:01:00 UTC (38,220 KB)
