HSFM: Hard-Set-Guided Feature-Space Meta-Learning for Robust Classification under Spurious Correlations
arXiv:2603.29313v1
Abstract: Deep neural networks often rely on spurious features to make predictions, which makes them brittle under distribution shift and on samples where the spurious correlation does not hold (e.g., minority-group examples). Recent studies have shown that, even in such settings, the feature extractor of an Empirical Risk Minimization (ERM)-trained model can learn rich and informative representations, and that much of the failure may be attributed to the classifier head. In particular, retraining a lightweight head while keeping the backbone frozen can substantially improve performance on shifted distributions and minority groups. Motivated by this observation, we propose a bilevel meta-learning method that performs augmentation directly in feature space to mitigate the classifier head's reliance on spurious correlations. Our method learns support-side feature edits such that, after a small number of inner-loop updates on the edited features, the classifier achieves lower loss on hard examples and improved worst-group performance. By operating at the backbone output rather than in pixel space or through end-to-end optimization, the method is highly efficient and stable, requiring only a few minutes of training on a single GPU. We further validate our method with CLIP-based visualizations, showing that the learned feature-space updates induce semantically meaningful shifts aligned with spurious attributes.
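The core idea described above can be illustrated with a minimal toy sketch. This is NOT the paper's implementation: all data, shapes, hyperparameters, and the finite-difference outer loop are invented for the demo. It shows the mechanism the abstract describes: learn additive edits to frozen-backbone support features so that a linear head, after a few inner-loop updates on the edited features, achieves lower loss on a "hard" set where a spurious feature flips.

```python
import numpy as np

# Illustrative sketch of feature-space bilevel meta-learning (assumptions,
# not the paper's method): dim 0 is a core label-aligned feature; dim 1 is
# spurious -- aligned with the label on the support set, anti-aligned on
# the hard set. We learn edits `delta` to the support features.
rng = np.random.default_rng(0)
d, n_sup, n_hard = 8, 32, 16

y_sup = rng.integers(0, 2, n_sup)
X_sup = 0.1 * rng.normal(size=(n_sup, d))
X_sup[:, 0] += y_sup
X_sup[:, 1] += y_sup          # spurious cue agrees with label here
y_hard = rng.integers(0, 2, n_hard)
X_hard = 0.1 * rng.normal(size=(n_hard, d))
X_hard[:, 0] += y_hard
X_hard[:, 1] += 1 - y_hard    # spurious cue flipped on the hard set

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def grad_w(w, X, y):
    # Gradient of mean binary cross-entropy for a linear (logistic) head.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def inner_loop(delta, steps=3, lr=0.5):
    # Retrain the lightweight head from scratch on the edited features.
    w = np.zeros(d)
    for _ in range(steps):
        w -= lr * grad_w(w, X_sup + delta, y_sup)
    return w

def meta_loss(delta):
    # Outer objective: hard-set loss of the head produced by the inner loop.
    return bce(inner_loop(delta), X_hard, y_hard)

# Outer loop via finite differences, purely for transparency; a practical
# implementation would backpropagate through the inner-loop updates.
delta = np.zeros((n_sup, d))
base = meta_loss(delta)
eps, meta_lr = 1e-4, 0.2
for _ in range(15):
    loss0 = meta_loss(delta)
    g = np.zeros_like(delta)
    for i in range(n_sup):
        for j in range(d):
            probe = delta.copy()
            probe[i, j] += eps
            g[i, j] = (meta_loss(probe) - loss0) / eps
    delta -= meta_lr * g

print(f"hard-set loss before edits: {base:.3f}, after: {meta_loss(delta):.3f}")
```

The edits push the support features toward configurations where the retrained head stops leaning on the spurious dimension, which is the same role the paper's learned feature-space augmentations play at the backbone output.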
Subjects:
Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.29313 [cs.CV]
(or arXiv:2603.29313v1 [cs.CV] for this version)
https://doi.org/10.48550/arXiv.2603.29313
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Aryan Yazdan Parast [view email] [v1] Tue, 31 Mar 2026 06:32:56 UTC (9,862 KB)