~Gemini 3.1 Pro Level Performance With Gemma4-31B Harness
Submitted by /u/Ryoiki-Tokuiten
Read on Reddit r/LocalLLaMA → https://www.reddit.com/r/LocalLLaMA/comments/1sdgdbq/gemini_31_pro_level_performance_with_gemma431b/

More in Models

The portability paradox of foundation models for clinical decision support
npj Digital Medicine, Published online: 07 April 2026; doi:10.1038/s41746-026-02615-4 Yakdan et al. demonstrate that foundation models (FMs) trained to predict cervical spondylotic myelopathy from electronic health record data outperform traditional models on internal datasets but lose their advantage during external validation. This suggests that the feature-dense patterns learned by FMs may reduce their portability across settings, particularly for rare outcomes. As FMs approach clinical deployment, local validation, subgroup analysis, and attention to implementation burden are essential to inform health system planning and stewardship.

The Geometric Alignment Tax: Tokenization vs. Continuous Geometry in Scientific Foundation Models
arXiv:2604.04155v1 (cross-listed). Abstract: Foundation models for biology and physics optimize predictive accuracy, but their internal representations systematically fail to preserve the continuous geometry of the systems they model. We identify the root cause: the Geometric Alignment Tax, an intrinsic cost of forcing continuous manifolds through discrete categorical bottlenecks. Controlled ablations on synthetic dynamical systems demonstrate that replacing cross-entropy with a continuous head on an identical encoder reduces geometric distortion by up to 8.5x, while learned codebooks exhibit a non-monotonic double bind where finer quantization worsens geometry despite improving reconstruction. Under continuous objectives, three architectures differ by 1.3x; under discrete tokenization …
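The core tension the abstract describes — a discrete codebook forced over a continuous state space — can be made concrete with a toy sketch. The snippet below is purely illustrative and not the paper's method or metric: `quantize` and `mean_sq_distortion` are hypothetical helpers that snap a smooth trajectory onto an equal-width codebook and measure the resulting reconstruction error, showing the distortion floor that any finite codebook leaves and a continuous head avoids.

```python
import math

def quantize(x, n_bins, lo=-1.0, hi=1.0):
    """Snap a continuous value to the centre of one of n_bins equal-width
    bins, mimicking a discrete tokenizer over a continuous state variable."""
    width = (hi - lo) / n_bins
    idx = min(int((x - lo) / width), n_bins - 1)
    return lo + (idx + 0.5) * width

# A smooth trajectory on [-1, 1]: x(t) = sin(t), sampled densely.
ts = [i * 0.01 for i in range(629)]
xs = [math.sin(t) for t in ts]

def mean_sq_distortion(n_bins):
    """Mean squared error between the trajectory and its quantized copy."""
    return sum((x - quantize(x, n_bins)) ** 2 for x in xs) / len(xs)

# Reconstruction error shrinks as the codebook grows, but never reaches
# zero for a finite codebook; a continuous head has no such floor.
for n in (4, 16, 64):
    print(f"{n:3d} bins -> distortion {mean_sq_distortion(n):.6f}")
```

Note this toy only captures the reconstruction side; the paper's "double bind", where finer quantization improves reconstruction yet worsens learned geometry, concerns the representations a model trains on top of the codebook, which a standalone quantizer cannot exhibit.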



