MOOZY: A Patient-First Foundation Model for Computational Pathology
A patient-first pathology foundation model named MOOZY uses a case transformer to model dependencies across multiple slides from the same patient, achieving superior performance on diverse clinical tasks through open, reproducible pretraining.
Published on Mar 27
Abstract
Computational pathology needs whole-slide image (WSI) foundation models that transfer across diverse clinical tasks, yet current approaches remain largely slide-centric, often depend on private data and expensive paired-report supervision, and do not explicitly model relationships among multiple slides from the same patient. We present MOOZY, a patient-first pathology foundation model in which the patient case, not the individual slide, is the core unit of representation. MOOZY explicitly models dependencies across all slides from the same patient via a case transformer during pretraining, combining multi-stage open self-supervision with scaled low-cost task supervision. In Stage 1, we pretrain a vision-only slide encoder on 77,134 public slide feature grids using masked self-distillation. In Stage 2, we align these representations with clinical semantics using a case transformer and multi-task supervision over 333 tasks from 56 public datasets, including 205 classification and 128 survival tasks across four endpoints. Across eight held-out tasks with five-fold frozen-feature probe evaluation, MOOZY achieves best or tied-best performance on most metrics and improves macro averages over TITAN by +7.37%, +5.50%, and +7.83% and over PRISM by +8.83%, +10.70%, and +9.78% for weighted F1, weighted ROC-AUC, and balanced accuracy, respectively. MOOZY is also parameter efficient with 85.77M parameters, 14x smaller than GigaPath. These results demonstrate that open, reproducible patient-level pretraining yields transferable embeddings, providing a practical path toward scalable patient-first histopathology foundation models.
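The abstract describes pooling all slides from one patient case into a single representation via a case transformer. As a rough illustration only, here is a minimal NumPy sketch of attention-pooling a variable number of slide embeddings into one patient-level vector. The single learned query, the parameter names, and the random initialization are assumptions for the sketch, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def case_pool(slide_embs, rng):
    """Attention-pool slide embeddings (n_slides, d) into one
    patient-level embedding (d,). A sketch, not MOOZY's case transformer."""
    n, d = slide_embs.shape
    # Hypothetical learned parameters, randomly initialized here.
    W_q = rng.standard_normal((d, d)) / np.sqrt(d)
    case_query = rng.standard_normal(d)          # learnable case-level query
    q = case_query @ W_q                          # (d,)
    attn = softmax(slide_embs @ q / np.sqrt(d))   # (n,) weights over slides
    return attn @ slide_embs                      # weighted sum over slides

# Example: one patient case with 3 slides of 16-dim features.
rng = np.random.default_rng(0)
embs = rng.standard_normal((3, 16))
patient_vec = case_pool(embs, rng)
print(patient_vec.shape)  # (16,)
```

The point of the sketch is only that the patient, not the slide, is the unit of representation: however many slides a case contains, the output is one fixed-size vector that downstream frozen-feature probes can consume.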
Get this paper in your agent:
hf papers read 2603.27048
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
Models citing this paper: 1
