
MOOZY: A Patient-First Foundation Model for Computational Pathology

HuggingFace Papers · March 27, 2026


Abstract

AI-generated summary

A patient-first pathology foundation model named MOOZY uses a case transformer to model dependencies across multiple slides from the same patient, achieving superior performance on diverse clinical tasks through open, reproducible pretraining.

Computational pathology needs whole-slide image (WSI) foundation models that transfer across diverse clinical tasks, yet current approaches remain largely slide-centric, often depend on private data and expensive paired-report supervision, and do not explicitly model relationships among multiple slides from the same patient. We present MOOZY, a patient-first pathology foundation model in which the patient case, not the individual slide, is the core unit of representation. MOOZY explicitly models dependencies across all slides from the same patient via a case transformer during pretraining, combining multi-stage open self-supervision with scaled low-cost task supervision. In Stage 1, we pretrain a vision-only slide encoder on 77,134 public slide feature grids using masked self-distillation. In Stage 2, we align these representations with clinical semantics using a case transformer and multi-task supervision over 333 tasks from 56 public datasets, including 205 classification and 128 survival tasks across four endpoints. Across eight held-out tasks with five-fold frozen-feature probe evaluation, MOOZY achieves best or tied-best performance on most metrics and improves macro averages over TITAN by +7.37%, +5.50%, and +7.83% and over PRISM by +8.83%, +10.70%, and +9.78% for weighted F1, weighted ROC-AUC, and balanced accuracy, respectively. MOOZY is also parameter efficient with 85.77M parameters, 14x smaller than GigaPath. These results demonstrate that open, reproducible patient-level pretraining yields transferable embeddings, providing a practical path toward scalable patient-first histopathology foundation models.
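The paper's central idea, a case transformer that attends across all slide representations from one patient, can be illustrated with a minimal single-head self-attention sketch. This is NumPy pseudocode with hypothetical dimensions and random (untrained) projection weights, not a reproduction of the MOOZY architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def case_attention(slide_embs, d_k=32):
    """Single-head self-attention over one patient's slide embeddings.

    slide_embs: (n_slides, d) array of per-slide features.
    Returns a (d,) patient-level embedding (mean pool over attended slides).
    """
    n, d = slide_embs.shape
    # Hypothetical random projections; in the real model these are trained.
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    Q, K, V = slide_embs @ Wq, slide_embs @ Wk, slide_embs @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n, n): slide-to-slide weights
    attended = attn @ V                     # each slide mixes in its peers
    return attended.mean(axis=0)            # pool to one patient vector

# A patient case with 3 slides and 64-dim slide features.
slides = rng.standard_normal((3, 64))
patient_emb = case_attention(slides)
print(patient_emb.shape)  # (64,)
```

The point of the attention step is that each slide's representation is updated in the context of the patient's other slides before pooling, which is what distinguishes a patient-first (case-level) model from slide-centric pooling.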

Links: arXiv page · PDF · Project page · GitHub
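The held-out evaluation uses five-fold frozen-feature probes: the pretrained encoder is never fine-tuned, and only a lightweight classifier is fit on its embeddings per fold. A minimal sketch of that protocol, with synthetic features and a ridge-regression probe standing in for whatever probe head the paper uses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "frozen" patient embeddings and binary labels (illustrative only).
X = rng.standard_normal((100, 16))
w_true = rng.standard_normal(16)
y = (X @ w_true > 0).astype(int)

def ridge_probe(X_tr, y_tr, X_te, lam=1.0):
    """Fit a ridge-regression probe on frozen features; threshold at 0.5."""
    d = X_tr.shape[1]
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)
    return (X_te @ w > 0.5).astype(int)

# Five-fold cross-validation: only the probe is refit per fold.
folds = np.array_split(rng.permutation(len(X)), 5)
accs = []
for i, te in enumerate(folds):
    tr = np.concatenate([f for j, f in enumerate(folds) if j != i])
    accs.append((ridge_probe(X[tr], y[tr], X[te]) == y[te]).mean())
print(f"mean accuracy over 5 folds: {np.mean(accs):.2f}")
```

Because the encoder stays frozen, probe accuracy directly measures how transferable the pretrained embeddings are, which is the quantity the paper's weighted-F1, ROC-AUC, and balanced-accuracy comparisons report.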

Get this paper in your agent:

hf papers read 2603.27048

Don't have the latest CLI?

curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper: 1

