AI News Hub · by Eigenvector

From Moments to Models: Graphon-Mixture Learning for Mixup and Contrastive Learning

arXiv stat.ML · Ali Azizpour, Reza Ramezanpour, Santiago Segarra · April 1, 2026 · 1 min read

arXiv:2510.03690v3 Announce Type: replace-cross


Abstract: Real-world graph datasets often arise from mixtures of populations, where graphs are generated by multiple distinct underlying distributions. In this work, we propose a unified framework that explicitly models graph data as a mixture of probabilistic graph generative models represented by graphons. To characterize and estimate these graphons, we leverage graph moments (motif densities) to cluster graphs generated from the same underlying model. We establish a novel theoretical guarantee, deriving a tighter bound showing that graphs sampled from structurally similar graphons exhibit similar motif densities with high probability. This result enables principled estimation of graphon mixture components. We show how incorporating estimated graphon mixture components enhances two widely used downstream paradigms: graph data augmentation via mixup and graph contrastive learning. By conditioning these methods on the underlying generative models, we develop graphon-mixture-aware mixup (GMAM) and model-aware graph contrastive learning (MGCL). Extensive experiments on both simulated and real-world datasets demonstrate strong empirical performance. In supervised learning, GMAM outperforms existing augmentation strategies, achieving new state-of-the-art accuracy on 6 out of 7 datasets. In unsupervised learning, MGCL performs competitively across seven benchmark datasets and achieves the lowest average rank overall.
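The clustering step described above — grouping graphs by their motif densities to recover mixture components — can be sketched in a toy setting. The sketch below is an illustrative assumption, not the paper's implementation: it uses only two motif densities (edges and triangles), draws graphs from two constant graphons (Erdős–Rényi models), and separates them with a hand-rolled two-means loop. The functions `motif_densities`, `sample_er`, and `cluster_two` are hypothetical names introduced here.

```python
import numpy as np

def motif_densities(A):
    """Injective edge and triangle densities of a simple graph,
    given its symmetric adjacency matrix A with zero diagonal."""
    n = A.shape[0]
    edge = A.sum() / (n * (n - 1))
    # trace(A^3) counts each triangle 6 times (ordered vertex triples)
    tri = np.trace(A @ A @ A) / (n * (n - 1) * (n - 2))
    return np.array([edge, tri])

def sample_er(n, p, rng):
    """Sample a graph from the constant graphon W(x, y) = p."""
    upper = np.triu(rng.random((n, n)) < p, 1).astype(float)
    return upper + upper.T

def cluster_two(X, iters=50):
    """Two-means on moment vectors, seeded at the extreme edge
    densities so both mixture components get a starting center."""
    centers = X[[np.argmin(X[:, 0]), np.argmax(X[:, 0])]].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(2):
            centers[j] = X[labels == j].mean(0)
    return labels

# Mixture of two graphons: 20 graphs each from W = 0.2 and W = 0.6.
rng = np.random.default_rng(42)
graphs = [sample_er(60, 0.2, rng) for _ in range(20)] + \
         [sample_er(60, 0.6, rng) for _ in range(20)]
X = np.vstack([motif_densities(A) for A in graphs])
labels = cluster_two(X)
```

Because motif densities concentrate around the graphon's values (the concentration the paper's bound makes precise), the two components separate cleanly in moment space; the recovered labels could then condition mixup pairs or contrastive views on the same generative model.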

Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)

Cite as: arXiv:2510.03690 [cs.LG]

(or arXiv:2510.03690v3 [cs.LG] for this version)

https://doi.org/10.48550/arXiv.2510.03690

arXiv-issued DOI via DataCite

Submission history

From: Ali Azizpour
[v1] Sat, 4 Oct 2025 06:03:04 UTC (9,627 KB)
[v2] Thu, 9 Oct 2025 17:55:28 UTC (9,627 KB)
[v3] Tue, 31 Mar 2026 16:42:15 UTC (4,951 KB)
