From Moments to Models: Graphon-Mixture Learning for Mixup and Contrastive Learning
arXiv:2510.03690v3 Announce Type: replace-cross
Abstract: Real-world graph datasets often arise from mixtures of populations, where graphs are generated by multiple distinct underlying distributions. In this work, we propose a unified framework that explicitly models graph data as a mixture of probabilistic graph generative models represented by graphons. To characterize and estimate these graphons, we leverage graph moments (motif densities) to cluster graphs generated from the same underlying model. We establish a novel theoretical guarantee, deriving a tighter bound showing that graphs sampled from structurally similar graphons exhibit similar motif densities with high probability. This result enables principled estimation of graphon mixture components. We show how incorporating estimated graphon mixture components enhances two widely used downstream paradigms: graph data augmentation via mixup and graph contrastive learning. By conditioning these methods on the underlying generative models, we develop graphon-mixture-aware mixup (GMAM) and model-aware graph contrastive learning (MGCL). Extensive experiments on both simulated and real-world datasets demonstrate strong empirical performance. In supervised learning, GMAM outperforms existing augmentation strategies, achieving new state-of-the-art accuracy on 6 out of 7 datasets. In unsupervised learning, MGCL performs competitively across seven benchmark datasets and achieves the lowest average rank overall.
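The core idea — graphs drawn from the same graphon concentrate around the same motif densities, so those densities can serve as clustering features for the mixture — can be illustrated with a minimal sketch. This is not the paper's estimator or its bound; the graphons `W1`, `W2` and the helper names (`sample_graphon`, `motif_densities`) are illustrative choices, and only two motifs (edges and triangles) are used.

```python
import numpy as np

rng = np.random.default_rng(0)

def motif_densities(A):
    """Edge and triangle densities (graph moments) of a simple undirected graph."""
    n = A.shape[0]
    edge_density = A.sum() / (n * (n - 1))       # A.sum() counts each edge twice
    triangles = np.trace(A @ A @ A) / 6          # closed 3-walks / 6 = triangle count
    tri_density = triangles / (n * (n - 1) * (n - 2) / 6)
    return np.array([edge_density, tri_density])

def sample_graphon(W, n):
    """Sample an n-node graph from graphon W: latent u_i ~ U[0,1], edge prob W(u_i, u_j)."""
    u = rng.random(n)
    P = W(u[:, None], u[None, :])
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)
    return A + A.T

# Two toy graphons: constant (Erdos-Renyi with p = 0.5) and the product W(x, y) = x * y.
W1 = lambda x, y: 0.5 * np.ones_like(x * y)
W2 = lambda x, y: x * y

graphs = [sample_graphon(W1, 200) for _ in range(10)] + \
         [sample_graphon(W2, 200) for _ in range(10)]
feats = np.stack([motif_densities(A) for A in graphs])

# Motif-density vectors from the same graphon concentrate together, so a simple
# clustering step (e.g., k-means on `feats`) separates the two mixture components.
print(feats[:10].mean(axis=0))   # near (0.5, 0.125) for the ER component
print(feats[10:].mean(axis=0))   # near (0.25, ~0.037) for W(x, y) = x * y
```

Once graphs are grouped this way, each cluster's graphon can be estimated from its members, which is the handle the abstract's mixup (GMAM) and contrastive (MGCL) variants condition on.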
Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:2510.03690 [cs.LG]
(or arXiv:2510.03690v3 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2510.03690
arXiv-issued DOI via DataCite
Submission history
From: Ali Azizpour [view email] [v1] Sat, 4 Oct 2025 06:03:04 UTC (9,627 KB) [v2] Thu, 9 Oct 2025 17:55:28 UTC (9,627 KB) [v3] Tue, 31 Mar 2026 16:42:15 UTC (4,951 KB)