ChartNet: A Million-Scale, High-Quality Multimodal Dataset for Robust Chart Understanding
ChartNet is a large-scale multimodal dataset featuring 1.5 million chart samples with aligned visual, textual, and numerical components, designed to enhance chart interpretation and reasoning capabilities in multimodal models.
Published on Mar 28
Authors:
Abstract
Understanding charts requires models to jointly reason over geometric visual patterns, structured numerical data, and natural language -- a capability where current vision-language models (VLMs) remain limited. We introduce ChartNet, a high-quality, million-scale multimodal dataset designed to advance chart interpretation and reasoning. ChartNet leverages a novel code-guided synthesis pipeline to generate 1.5 million diverse chart samples spanning 24 chart types and 6 plotting libraries. Each sample consists of five aligned components: plotting code, rendered chart image, data table, natural language summary, and question-answer pairs with reasoning, providing fine-grained cross-modal alignment. To capture the full spectrum of chart comprehension, ChartNet additionally includes specialized subsets covering human-annotated data, real-world data, safety, and grounding. Moreover, a rigorous quality-filtering pipeline ensures visual fidelity, semantic accuracy, and diversity across chart representations. Fine-tuning on ChartNet consistently improves results across benchmarks, demonstrating its utility as large-scale supervision for multimodal models. As the largest open-source dataset of its kind, ChartNet aims to support the development of foundation models with robust and generalizable capabilities for data visualization understanding. The dataset is publicly available at https://huggingface.co/datasets/ibm-granite/ChartNet.
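For readers who want to work with the data directly, the corpus can be pulled from the Hub with the standard datasets library. Below is a minimal sketch, not an official loader: the "train" split name and the use of streaming are assumptions, and since this page does not document the per-sample schema, the loop prints whatever keys each record carries rather than hard-coding the five component names.

# Minimal sketch: streaming ChartNet from the Hugging Face Hub.
# Assumption: a "train" split exists; check the dataset card for the schema.
from datasets import load_dataset

# streaming=True avoids materializing all 1.5M samples on disk
ds = load_dataset("ibm-granite/ChartNet", split="train", streaming=True)

sample = next(iter(ds))  # first record
for key, value in sample.items():  # expect the five aligned components
    print(key, type(value).__name__)

Dropping streaming=True would instead download and cache the full corpus, which is rarely what you want for a first look at a million-scale dataset.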
Get this paper in your agent:
hf papers read 2603.27064
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
Models citing this paper: 2
Datasets citing this paper: 1
Spaces citing this paper: 0
Collections including this paper: 0