AI News Hub by Eigenvector

ChartNet: A Million-Scale, High-Quality Multimodal Dataset for Robust Chart Understanding

HuggingFace Papers · by Jovana Kondic · March 28, 2026 · 2 min read

ChartNet is a large-scale multimodal dataset featuring 1.5 million chart samples with aligned visual, textual, and numerical components, designed to enhance chart interpretation and reasoning capabilities in multimodal models. (7 upvotes on HuggingFace)



Abstract

Understanding charts requires models to jointly reason over geometric visual patterns, structured numerical data, and natural language, a capability where current vision-language models (VLMs) remain limited. We introduce ChartNet, a high-quality, million-scale multimodal dataset designed to advance chart interpretation and reasoning. ChartNet leverages a novel code-guided synthesis pipeline to generate 1.5 million diverse chart samples spanning 24 chart types and 6 plotting libraries. Each sample consists of five aligned components: plotting code, rendered chart image, data table, natural language summary, and question answering with reasoning, providing fine-grained cross-modal alignment. To capture the full spectrum of chart comprehension, ChartNet additionally includes specialized subsets covering human-annotated data, real-world data, safety, and grounding. Moreover, a rigorous quality-filtering pipeline ensures visual fidelity, semantic accuracy, and diversity across chart representations. Fine-tuning on ChartNet consistently improves results across benchmarks, demonstrating its utility as large-scale supervision for multimodal models. As the largest open-source dataset of its kind, ChartNet aims to support the development of foundation models with robust and generalizable capabilities for data visualization understanding. The dataset is publicly available at https://huggingface.co/datasets/ibm-granite/ChartNet.
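The abstract describes each ChartNet sample as five aligned components produced by a code-guided synthesis pipeline, with a quality filter enforcing semantic accuracy between modalities. The sketch below is a minimal, hypothetical illustration of that record shape and one such cross-modal consistency check; the field names and the `numbers_consistent` helper are assumptions for illustration, not the dataset's actual schema or filtering code.

```python
from dataclasses import dataclass, field
import re

@dataclass
class ChartNetSample:
    """One sample with the five aligned components the abstract lists.
    Field names are illustrative, not the dataset's real schema."""
    plotting_code: str   # e.g. matplotlib source that would render the chart
    chart_image: bytes   # rendered image bytes (empty placeholder here)
    data_table: dict     # column name -> list of values
    summary: str         # natural-language description of the chart
    qa: list = field(default_factory=list)  # (question, answer, reasoning)

def numbers_consistent(sample: ChartNetSample) -> bool:
    """Toy semantic-accuracy filter: every number quoted in the summary
    must appear somewhere in the data table."""
    table_values = {str(v) for col in sample.data_table.values() for v in col}
    quoted = re.findall(r"\d+(?:\.\d+)?", sample.summary)
    return all(q in table_values for q in quoted)

sample = ChartNetSample(
    plotting_code="plt.bar(['Q1', 'Q2'], [120, 95])",
    chart_image=b"",
    data_table={"quarter": ["Q1", "Q2"], "sales": [120, 95]},
    summary="Sales fell from 120 to 95 over the period.",
    qa=[("Which quarter had higher sales?", "Q1", "120 > 95")],
)
print(numbers_consistent(sample))  # True: 120 and 95 both appear in the table
```

A real filter would also need the visual-fidelity and diversity checks the abstract mentions (e.g. verifying the rendered image against the plotting code), which are beyond a table-to-text comparison like this one.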


Get this paper in your agent:

hf papers read 2603.27064

Don't have the latest CLI?

curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper: 2 · Datasets citing this paper: 1 · Spaces citing this paper: 0
