
Structural Compactness as a Complementary Criterion for Explanation Quality

ArXiv cs.AI · by Mohammad Mahdi Mesgari, Jackie Ma, Wojciech Samek, Sebastian Lapuschkin, Leander Weber · April 1, 2026

arXiv:2603.29491v1 Announce Type: new


Abstract: In the evaluation of attribution quality, the quantitative assessment of explanation legibility is particularly difficult, as it is influenced by varying shapes and internal organization of attributions not captured by simple statistics. To address this issue, we introduce Minimum Spanning Tree Compactness (MST-C), a graph-based structural metric that captures higher-order geometric properties of attributions, such as spread and cohesion. These components are combined into a single score that evaluates compactness, favoring attributions with salient points spread across a small area and spatially organized into few but cohesive clusters. We show that MST-C reliably distinguishes between explanation methods, exposes fundamental structural differences between models, and provides a robust, self-contained diagnostic for explanation compactness that complements existing notions of attribution complexity.
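The abstract names the core ingredient — a minimum spanning tree over an attribution map's salient points — but not the exact scoring formula. The sketch below illustrates the general idea under stated assumptions: the function names (`mst_total_length`, `compactness_score`), the saliency threshold `tau`, and the normalization by edge count are all illustrative choices, not the paper's MST-C definition. A spatially tight cluster of salient pixels yields short MST edges (compact); scattered salient pixels yield long ones.

```python
import math

def mst_total_length(points):
    """Total Euclidean edge length of the minimum spanning tree
    over a list of 2D points, via Prim's algorithm."""
    if len(points) < 2:
        return 0.0
    in_tree = {0}
    dist = [math.dist(points[0], p) for p in points]
    total = 0.0
    for _ in range(len(points) - 1):
        # Attach the nearest point not yet in the tree.
        j = min((i for i in range(len(points)) if i not in in_tree),
                key=lambda i: dist[i])
        total += dist[j]
        in_tree.add(j)
        # Relax distances against the newly added point.
        for i in range(len(points)):
            if i not in in_tree:
                dist[i] = min(dist[i], math.dist(points[j], points[i]))
    return total

def compactness_score(attribution, tau=0.5):
    """Illustrative compactness proxy (a sketch, not the paper's
    exact MST-C score): mean MST edge length over pixels whose
    absolute attribution is at least tau * max. Lower = more compact."""
    peak = max(abs(v) for row in attribution for v in row)
    points = [(r, c) for r, row in enumerate(attribution)
              for c, v in enumerate(row) if abs(v) >= tau * peak]
    if len(points) < 2:
        return 0.0
    return mst_total_length(points) / (len(points) - 1)
```

For example, a 10x10 map whose salient pixels form one tight 2x2 block scores a mean edge length of 1.0, while the same mass pushed to the four corners scores 9.0 — matching the abstract's preference for salient points "spread across a small area."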

Subjects: Artificial Intelligence (cs.AI)

Cite as: arXiv:2603.29491 [cs.AI]

(or arXiv:2603.29491v1 [cs.AI] for this version)

https://doi.org/10.48550/arXiv.2603.29491

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Mohammad Mahdi Mesgari [view email] [v1] Tue, 31 Mar 2026 09:36:52 UTC (23,903 KB)
