Concept frustration: Aligning human concepts and machine representations
Abstract: Aligning human-interpretable concepts with the internal representations learned by modern machine learning systems remains a central challenge for interpretable AI. We introduce a geometric framework for comparing supervised human concepts with unsupervised intermediate representations extracted from foundation model embeddings. Motivated by the role of conceptual leaps in scientific discovery, we formalise the notion of concept frustration: a contradiction that arises when an unobserved concept induces relationships between known concepts that cannot be made consistent within an existing ontology. We develop task-aligned similarity measures that detect concept frustration between supervised concept-based models and unsupervised representations derived from foundation models, and show that the phenomenon is detectable in task-aligned geometry while conventional Euclidean comparisons fail. Under a linear-Gaussian generative model we derive a closed-form expression for Bayes-optimal concept-based classifier accuracy, decomposing the predictive signal into known-known, known-unknown and unknown-unknown contributions and identifying analytically where frustration affects performance. Experiments on synthetic data and on real language and vision tasks demonstrate that frustration can be detected in foundation model representations, and that incorporating a frustrating concept into an interpretable model reorganises the geometry of its learned concept representations to better align human and machine reasoning. These results suggest a principled framework for diagnosing incomplete concept ontologies and aligning human and machine conceptual reasoning, with implications for the development and validation of safe interpretable AI for high-risk applications.
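To make the contrast between Euclidean and task-aligned comparisons concrete, the sketch below illustrates the general idea on a toy example: rather than comparing a supervised concept direction and an unsupervised embedding direction with plain cosine similarity, both are first mapped through a downstream task head so that only task-relevant components contribute. This is a minimal illustration of the idea, not the paper's actual measure; the task head `W`, the dimensions, and the random directions are all assumptions made for the example.

```python
import numpy as np

def euclidean_similarity(u, v):
    """Plain cosine similarity in the raw embedding space."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def task_aligned_similarity(u, v, W):
    """Cosine similarity after mapping both directions through a task head W
    (e.g. the weights of a downstream linear classifier), so that only
    task-relevant components of each direction contribute."""
    u_t, v_t = W @ u, W @ v
    return u_t @ v_t / (np.linalg.norm(u_t) * np.linalg.norm(v_t))

# Hypothetical usage: a supervised concept direction (probe weights) vs. an
# unsupervised direction of the foundation-model embedding (e.g. a PCA component).
rng = np.random.default_rng(0)
d, k = 64, 5                       # embedding dimension, number of task outputs
W = rng.normal(size=(k, d))        # stand-in task head
concept_dir = rng.normal(size=d)   # supervised concept direction
latent_dir = rng.normal(size=d)    # unsupervised embedding direction

print(euclidean_similarity(concept_dir, latent_dir))
print(task_aligned_similarity(concept_dir, latent_dir, W))
```

The kind of closed-form Bayes-optimal accuracy the abstract refers to can likewise be illustrated in a toy linear-Gaussian setting (again an assumption for illustration, not the paper's model): labels are the sign of a linear score over known Gaussian concepts plus one unobserved Gaussian concept, and the Bayes-optimal classifier restricted to the known concepts achieves an accuracy given by a Gaussian sign-agreement (orthant) probability, which the Monte Carlo check below confirms.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
a = rng.normal(size=d)   # weights on the known concepts
b = 1.5                  # weight on the single unobserved concept

# Toy linear-Gaussian generative model: label = sign of a linear score over
# known concepts x plus an unobserved concept u, all standard Gaussian.
n = 200_000
x = rng.normal(size=(n, d))
u = rng.normal(size=n)
y = np.sign(x @ a + b * u)

# The Bayes-optimal classifier that sees only the known concepts predicts sign(a^T x).
acc_mc = np.mean(np.sign(x @ a) == y)

# Closed form: two zero-mean jointly Gaussian scores agree in sign with
# probability 1/2 + arcsin(rho)/pi, where rho is their correlation; the
# unobserved concept only enters through the denominator of rho.
rho = np.linalg.norm(a) / np.sqrt(np.linalg.norm(a) ** 2 + b ** 2)
acc_closed = 0.5 + np.arcsin(rho) / np.pi

print(acc_mc, acc_closed)  # the two estimates should agree to ~3 decimal places
```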
Comments: 34 pages, 7 figures
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
Cite as: arXiv:2603.29654 [cs.LG]
(or arXiv:2603.29654v1 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2603.29654
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Enrico Parisini [v1] Tue, 31 Mar 2026 12:17:21 UTC (1,312 KB)