World2Rules: A Neuro-Symbolic Framework for Learning World-Governing Safety Rules for Aviation
arXiv:2603.28952v1 Announce Type: new
Abstract: Many real-world safety-critical systems are governed by explicit rules that define unsafe world configurations and constrain agent interactions. In practice, these rules are complex and context-dependent, making manual specification incomplete and error-prone. Learning such rules from real-world multimodal data is further challenged by noise, inconsistency, and sparse failure cases. Neural models can extract structure from text and visual data but lack formal guarantees, while symbolic methods provide verifiability yet are brittle when applied directly to imperfect observations. We present World2Rules, a neuro-symbolic framework for learning world-governing safety rules from real-world multimodal aviation data. World2Rules learns from both nominal operational data and aviation crash and incident reports, treating neural models as proposal mechanisms for candidate symbolic facts and inductive logic programming as a verification layer. The framework employs hierarchical reflective reasoning, enforcing consistency across examples, subsets, and rules to filter unreliable evidence, aggregate only mutually consistent components, and prune unsupported hypotheses. This design limits error propagation from noisy neural extractions and yields compact, interpretable first-order logic rules that characterize unsafe world configurations. We evaluate World2Rules on real-world aviation safety data and show that it learns rules that achieve a 23.6% higher F1 score than purely neural baselines and a 43.2% higher F1 score than a single-pass neuro-symbolic baseline, while remaining suitable for safety-critical reasoning and formal analysis.
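The propose-then-verify pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the predicates, confidence threshold, and support count are invented for the example, and the simple support filter stands in for the paper's hierarchical reflective reasoning and ILP verification layer.

```python
from collections import Counter
from typing import NamedTuple

class Fact(NamedTuple):
    predicate: str      # e.g. "low_altitude" (hypothetical predicate names)
    example_id: int     # which incident report the fact was extracted from
    confidence: float   # neural extractor's confidence in the extraction

def filter_consistent(facts, min_conf=0.6, min_support=2):
    """Example-level filtering (drop low-confidence extractions), then
    subset-level filtering (keep only predicates asserted consistently
    across enough examples) -- a toy stand-in for the paper's
    hierarchical consistency checks."""
    reliable = [f for f in facts if f.confidence >= min_conf]
    support = Counter(f.predicate for f in reliable)
    return {p for p, n in support.items() if n >= min_support}

def propose_rule(unsafe_preds):
    """Rule level: assemble the surviving predicates into a candidate
    first-order rule (real verification would be done by ILP)."""
    body = ", ".join(sorted(unsafe_preds))
    return f"unsafe(S) :- {body}."

facts = [
    Fact("low_altitude", 0, 0.9),
    Fact("low_altitude", 1, 0.8),
    Fact("engine_failure", 0, 0.95),
    Fact("engine_failure", 2, 0.7),
    Fact("bird_strike", 1, 0.3),   # noisy neural extraction, pruned
]
print(propose_rule(filter_consistent(facts)))
# → unsafe(S) :- engine_failure, low_altitude.
```

The point of the sketch is the division of labor: the neural side only proposes facts, and a symbolic layer decides which ones are consistent enough to enter a rule.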
Comments: 19 pages, 6 figures
Subjects:
Robotics (cs.RO)
Cite as: arXiv:2603.28952 [cs.RO]
(or arXiv:2603.28952v1 [cs.RO] for this version)
https://doi.org/10.48550/arXiv.2603.28952
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Haichuan Wang [v1] Mon, 30 Mar 2026 19:48:03 UTC (6,625 KB)


RAAP: Retrieval-Augmented Affordance Prediction with Cross-Image Action Alignment
arXiv:2603.29419v1 Announce Type: new Abstract: Understanding object affordances is essential for enabling robots to perform purposeful and fine-grained interactions in diverse and unstructured environments. However, existing approaches either rely on retrieval, which is fragile due to sparsity and coverage gaps, or on large-scale models, which frequently mislocalize contact points and mispredict post-contact actions when applied to unseen categories, thereby hindering robust generalization. We introduce Retrieval-Augmented Affordance Prediction (RAAP), a framework that unifies affordance retrieval with alignment-based learning. By decoupling static contact localization and dynamic action direction, RAAP transfers contact points via dense correspondence and predicts action directions throu
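The contact-point transfer via dense correspondence mentioned in the abstract can be sketched as a nearest-neighbour match in feature space. This is an illustrative assumption, not RAAP's implementation: the toy per-pixel descriptors below stand in for dense visual features, and the contact index and distance metric are invented for the example.

```python
def transfer_contact(exemplar_feats, query_feats, contact_idx):
    """Transfer a contact point from a retrieved exemplar to the query:
    the descriptor at the exemplar's annotated contact pixel is matched
    to its nearest neighbour among the query's per-pixel descriptors.
    Descriptors are plain lists here; a real system would use dense
    visual features."""
    target = exemplar_feats[contact_idx]

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # index of the query pixel whose descriptor best matches the contact pixel
    return min(range(len(query_feats)), key=lambda i: sq_dist(target, query_feats[i]))

# toy 1-D "images": one descriptor per pixel (hypothetical values)
exemplar = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
query    = [[0.9, 0.1], [0.1, 0.9], [0.4, 0.6]]
print(transfer_contact(exemplar, query, contact_idx=1))
# → 0  (query pixel 0 is closest to the exemplar's contact descriptor)
```

Decoupling this static localization step from the prediction of post-contact action directions is what the abstract identifies as RAAP's central design choice.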