Top 10 robotics developments of March 2026 - The Robot Report
Could not retrieve the full article text.
Read on Google News - AI robotics →
https://news.google.com/rss/articles/CBMie0FVX3lxTE5IYU9wVnhZaUMxQXVGYUZXdGRETGhGOGRCN3F4bWFUUHBTTFQ3d0MwQ1RxLUxGWlpRZWg5LTJ6ODRPMjNxa3p1clBieGpTdVhPM0dWYW1JWWV5M0dsVmx3ZG9yQjBLT3lWRlN1S2lVRHVKMk9aeW5TdElBUQ?oc=5

Trivial Vocabulary Bans Improve LLM Reasoning More Than Deep Linguistic Constraints
arXiv:2604.02699v1 Announce Type: new Abstract: A previous study reported that E-Prime (English without the verb "to be") selectively altered reasoning in language models, with cross-model correlations suggesting a structural signature tied to which vocabulary was removed. I designed a replication with active controls to test the proposed mechanism: cognitive restructuring through specific vocabulary-cognition mappings. The experiment tested five conditions (unconstrained control, E-Prime, No-Have, elaborated metacognitive prompt, neutral filler-word ban) across six models and seven reasoning tasks (N=15,600 trials, 11,919 after compliance filtering). Every prediction from the cognitive restructuring hypothesis was disconfirmed. All four treatments outperformed the control (83.0%) …
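The "compliance filtering" step mentioned in the abstract (dropping trials whose responses violate the active vocabulary ban) can be sketched as a simple lexical check. The function names and the banned-word list below are illustrative assumptions, not the paper's actual code; a full E-Prime checker would also catch contractions like "it's" and "they're":

```python
import re

# Forms of "to be" banned under E-Prime (a simplified, assumed list;
# possessive/contraction forms like "'s" and "'re" are not handled here).
BE_FORMS = r"\b(?:am|is|are|was|were|be|being|been|isn't|aren't|wasn't|weren't)\b"
BE_PATTERN = re.compile(BE_FORMS, re.IGNORECASE)

def complies_with_eprime(text: str) -> bool:
    """Return True if the text contains no listed form of the verb 'to be'."""
    return BE_PATTERN.search(text) is None

def filter_compliant(responses: list[str]) -> list[str]:
    """Keep only responses that respect the vocabulary ban."""
    return [r for r in responses if complies_with_eprime(r)]
```

For example, `filter_compliant(["The answer is 42.", "The model computed 42."])` keeps only the second response; applying such a filter per condition would explain why the reported trial count drops from 15,600 to 11,919.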

Beyond Resolution Rates: Behavioral Drivers of Coding Agent Success and Failure
arXiv:2604.02547v1 Announce Type: new Abstract: Coding agents represent a new paradigm in automated software engineering, combining the reasoning capabilities of Large Language Models (LLMs) with tool-augmented interaction loops. However, coding agents still have severe limitations: top-ranked LLM-based coding agents fail on over 20% of benchmarked problems, yet we lack a systematic understanding of why agents fail (i.e., the causes) and how failure unfolds behaviorally. We present a large-scale empirical study analyzing 9,374 trajectories from 19 agents (8 coding agent frameworks, 14 LLMs) on 500 tasks. We organize our analysis around three research questions. First, we investigate why agents fail on specific tasks and find that patch complexity alone does not explain difficulty …

AI Disclosure with DAISY
arXiv:2604.02760v1 Announce Type: new Abstract: The use of AI tools in research is becoming routine, alongside growing consensus that such use should be transparently disclosed. However, AI disclosure statements remain rare and inconsistent, with policies offering limited guidance and authors facing social, cognitive, and emotional barriers when reporting AI use. To explore how structured disclosure shapes what authors report and how they experience disclosure, we present DAISY (Disclosure of AI-uSe in Your Research), a form-based tool for generating AI disclosure statements. DAISY was developed from literature-derived requirements and co-design (N=11), and deployed in a user study with authors (N=31). DAISY-supported disclosures met more completeness criteria, offering clearer breakdowns …