XR is XR: Rethinking MR and XR as Neutral Umbrella Terms
arXiv:2603.29939v1 Announce Type: new

Abstract: The term XR is currently widely used as an expression encompassing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). However, there is no clear consensus regarding its origin or meaning. XR is sometimes explained as an abbreviation for Extended Reality, but multiple interpretations exist regarding its etymology and formation process. This paper organizes the historical formation of terminology related to VR, AR, MR, and XR, and reexamines the context in which the term XR emerged and how it has spread. In particular, by presenting a timeline that distinguishes between the coinage of terms and the drivers of their adoption, we suggest that XR, as an umbrella term, functions not as an abbreviation of Extended Reality, but rather…