Conversational Successes and Breakdowns in Everyday Smart Glasses Use
Abstract: Non-Display Smart Glasses hold the potential to support everyday activities by combining continuous environmental sensing with voice-only interaction powered by large language models (LLMs). Understanding how conversational successes and breakdowns arise in everyday contexts can better inform the design of future voice-only interfaces. To investigate this, we conducted a month-long collaborative autoethnography (n=2) to identify patterns of successes and breakdowns when using such devices. We then compare these patterns with prior findings on voice-only interactions to highlight the unique affordances and opportunities offered by non-display smart glasses.
Subjects: Human-Computer Interaction (cs.HC)
Cite as: arXiv:2602.22340 [cs.HC]
(or arXiv:2602.22340v2 [cs.HC] for this version)
https://doi.org/10.48550/arXiv.2602.22340
arXiv-issued DOI via DataCite
Submission history
From: Xiuqi Zhu
[v1] Wed, 25 Feb 2026 19:05:27 UTC (20,112 KB)
[v2] Thu, 2 Apr 2026 16:33:33 UTC (20,112 KB)