Exploring and Analyzing the Effect of Avatar's Visual Style on Anxiety of English as Second Language (ESL) Speakers
arXiv:2311.05126v3 Announce Type: replace
Abstract: Virtual avatars offer new opportunities to reshape communication experiences beyond traditional live video. However, it remains unclear how avatar representations influence communication anxiety for English as a Second Language (ESL) speakers, and why such effects emerge. As a first step toward addressing this, we conducted a controlled laboratory study in which Mandarin-speaking ESL participants engaged in one-on-one conversations under three representation conditions: live video, stylized avatars, and realistic avatars. We assessed anxiety using both self-reported measures and physiological signals (EDA, ECG, PPG). Our results show that avatar style plays a critical role in shaping communication anxiety. While live video remained a strong baseline with low subjective anxiety, stylized avatars achieved comparable, and in some cases lower, physiological anxiety levels, whereas realistic avatars elicited higher anxiety. Beyond these effects, our findings reveal three underlying mechanisms that explain how avatar representations shape ESL communication anxiety: (1) facial expressiveness; (2) perceived feedback and fear of negative evaluation; and (3) contextual appropriateness. This work provides actionable design implications for developing avatar-mediated communication systems that support emotionally sustainable cross-linguistic interaction.
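The abstract does not describe the authors' analysis pipeline, but physiological anxiety signals like those named above (EDA, ECG, PPG) are commonly summarized as a tonic skin-conductance level, mean heart rate, and beat-to-beat variability. The sketch below is a hypothetical illustration of such summary indices, not the paper's method; the function name and toy values are invented for this example.

```python
import numpy as np

def physio_arousal_indices(eda_uS, ibi_s):
    """Summarize physiological arousal from two common signal streams.

    eda_uS : skin-conductance samples in microsiemens (from EDA)
    ibi_s  : inter-beat intervals in seconds (from ECG/PPG peak detection)

    Higher mean EDA, higher heart rate, and lower RMSSD (a standard
    heart-rate-variability measure) are conventionally read as higher
    arousal, which anxiety studies often use as an objective proxy.
    """
    eda = np.asarray(eda_uS, dtype=float)
    ibi = np.asarray(ibi_s, dtype=float)
    mean_scl = eda.mean()                        # tonic skin-conductance level
    hr_bpm = 60.0 / ibi.mean()                   # mean heart rate in beats/min
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))  # beat-to-beat variability
    return {"mean_scl_uS": mean_scl, "hr_bpm": hr_bpm, "rmssd_s": rmssd}

# Toy comparison: a calm baseline segment vs. a more aroused segment.
calm = physio_arousal_indices(eda_uS=[2.0, 2.1, 2.0],
                              ibi_s=[1.00, 1.02, 0.98, 1.00])
tense = physio_arousal_indices(eda_uS=[5.0, 5.3, 5.1],
                               ibi_s=[0.70, 0.69, 0.71, 0.70])
```

In a condition comparison like the study's (live video vs. stylized vs. realistic avatars), indices of this kind would be computed per participant per condition and then compared alongside the self-reported measures.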
Comments: 10 pages, 5 figures, 2 tables
Subjects: Human-Computer Interaction (cs.HC)
Cite as: arXiv:2311.05126 [cs.HC]
(or arXiv:2311.05126v3 [cs.HC] for this version)
https://doi.org/10.48550/arXiv.2311.05126
arXiv-issued DOI via DataCite
Submission history
From: Tianqi Liu [v1] Thu, 9 Nov 2023 03:43:07 UTC (6,950 KB) [v2] Fri, 30 Jan 2026 16:37:52 UTC (3,784 KB) [v3] Tue, 31 Mar 2026 18:50:47 UTC (1,396 KB)
