Evaluation of Generative Models for Emotional 3D Animation Generation in VR
arXiv:2512.16081v2 Announce Type: replace-cross
Abstract: Social interactions incorporate nonverbal signals to convey emotions alongside speech, including facial expressions and body gestures. Generative models have demonstrated promising results in creating full-body nonverbal animations synchronized with speech; however, evaluations using statistical metrics in 2D settings fail to fully capture user-perceived emotions, limiting our understanding of model effectiveness. To address this, we evaluate emotional 3D animation generative models within a Virtual Reality (VR) environment, emphasizing user-centric metrics (emotional arousal, realism, naturalness, enjoyment, diversity, and interaction quality) in a real-time human-agent interaction scenario. Through a user study (N=48), we examine perceived emotional quality for three state-of-the-art speech-driven 3D animation methods across two emotions: happiness (high arousal) and neutral (mid arousal). Additionally, we compare these generative models against real human expressions obtained via a reconstruction-based method to assess their strengths and limitations and how closely they replicate real human facial and body expressions. Our results demonstrate that methods explicitly modeling emotions lead to higher recognition accuracy than those focusing solely on speech-driven synchrony. Users rated the realism and naturalness of happy animations significantly higher than those of neutral animations, highlighting the limitations of current generative models in handling subtle emotional states. Generative models underperformed reconstruction-based methods in facial expression quality, and all methods received relatively low ratings for animation enjoyment and interaction quality, emphasizing the importance of incorporating user-centric evaluations into generative model development. Finally, participants positively recognized animation diversity across all generative models.
Comments: 20 pages, 5 figures. Webpage: this https URL
Subjects:
Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Multiagent Systems (cs.MA)
Cite as: arXiv:2512.16081 [cs.HC]
(or arXiv:2512.16081v2 [cs.HC] for this version)
https://doi.org/10.48550/arXiv.2512.16081
arXiv-issued DOI via DataCite
Related DOI:
https://doi.org/10.3389/fcomp.2025.1598099
DOI(s) linking to related resources
Submission history
From: Kiran Chhatre [v1] Thu, 18 Dec 2025 01:56:22 UTC (32,509 KB) [v2] Mon, 30 Mar 2026 23:16:13 UTC (32,450 KB)