Dual Perspectives in Emotion Attribution: A Generator-Interpreter Framework for Cross-Cultural Analysis of Emotion in LLMs
arXiv:2603.29077v1 Announce Type: new
Abstract: Large language models (LLMs) are increasingly used in cross-cultural systems to understand and adapt to human emotions, which are shaped by cultural norms of expression and interpretation. However, prior work on emotion attribution has focused mainly on interpretation, overlooking the cultural background of emotion generators. This assumption of universality neglects variation in how emotions are expressed and perceived across nations. To address this gap, we propose a Generator-Interpreter framework that captures dual perspectives of emotion attribution by considering both expression and interpretation. We systematically evaluate six LLMs on an emotion attribution task using data from 15 countries. Our analysis reveals that performance variations depend on the emotion type and cultural context. While generator-interpreter alignment effects are present, the generator's country of origin has a stronger impact on performance. We call for culturally sensitive emotion modeling in LLM-based systems to improve robustness and fairness in emotion understanding across diverse cultural contexts.
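The dual-perspective setup described in the abstract can be pictured as a full grid of (generator country, interpreter country, emotion) conditions, which lets alignment effects (matched cultures) be separated from generator-origin effects. The sketch below is purely illustrative: the country and emotion lists, field names, and the `build_conditions` helper are our own placeholders, not the paper's data or code.

```python
from itertools import product

# Illustrative placeholders, not the paper's 15-country dataset.
COUNTRIES = ["Japan", "Germany", "Kyrgyzstan"]
EMOTIONS = ["joy", "anger", "shame"]

def build_conditions(countries, emotions):
    """Enumerate generator/interpreter/emotion cells for the attribution task."""
    conditions = []
    for gen, interp, emo in product(countries, countries, emotions):
        conditions.append({
            "generator_country": gen,       # culture of the person expressing the emotion
            "interpreter_country": interp,  # culture assumed when judging the expression
            "emotion": emo,
            "aligned": gen == interp,       # matched-culture (alignment) condition
        })
    return conditions

conds = build_conditions(COUNTRIES, EMOTIONS)
print(len(conds))                        # 3 countries x 3 countries x 3 emotions = 27 cells
print(sum(c["aligned"] for c in conds))  # 9 matched-culture cells
```

Comparing model accuracy across the `aligned` and non-aligned cells, and across rows grouped by `generator_country`, is one way such a grid supports the abstract's contrast between alignment effects and generator-origin effects.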
Subjects:
Computation and Language (cs.CL)
Cite as: arXiv:2603.29077 [cs.CL]
(or arXiv:2603.29077v1 [cs.CL] for this version)
https://doi.org/10.48550/arXiv.2603.29077
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Aizirek Turdubaeva [view email] [v1] Mon, 30 Mar 2026 23:32:17 UTC (1,539 KB)