FIRMED: A Peak-Centered Multimodal Dataset with Fine-Grained Annotation for Emotion Recognition
arXiv:2507.02350v3 Announce Type: replace
Abstract: Traditional video-induced physiological datasets usually rely on whole-trial labels, which introduce temporal label noise in dynamic emotion recognition. We present FIRMED, a peak-centered multimodal dataset based on an immediate-recall annotation paradigm, with synchronized EEG, ECG, GSR, PPG, and facial recordings from 35 participants. FIRMED provides event-centered timestamps, emotion labels, and intensity annotations, and its annotation quality is supported by subjective and physiological validation. Benchmark experiments show that FIRMED consistently outperforms whole-trial labeling, yielding an average gain of 3.8 percentage points across eight EEG-based classifiers, with further improvements under multimodal fusion. FIRMED provides a practical benchmark for temporally localized supervision in multimodal affective computing.
Subjects:
Human-Computer Interaction (cs.HC)
Cite as: arXiv:2507.02350 [cs.HC]
(or arXiv:2507.02350v3 [cs.HC] for this version)
https://doi.org/10.48550/arXiv.2507.02350
arXiv-issued DOI via DataCite
Submission history
From: Hao Tang [view email] [v1] Thu, 3 Jul 2025 06:23:51 UTC (10,401 KB) [v2] Wed, 5 Nov 2025 08:08:25 UTC (13,708 KB) [v3] Tue, 31 Mar 2026 12:06:39 UTC (7,784 KB)