From Consensus to Split Decisions: ABC-Stratified Sentiment in Holocaust Oral Histories
Abstract: Polarity detection becomes substantially more challenging under domain shift, particularly in heterogeneous, long-form narratives with complex discourse structure, such as Holocaust oral histories. This paper presents a corpus-scale diagnostic study of off-the-shelf sentiment classifiers on long-form Holocaust oral histories, applying three pretrained transformer-based polarity classifiers to a corpus of 107,305 utterances and 579,013 sentences. After assembling model outputs, we introduce an agreement-based stability taxonomy (ABC) to stratify inter-model output stability. We report pairwise percent agreement, Cohen's kappa, Fleiss' kappa, and row-normalized confusion matrices to localize systematic disagreement. As an auxiliary descriptive signal, a T5-based emotion classifier is applied to stratified samples from each agreement stratum to compare emotion distributions across strata. The combination of multi-model label triangulation and the ABC taxonomy provides a cautious, operational framework for characterizing where and how sentiment models diverge in sensitive historical narratives. Inter-model agreement is low to moderate overall and is driven primarily by boundary decisions around neutrality.
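The agreement metrics named in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it computes pairwise percent agreement, Cohen's kappa, and a row-normalized confusion matrix for two polarity classifiers over a three-class label set, plus one plausible operationalization of the ABC strata for three models (A = unanimous, B = 2-vs-1 majority, C = three-way split) — the abstract does not spell out the taxonomy's exact rule, so that mapping is an assumption.

```python
import numpy as np

# Three-class polarity labels, matching the paper's setting.
LABELS = ["negative", "neutral", "positive"]
_IDX = {c: i for i, c in enumerate(LABELS)}

def percent_agreement(a, b):
    """Fraction of items on which two classifiers emit the same label."""
    a, b = np.asarray(a), np.asarray(b)
    return float((a == b).mean())

def cohens_kappa(a, b):
    """Chance-corrected pairwise agreement (Cohen's kappa)."""
    a, b = np.asarray(a), np.asarray(b)
    po = float((a == b).mean())  # observed agreement
    # expected agreement under independent marginals
    pe = sum(float((a == c).mean()) * float((b == c).mean()) for c in LABELS)
    return (po - pe) / (1.0 - pe)

def row_normalized_confusion(a, b):
    """Confusion matrix of model a (rows) vs model b (cols), each row summing to 1.
    Assumes every label in LABELS occurs at least once in a."""
    m = np.zeros((len(LABELS), len(LABELS)))
    for x, y in zip(a, b):
        m[_IDX[x], _IDX[y]] += 1.0
    return m / m.sum(axis=1, keepdims=True)

def abc_stratum(votes):
    """Assumed ABC rule for three models' labels on one item:
    A = all agree, B = two agree, C = all three differ."""
    return {1: "A", 2: "B", 3: "C"}[len(set(votes))]
```

Fleiss' kappa generalizes the pairwise chance correction to all three raters at once; an off-the-shelf implementation is available in `statsmodels.stats.inter_rater`.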
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2603.28913 [cs.CL]
(or arXiv:2603.28913v1 [cs.CL] for this version)
https://doi.org/10.48550/arXiv.2603.28913
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Daban Q. Jaff [v1] Mon, 30 Mar 2026 18:44:22 UTC (319 KB)