ENEIDE: A High Quality Silver Standard Dataset for Named Entity Recognition and Linking in Historical Italian
arXiv:2603.29801v1 Announce Type: new
Abstract: This paper introduces ENEIDE (Extracting Named Entities from Italian Digital Editions), a silver standard dataset for Named Entity Recognition and Linking (NERL) in historical Italian texts. The corpus comprises 2,111 documents with over 8,000 entity annotations semi-automatically extracted from two scholarly digital editions: Digital Zibaldone, the philosophical diary of the Italian poet Giacomo Leopardi (1798--1837), and Aldo Moro Digitale, the complete works of the Italian politician Aldo Moro (1916--1978). Annotations cover multiple entity types (person, location, organization, literary work) linked to Wikidata identifiers, including NIL entities that cannot be mapped to the knowledge graph. To the best of our knowledge, ENEIDE represents the first multi-domain, publicly available NERL dataset for historical Italian with training, development, and test splits. We present a methodology for semi-automatic annotation extraction from manually curated scholarly digital editions, including quality control and annotation enhancement procedures. Baseline experiments with state-of-the-art models demonstrate that the dataset is challenging for NERL and reveal the gap between zero-shot approaches and fine-tuned models. The dataset's diachronic coverage, spanning two centuries, makes it particularly suitable for temporal entity disambiguation and cross-domain evaluation. ENEIDE is released under a CC BY-NC-SA 4.0 license.
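The abstract describes annotations as typed entity mentions carrying a Wikidata identifier, with NIL marking entities absent from the knowledge graph. A minimal sketch of what one such record might look like, assuming a simple span-based layout (the class, field names, and example QIDs are our illustrative assumptions, not the released ENEIDE schema):

```python
# Illustrative sketch of a NERL annotation record: a typed mention with
# character offsets and either a Wikidata QID or None for NIL entities.
# Field names and QID values are assumptions for illustration only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class EntityMention:
    surface: str                 # mention as it appears in the text
    etype: str                   # person / location / organization / work
    start: int                   # character offset where the mention begins
    end: int                     # character offset where the mention ends
    wikidata_id: Optional[str]   # Wikidata QID, or None if the entity is NIL

    @property
    def is_nil(self) -> bool:
        # NIL = no corresponding node in the knowledge graph
        return self.wikidata_id is None


text = "Leopardi cita Omero nello Zibaldone."
mentions = [
    EntityMention("Leopardi", "person", 0, 8, "Q706"),   # QID illustrative
    EntityMention("Omero", "person", 14, 19, "Q6691"),   # QID illustrative
]

# Sanity check: every span must slice back to its surface form.
for m in mentions:
    assert text[m.start:m.end] == m.surface
```

A NIL mention under this sketch would simply pass `wikidata_id=None`, which is how a dataset can keep unresolvable historical entities in scope for recognition while flagging them as unlinked.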
Subjects:
Computation and Language (cs.CL)
Cite as: arXiv:2603.29801 [cs.CL]
(or arXiv:2603.29801v1 [cs.CL] for this version)
https://doi.org/10.48550/arXiv.2603.29801
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Cristian Santini [view email] [v1] Tue, 31 Mar 2026 14:32:34 UTC (245 KB)
