I stopped managing translations manually (and built this instead)
Managing multilingual content has always felt… wrong to me.
In most projects, it quickly turns into:
- duplicated fields (`title_en`, `title_fr`)
- messy i18n JSON files
- constant synchronization issues
At some point, I started wondering: why is this even a developer problem?
Rethinking the approach
Instead of treating translations as something external (keys, files, etc.), I tried a different approach:
What if multilingual support was part of the data model itself?
So I built a small Airtable-like system where fields are multilingual by design.
You write content once, and it becomes available in multiple languages automatically.
Example:

Title: "Hello world"
  → fr: "Bonjour le monde"
  → es: "Hola mundo"
No keys. No duplication. No sync issues.
How it works
Each field stores multiple language versions internally.
On top of that:
- automatic translation (using GPT)
- the ability to override manually, per language
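To make the idea concrete, here is a minimal sketch of what such a field could look like. All names and shapes here are illustrative assumptions, not the actual Ekit Studio data model, and the translation call is a stand-in for GPT:

```typescript
// Hypothetical sketch: a field that stores one value per language,
// with manual overrides winning over machine-translated values.
type Locale = "en" | "fr" | "es";

interface MultilingualField {
  source: Locale;                             // language the author wrote in
  values: Partial<Record<Locale, string>>;    // one version per language
  overrides: Partial<Record<Locale, string>>; // manual edits, if any
}

// Stand-in for a GPT call; a real system would translate here.
function machineTranslate(text: string, to: Locale): string {
  return `[${to}] ${text}`;
}

// Write the content once; target-language versions are filled in automatically.
function createField(text: string, source: Locale, targets: Locale[]): MultilingualField {
  const values: Partial<Record<Locale, string>> = {};
  values[source] = text;
  for (const t of targets) {
    if (t !== source) values[t] = machineTranslate(text, t);
  }
  return { source, values, overrides: {} };
}

// Manually override one language; the others stay untouched.
function override(field: MultilingualField, locale: Locale, text: string): void {
  field.overrides[locale] = text;
}

// Resolve a field for rendering: override > machine value > source text.
function resolve(field: MultilingualField, locale: Locale): string {
  return field.overrides[locale] ?? field.values[locale] ?? field.values[field.source]!;
}

const title = createField("Hello world", "en", ["fr", "es"]);
override(title, "fr", "Bonjour le monde");
console.log(resolve(title, "fr")); // "Bonjour le monde"
```

The key design point is the resolution order: a human correction always beats the machine translation, and the source text is the last-resort fallback, so a missing language never breaks rendering.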
Where it can be used
The system can be accessed:
- via an API
- or directly inside a templating engine I'm building (Ekit Studio)
So content flows directly into rendering without extra i18n layers.
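A tiny sketch of that rendering step, assuming localized records flow straight into the template. The record shape, placeholder syntax, and `render` helper are assumptions for illustration, not Ekit Studio's actual API:

```typescript
// Illustrative only: localized content rendered directly into a template,
// with no separate i18n key files in between.
type LocalizedRecord = Record<string, Record<string, string>>;

function render(template: string, record: LocalizedRecord, locale: string): string {
  // Replace {{field}} placeholders with the field's value in the requested
  // locale, falling back to English if that locale is missing.
  return template.replace(/\{\{(\w+)\}\}/g, (_, field) =>
    record[field]?.[locale] ?? record[field]?.["en"] ?? ""
  );
}

const page: LocalizedRecord = {
  title: { en: "Hello world", fr: "Bonjour le monde" },
};
console.log(render("<h1>{{title}}</h1>", page, "fr")); // <h1>Bonjour le monde</h1>
```

Because the record already carries every language, the template only needs to know which locale it is rendering, not where translations live.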
Why this feels better
This approach shifts the problem:
- from code → to data
- from developers → to content structure
And in practice, it removes a lot of friction.
Curious to hear from others
DEV Community
https://dev.to/fabrice_grenouillet_c10f1/i-stopped-managing-translations-manually-and-built-this-instead-1m37
