I Analyzed 500 AI Coding Mistakes and Built an ESLint Plugin to Catch Them
Here's a pattern you've probably seen:
```typescript
const results = items.map(async (item) => {
  return await fetchItem(item);
});
```
Looks fine, right? Your AI assistant wrote it. Tests pass. Code review approves it.
Then production hits, and results is an array of Promises, not the values you expected. The await inside the callback doesn't change that: .map collects whatever the callback returns, and an async callback always returns a Promise. You needed Promise.all(items.map(...)) or a for...of loop.
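Both correct shapes are worth seeing side by side: Promise.all when the calls can run concurrently, for...of when they must run one at a time. A minimal sketch, where fetchItem is a hypothetical stand-in for any Promise-returning function:

```typescript
// Hypothetical stand-in for a real async call (DB query, HTTP fetch, etc.).
async function fetchItem(id: number): Promise<string> {
  return `item-${id}`;
}

async function main() {
  const items = [1, 2, 3];

  // Concurrent: start every call, then await all of them together.
  const concurrent = await Promise.all(items.map((id) => fetchItem(id)));

  // Sequential: each await completes before the next call starts.
  const sequential: string[] = [];
  for (const id of items) {
    sequential.push(await fetchItem(id));
  }

  console.log(concurrent);  // resolved values, not Promises
  console.log(sequential);  // same values, fetched one at a time
}

main();
```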
This isn't a TypeScript bug. It's a common LLM coding mistake — one of hundreds I found when I started researching AI-generated code quality.
The Problem: AI Writes Code That Works, Not Code That's Right
LLMs are excellent at writing code that passes tests. They're much worse at handling edge cases, keeping a codebase consistent, and following the best practices that tests never exercise.
After reviewing several empirical studies on LLM-generated code bugs — including an analysis of 333 bugs and PromptHub's study of 558 incorrect snippets — I found clear patterns emerging:
| Bug Type | Frequency |
| --- | --- |
| Missing corner cases | 15.3% |
| Misinterpretations | 20.8% |
| Hallucinated objects/APIs | 9.6% |
| Incorrect conditions | High |
| Missing code blocks | 40%+ |
The most frustrating part? Many of these are preventable at lint time.
The Solution: ESLint Rules Designed for AI-Generated Code
I built eslint-plugin-llm-core — an ESLint plugin with 20 rules specifically designed to catch the mistakes AI coding assistants make most often.
Not just generic best practices, but patterns I've seen repeatedly in AI-generated codebases:
- Async/await misuse
- Inconsistent error handling
- Missing null checks
- Magic numbers instead of named constants
- Deep nesting instead of early returns
- Empty catch blocks that swallow errors
- Generic variable names that obscure intent
Example: The Async Array Callback Trap
```typescript
// ❌ AI often writes this
const userIds = users.map(async (user) => {
  return await db.getUser(user.id);
});
// userIds is Promise<User>[] — not User[]

// ✅ What you actually need
const userIds = await Promise.all(
  users.map((user) => db.getUser(user.id))
);
```
The plugin catches this with no-async-array-callbacks:
```
57:27  error  Avoid passing async functions to array methods  llm-core/no-async-array-callbacks

This pattern returns an array of Promises, not the resolved values.
Consider using Promise.all() or a for...of loop instead.
```
Notice the error message? It's designed to teach, not just complain. The goal is to help developers (and their AI assistants) understand why it's wrong.
Example: The Empty Catch Anti-Pattern
```typescript
// ❌ AI often generates this
try {
  await processData(data);
} catch (e) {
  // TODO: handle error
}
```
The no-empty-catch rule catches this:
```
63:11  error  Empty catch block silently swallows errors  llm-core/no-empty-catch

Unhandled errors make debugging difficult and can hide critical failures.
Either handle the error, rethrow it, or log it with context.
```
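For contrast, here's a sketch of a catch block that would satisfy the rule: log with context, then surface the failure to the caller. processData and its failure condition are hypothetical placeholders:

```typescript
// Hypothetical async operation that can fail.
async function processData(data: unknown): Promise<void> {
  if (data === null) throw new Error("data must not be null");
}

async function run(data: unknown): Promise<boolean> {
  try {
    await processData(data);
    return true;
  } catch (e) {
    // Narrow the error, log it with context, and report the failure
    // (alternatively, rethrow so callers can handle it themselves).
    const message = e instanceof Error ? e.message : String(e);
    console.error(`processData failed: ${message}`);
    return false;
  }
}
```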
Example: Deep Nesting Instead of Early Returns
```typescript
// ❌ AI loves nesting
function processData(data: Data | null) {
  if (data) {
    if (data.items) {
      if (data.items.length > 0) {
        return data.items.map(processItem);
      }
    }
  }
  return [];
}

// ✅ Early returns are cleaner
function processData(data: Data | null) {
  if (!data?.items?.length) return [];
  return data.items.map(processItem);
}
```
The prefer-early-return rule encourages the flatter pattern.
The Research Behind the Rules
Each rule is backed by observed patterns in LLM-generated code:
| Rule | Bug Pattern Addressed |
| --- | --- |
| no-async-array-callbacks | Missing Promise.all, incorrect async flow |
| no-empty-catch | Silent error swallowing |
| no-magic-numbers | Unmaintainable constants |
| prefer-early-return | Deep nesting, unclear control flow |
| prefer-unknown-in-catch | `any`-typed catch params |
| throw-error-objects | Throwing strings instead of Error instances |
| structured-logging | Inconsistent log formats |
| consistent-exports | Mixed default/named exports |
| explicit-export-types | Missing return types on public functions |
| no-commented-out-code | Dead code accumulation |
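Two of those rules pair naturally: throwing real Error objects preserves stack traces and keeps instanceof checks working, and typing the catch parameter as unknown forces you to narrow before touching it. A sketch of both patterns (parsePort is an invented example, not from the plugin's docs):

```typescript
// ❌ Throwing a string loses the stack trace and breaks `instanceof Error`.
function parsePortBad(value: string): number {
  const port = Number(value);
  if (Number.isNaN(port)) throw "invalid port";
  return port;
}

// ✅ Throw an Error, catch as `unknown`, and narrow before use.
function parsePort(value: string): number {
  const port = Number(value);
  if (Number.isNaN(port)) throw new Error(`Invalid port: ${value}`);
  return port;
}

try {
  parsePort("abc");
} catch (e: unknown) {
  if (e instanceof Error) {
    console.error(e.message); // "Invalid port: abc"
  }
}
```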
Full rule documentation: github.com/pertrai1/eslint-plugin-llm-core
Why Not Just Use typescript-eslint?
Great question. typescript-eslint is excellent — this plugin is designed to complement it, not replace it.
The difference is focus:
| | typescript-eslint | eslint-plugin-llm-core |
| --- | --- | --- |
| Focus | TypeScript language correctness | AI coding pattern prevention |
| Error messages | Technical, spec-focused | Educational, context-rich |
| Rule design | Language spec compliance | Observed LLM bug patterns |
You should use both. typescript-eslint catches TypeScript-specific issues. llm-core catches patterns that LLMs repeatedly get wrong — regardless of whether they're technically valid TypeScript.
Getting Started
```shell
npm install -D eslint-plugin-llm-core
```
```javascript
// eslint.config.js
import llmCore from 'eslint-plugin-llm-core';

export default [
  {
    plugins: {
      'llm-core': llmCore,
    },
    rules: {
      ...llmCore.configs.recommended.rules,
    },
  },
];
```
That's it. Zero config for the recommended ruleset.
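If a particular rule doesn't fit your codebase yet, standard ESLint flat-config severity overrides apply. A sketch using rule names from the table above (check the repo for each rule's exact options):

```javascript
// eslint.config.js — start from the recommended set, then adjust per rule.
import llmCore from 'eslint-plugin-llm-core';

export default [
  {
    plugins: {
      'llm-core': llmCore,
    },
    rules: {
      ...llmCore.configs.recommended.rules,
      // Downgrade to a warning while migrating an existing codebase.
      'llm-core/no-magic-numbers': 'warn',
      // Turn off a rule that conflicts with your conventions.
      'llm-core/no-commented-out-code': 'off',
    },
  },
];
```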
The Bigger Picture: Teaching AI Better Habits
Here's the interesting part: these rules don't just catch mistakes. They teach.
When your AI assistant sees the error messages:
```
Avoid passing async functions to array methods. This pattern returns an array
of Promises, not the resolved values. Consider using Promise.all() or a
for...of loop instead.
```
It learns. Next time, it writes the correct pattern.
In looped agent workflows — where AI iteratively writes, tests, and fixes code — this feedback loop compounds. Each lint error becomes a teaching moment.
What's Next
The plugin is early but functional. Current focus areas:
- Auto-fixes for fixable rules
- More logging library detection (Pino, Winston, Bunyan)
- Additional rules based on ongoing research
- Evidence gathering on whether rules actually improve AI-generated code quality
If you're working with AI coding assistants — Cursor, Claude Code, Copilot, or others — I'd love your feedback on what patterns you've seen them get wrong.
Try It
```shell
npm install -D eslint-plugin-llm-core
```
GitHub: pertrai1/eslint-plugin-llm-core
npm: eslint-plugin-llm-core
Tried it? Hate it? Have ideas for rules I missed? Open an issue or reach out. I'm actively looking for contributors who've seen AI write weird code.
Originally published on DEV Community: https://dev.to/pertrai1/i-analyzed-500-ai-coding-mistakes-and-built-an-eslint-plugin-to-catch-them-jme