
I Analyzed 500 AI Coding Mistakes and Built an ESLint Plugin to Catch Them

DEV Community · by Rob Simpson · April 4, 2026 · 5 min read


Here's a pattern you've probably seen:

const results = items.map(async (item) => {
  return await fetchItem(item);
});


Looks fine, right? Your AI assistant wrote it. Tests pass. Code review approves it.

Then production hits, and results is an array of Promises, not the values you expected. The await inside the callback is redundant: an async function always returns a Promise, so each element is still a Promise. You needed Promise.all(items.map(...)) or a for...of loop.

This isn't a TypeScript bug. It's a common LLM coding mistake — one of hundreds I found when I started researching AI-generated code quality.

The Problem: AI Writes Code That Works, Not Code That's Right

LLMs are excellent at writing code that passes tests. They're terrible at writing code that handles edge cases, stays internally consistent, and follows best practices.

After reviewing several empirical studies on LLM-generated code bugs — including an analysis of 333 bugs and PromptHub's study of 558 incorrect snippets — I found clear patterns emerging:

| Bug Type | Frequency |
| --- | --- |
| Missing corner cases | 15.3% |
| Misinterpretations | 20.8% |
| Hallucinated objects/APIs | 9.6% |
| Incorrect conditions | High |
| Missing code blocks | 40%+ |

The most frustrating part? Many of these are preventable at lint time.

The Solution: ESLint Rules Designed for AI-Generated Code

I built eslint-plugin-llm-core — an ESLint plugin with 20 rules specifically designed to catch the mistakes AI coding assistants make most often.

Not just generic best practices, but patterns I've seen repeatedly in AI-generated codebases:

  • Async/await misuse

  • Inconsistent error handling

  • Missing null checks

  • Magic numbers instead of named constants

  • Deep nesting instead of early returns

  • Empty catch blocks that swallow errors

  • Generic variable names that obscure intent

Example: The Async Array Callback Trap

// ❌ AI often writes this
const userIds = users.map(async (user) => {
  return await db.getUser(user.id);
});
// userIds is Promise<User>[], not User[]

// ✅ What you actually need
const userIds = await Promise.all(
  users.map((user) => db.getUser(user.id))
);


The plugin catches this with no-async-array-callbacks:

57:27  error  Avoid passing async functions to array methods  llm-core/no-async-array-callbacks

This pattern returns an array of Promises, not the resolved values. Consider using Promise.all() or a for...of loop instead.


Notice the error message? It's designed to teach, not just complain. The goal is to help developers (and their AI assistants) understand why it's wrong.

Example: The Empty Catch Anti-Pattern

// ❌ AI often generates this
try {
  await processData(data);
} catch (e) {
  // TODO: handle error
}


The no-empty-catch rule catches this:

63:11  error  Empty catch block silently swallows errors  llm-core/no-empty-catch

Unhandled errors make debugging difficult and can hide critical failures. Either handle the error, rethrow it, or log it with context.


Example: Deep Nesting Instead of Early Returns

// ❌ AI loves nesting
function processData(data: Data | null) {
  if (data) {
    if (data.items) {
      if (data.items.length > 0) {
        return data.items.map(processItem);
      }
    }
  }
  return [];
}

// ✅ Early returns are cleaner
function processData(data: Data | null) {
  if (!data?.items?.length) return [];
  return data.items.map(processItem);
}


The prefer-early-return rule encourages the flatter pattern.

The Research Behind the Rules

Each rule is backed by observed patterns in LLM-generated code:

| Rule | Bug Pattern Addressed |
| --- | --- |
| no-async-array-callbacks | Missing Promise.all, incorrect async flow |
| no-empty-catch | Silent error swallowing |
| no-magic-numbers | Unmaintainable constants |
| prefer-early-return | Deep nesting, unclear control flow |
| prefer-unknown-in-catch | any-typed catch params |
| throw-error-objects | Throwing strings instead of Error instances |
| structured-logging | Inconsistent log formats |
| consistent-exports | Mixed default/named exports |
| explicit-export-types | Missing return types on public functions |
| no-commented-out-code | Dead code accumulation |

Full rule documentation: github.com/pertrai1/eslint-plugin-llm-core

Why Not Just Use typescript-eslint?

Great question. typescript-eslint is excellent — this plugin is designed to complement it, not replace it.

The difference is focus:

| | typescript-eslint | eslint-plugin-llm-core |
| --- | --- | --- |
| Focus | TypeScript language correctness | AI coding pattern prevention |
| Error messages | Technical, spec-focused | Educational, context-rich |
| Rule design | Language spec compliance | Observed LLM bug patterns |

You should use both. typescript-eslint catches TypeScript-specific issues. llm-core catches patterns that LLMs repeatedly get wrong — regardless of whether they're technically valid TypeScript.
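A combined flat-config setup might look like the sketch below. It assumes you've installed the typescript-eslint flat-config package ('typescript-eslint') alongside this plugin; the exact shape may differ with your versions:

```javascript
// eslint.config.js: sketch of running both plugins together (assumed setup)
import tseslint from 'typescript-eslint';
import llmCore from 'eslint-plugin-llm-core';

export default [
  // TypeScript language correctness first
  ...tseslint.configs.recommended,
  // LLM-specific bug patterns layered on top
  {
    plugins: { 'llm-core': llmCore },
    rules: { ...llmCore.configs.recommended.rules },
  },
];
```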

Getting Started

npm install -D eslint-plugin-llm-core


// eslint.config.js
import llmCore from 'eslint-plugin-llm-core';

export default [
  {
    plugins: {
      'llm-core': llmCore,
    },
    rules: {
      ...llmCore.configs.recommended.rules,
    },
  },
];


That's it. Zero config for the recommended ruleset.

The Bigger Picture: Teaching AI Better Habits

Here's the interesting part: these rules don't just catch mistakes. They teach.

When your AI assistant sees the error messages:

Avoid passing async functions to array methods. This pattern returns an array of Promises, not the resolved values. Consider using Promise.all() or a for...of loop instead.


It learns. Next time, it writes the correct pattern.

In looped agent workflows — where AI iteratively writes, tests, and fixes code — this feedback loop compounds. Each lint error becomes a teaching moment.

What's Next

The plugin is early but functional. Current focus areas:

  • Auto-fixes for fixable rules

  • More logging library detection (Pino, Winston, Bunyan)

  • Additional rules based on ongoing research

  • Evidence gathering on whether rules actually improve AI-generated code quality

If you're working with AI coding assistants — Cursor, Claude Code, Copilot, or others — I'd love your feedback on what patterns you've seen them get wrong.

Try It

npm install -D eslint-plugin-llm-core


GitHub: pertrai1/eslint-plugin-llm-core

npm: eslint-plugin-llm-core

Tried it? Hate it? Have ideas for rules I missed? Open an issue or reach out. I'm actively looking for contributors who've seen AI write weird code.
