
I Built an AI Content Pipeline That Publishes 4 SEO-Optimized Articles Per Day — Here's the Architecture

DEV Community · The Catalyst Project · April 4, 2026 · 6 min read


I'm a chemical engineer who taught himself to code. Six months ago I started building Catalyst OS — a life optimization platform with 106 free calculators, 225 interactive learning modules, and a premium AI journaling tool. The problem was content. I needed hundreds of articles to drive organic traffic, and writing them manually at 2-3 hours each wasn't going to work.

So I built an automated content pipeline that generates, publishes, optimizes for SEO, pings search engines, generates social posts, and notifies me via Telegram — four times a day, zero manual intervention.

Here's exactly how it works.

The Stack

  • n8n (self-hosted workflow automation) — orchestrates everything

  • Claude Sonnet 4 (Anthropic API) — generates the actual content

  • Supabase (PostgreSQL) — stores articles, topics, and metadata

  • Next.js 15 — renders articles with SSR and structured data

  • IndexNow API — pings Bing/Yandex for instant indexing

  • Resend — transactional email

  • Telegram Bot API — real-time notifications

Architecture Overview

```
┌─────────────┐     ┌──────────────┐     ┌─────────────────┐
│  Schedule   │────→│  Topic Bank  │────→│  Config Merge   │
│ (4x daily)  │     │  (Supabase)  │     │  (prompts +     │
└─────────────┘     └──────────────┘     │   guardrails)   │
                                         └────────┬────────┘
                                                  │
                                         ┌────────▼────────┐
                                         │ Claude Sonnet 4 │
                                         │  (generation)   │
                                         └────────┬────────┘
                                                  │
                   ┌──────────────────────────────┤
                   │                              │
           ┌───────▼───────┐              ┌───────▼───────┐
           │  Parse + SEO  │              │   OG Image    │
           │ Field Extract │              │  Generation   │
           └───────┬───────┘              └───────────────┘
                   │
           ┌───────▼───────┐
           │   Supabase    │
           │    INSERT     │
           └───────┬───────┘
                   │
        ┌──────────┼──────────┐
        │          │          │
   ┌────▼───┐  ┌───▼────┐  ┌──▼─────┐
   │IndexNow│  │ Social │  │Telegram│
   │  Ping  │  │ Posts  │  │ Notify │
   └────────┘  └────────┘  └────────┘
```


Step 1: The Topic Bank

I don't let the AI decide what to write about. I maintain a topic_bank table in Supabase with pre-planned topics:

```sql
CREATE TABLE topic_bank (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  title TEXT NOT NULL,
  dimension TEXT, -- mind, body, heart, wealth, spirit
  concept TEXT,
  hook TEXT,
  target_audience TEXT,
  content_type_slug TEXT,
  status TEXT DEFAULT 'available',
  used_at TIMESTAMPTZ
);
```


A Postgres function get_next_topic_for_generation() picks the next available topic, marks it as processing, and returns it. This prevents duplicate generation if two runs overlap.
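The SQL for that function isn't shown in the post, but the claim-and-mark semantics it provides can be sketched in plain JavaScript. This is a hypothetical in-memory stand-in, not the real Postgres function:

```javascript
// Sketch of the semantics of get_next_topic_for_generation(): atomically
// pick the next 'available' topic and flip it to 'processing' so two
// overlapping runs never claim the same row. Hypothetical stand-in only;
// the real implementation lives in Postgres.
function makeTopicBank(titles) {
  const rows = titles.map((title) => ({ title, status: 'available', used_at: null }));
  return {
    nextTopic() {
      const row = rows.find((r) => r.status === 'available');
      if (!row) return null;               // bank exhausted
      row.status = 'processing';           // mark before returning
      row.used_at = new Date().toISOString();
      return row.title;
    },
  };
}

const bank = makeTopicBank(['Topic A', 'Topic B']);
// Successive calls return 'Topic A', then 'Topic B', then null.
```

In Postgres, the same guarantee would typically come from `FOR UPDATE SKIP LOCKED` or an `UPDATE … RETURNING` inside the function.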

Step 2: The Prompt Engineering

This is where most AI content pipelines fall apart. They use a generic "write an article about X" prompt and get generic garbage back. My prompt is ~16,000 characters and includes:

Brand voice rules — no AI-isms ("delve," "unleash," "game-changer"), no filler paragraphs, every claim needs a specific number or citation.

104 calculator links organized by dimension — Claude weaves 3-8 relevant internal links naturally into each article:

```javascript
const BODY_CALCS = [
  '- [TDEE Calculator](https://catalystproject.ai/calculators/body/tdee) — daily energy expenditure',
  '- [Macro Calculator](https://catalystproject.ai/calculators/body/macros) — protein/carb/fat targets',
  // ... 27 more
];
```


Featured snippet optimization — question-format headers, numbered lists, bold definitions. These are the patterns Google pulls for position zero.

Structured output format — the prompt requires 13 labeled sections (meta description, subtitle, keywords, hook, problem statement, main content, key takeaways, action steps, success metrics, time to results, evidence level, sources, primary action). The parser extracts each one into its own database column.
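The post doesn't include the full parser, but splitting a labeled-section response can be done generically. A minimal sketch, assuming each section starts with a `## Heading` line (the same delimiter the post's regexes match); `parseSections` is a hypothetical helper name:

```javascript
// Split Claude's structured output into named sections keyed by heading.
// Assumes '## Heading' delimiters; parseSections is a hypothetical name.
function parseSections(text) {
  const sections = {};
  const headings = [...text.matchAll(/^## (.+)$/gm)];
  headings.forEach((m, i) => {
    const start = m.index + m[0].length;
    const end = i + 1 < headings.length ? headings[i + 1].index : text.length;
    sections[m[1].trim()] = text.slice(start, end).trim();
  });
  return sections;
}

const sample = '## Meta Description\nA short teaser.\n## Keywords\nfocus, habits\n';
const parsed = parseSections(sample);
// parsed['Meta Description'] === 'A short teaser.'
```

Each extracted section then maps onto its own database column, which keeps the schema honest: a missing section fails loudly at parse time rather than silently at render time.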

Step 3: Content Quality Enforcement

The generation node uses Claude Sonnet 4 with temperature: 0.7. After generation, a Code node parses the response and validates:

```javascript
// Extract all 13 sections via regex
const metaMatch = text.match(/## Meta Description\n([\s\S]*?)(?=\n## )/);
const keywordsMatch = text.match(/## Keywords\n([\s\S]*?)(?=\n## )/);
// ... etc

// Quality checks
const wordCount = mainContent.split(/\s+/).length;
const hasInternalLinks = (mainContent.match(/catalystproject\.ai/g) || []).length;
const hasSpecificData = /\d+%|\d+ (study|studies|participants|patients)/.test(mainContent);

if (wordCount < 800 || hasInternalLinks < 3 || !hasSpecificData) {
  throw new Error('Quality check failed');
}
```


Every article gets a featured_image_url set at birth via a dynamic OG image endpoint:

```
/api/og?title={title}&subtitle={dimension}
```

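Building that URL safely means escaping the title. A quick sketch; the `/api/og` endpoint and its `title`/`subtitle` parameters come from the post, while `ogImageUrl` is a hypothetical helper name:

```javascript
// Sketch: construct the dynamic OG image URL for an article.
// ogImageUrl is a hypothetical helper; URLSearchParams handles escaping
// of spaces, ampersands, and other reserved characters.
function ogImageUrl(title, dimension) {
  const params = new URLSearchParams({ title, subtitle: dimension });
  return `/api/og?${params.toString()}`;
}

// ogImageUrl('Deep Work & Focus', 'mind')
// → '/api/og?title=Deep+Work+%26+Focus&subtitle=mind'
```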

Step 4: SEO That Actually Works

Each article is stored with structured metadata that the Next.js page consumes:

```javascript
// Automatic on every article page:
// - OpenGraph: publishedTime, modifiedTime, section, tags, authors
// - Robots: max-image-preview: large, max-snippet: -1
```


The sitemap regenerates with articles at priority 0.85. An IndexNow endpoint pings Bing, Yandex, and Seznam within seconds of publication:

```javascript
// POST /api/indexnow
const urls = body.urls.map((u) => `https://catalystproject.ai${u}`);
await fetch('https://api.indexnow.org/indexnow', {
  method: 'POST',
  body: JSON.stringify({ host: 'catalystproject.ai', key, urlList: urls }),
});
```


Step 5: Social Distribution

After the article is inserted, a separate workflow generates platform-specific social posts (Twitter, LinkedIn) and stores them in a content_pieces table. A scheduled LinkedIn Publisher workflow picks up unposted pieces and publishes them via the LinkedIn REST API.
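The `content_pieces` query isn't shown; a sketch of the pick-up step under assumed column names (`platform` and `posted_at` are my guesses — the post only names the table):

```javascript
// Sketch: how a scheduled publisher might select unposted LinkedIn pieces
// from a content_pieces-style result set. Column names platform/posted_at
// are assumptions; in production this would be a Supabase WHERE clause.
function unpostedPieces(pieces, platform) {
  return pieces.filter((p) => p.platform === platform && p.posted_at === null);
}

const pieces = [
  { id: 1, platform: 'linkedin', posted_at: null },
  { id: 2, platform: 'twitter', posted_at: null },
  { id: 3, platform: 'linkedin', posted_at: '2026-04-01T09:00:00Z' },
];
// unpostedPieces(pieces, 'linkedin') returns only piece 1.
```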

Step 6: Telegram Notification

Every generated article triggers a Telegram message with title, word count, internal link count, and a direct link to the published article. I review every piece even though it's automated — quality control matters.
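The notification is one Bot API call. A sketch of the payload: `sendMessage` is the real Telegram Bot API method, but the message layout and `TELEGRAM_CHAT_ID` variable are assumptions based on the description above:

```javascript
// Sketch: build the Telegram sendMessage payload for a published article.
// Message layout and TELEGRAM_CHAT_ID are assumptions; sendMessage is the
// real Bot API method this payload would be POSTed to.
function telegramPayload(article) {
  const text = [
    `New article: ${article.title}`,
    `Words: ${article.wordCount}`,
    `Internal links: ${article.linkCount}`,
    article.url,
  ].join('\n');
  return { chat_id: process.env.TELEGRAM_CHAT_ID, text };
}

// POST the returned object to https://api.telegram.org/bot<token>/sendMessage
```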

Results After 3 Months

  • 206 published articles across 5 dimensions

  • 106 calculator pages with dimension-aware CTAs on every article

  • Structured data on every page (Article, BreadcrumbList, FAQPage, ProfessionalService)

  • 4 articles/day with zero manual writing

  • Average article: 1,200-1,800 words with 4-6 internal links each

What I'd Do Differently

Start with Google Search Console on day one. I built 200+ articles before submitting my sitemap. Those articles sat unindexed for weeks. Submit your sitemap before you have content — Google will discover new pages as they appear.

Don't trust temperature: 1.0 for production content. Higher temperatures produce more creative writing but also more hallucinated citations and inconsistent formatting. 0.7 is the sweet spot for reliable, parseable output.

Internal linking is an architectural decision, not a content decision. Embedding all 104 calculator URLs into the system prompt means every article links to relevant tools without the AI needing to "remember" them. The linking happens at the prompt level, not the content level.
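Concretely, "linking at the prompt level" just means concatenating the link arrays into the system prompt at build time. A sketch: `buildSystemPrompt` and the voice-rules string are hypothetical, and only one `BODY_CALCS` entry from the post is reproduced here:

```javascript
// Sketch: embed the calculator link list directly into the system prompt
// so every generation sees the full URL inventory. buildSystemPrompt is a
// hypothetical name; the BODY_CALCS-style array appears in the post.
const BODY_CALCS = [
  '- [TDEE Calculator](https://catalystproject.ai/calculators/body/tdee) — daily energy expenditure',
];

function buildSystemPrompt(voiceRules) {
  return [
    voiceRules,
    'Weave 3-8 of these internal links naturally into the article:',
    ...BODY_CALCS,
  ].join('\n');
}

const prompt = buildSystemPrompt('No AI-isms. Every claim needs a number.');
// prompt now contains every calculator URL verbatim.
```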

The Full Stack

The entire platform runs on:

  • Next.js 15 + React 19 + TypeScript on Vercel

  • Supabase (PostgreSQL + pgvector for RAG)

  • n8n Cloud (7 workflows: content gen, social posting, lead scraping, enrichment, email drafting, pipeline snapshots, LinkedIn publishing)

  • Claude API for content generation and AI chat

  • Stripe for subscriptions

  • Resend for email

Total monthly infrastructure cost: ~$150. No employees. One codebase.

If you're building something similar or want to see the calculators and content in action, check out catalystproject.ai. The consulting page has details on how I build these systems for other businesses.

Happy to answer questions about the architecture, prompt engineering, or n8n workflow design in the comments.
