
Introducing HCEL: The Most Fluent Way to Build AI Pipelines in TypeScript

DEV Community · Muhammad Arslan · April 1, 2026 · 9 min read

In the rapidly evolving landscape of AI development, orchestration is everything. As developers move from simple LLM calls to complex, multi-step agentic workflows, the need for a clean, expressive, and type-safe way to define these pipelines becomes critical.

Today, we are excited to introduce HCEL (HazelJS Composable Expression Language)—a fluent, TypeScript-native DSL designed to make AI orchestration as intuitive as a standard functional chain.

What is HCEL?

HCEL stands for HazelJS Composable Expression Language. It is not a separate language you need to learn, but a fluent API provided by the @hazeljs/ai package. It allows you to "chain" together different AI capabilities—prompts, RAG searches, agents, and machine learning models—into a single, executable pipeline.

Why HCEL?

Traditional AI pipelines often suffer from "pyramid of doom" callback structures or messy async/await boilerplate for passing state between steps. HCEL solves this by providing:

  • Fluent Method Chaining: Build complex logic step-by-step.

  • Implicit Context Passing: The output of operation A automatically becomes the input for operation B.

  • Observability by Default: Every step in the chain is automatically traced and timed.

  • Flow Integration: Easily convert any HCEL chain into a durable @hazeljs/flow node.
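For contrast, here is the kind of hand-threaded state passing a fluent chain replaces. The `llm` and `search` stubs below are hypothetical stand-ins, not HazelJS APIs; only the shape of the boilerplate matters:

```typescript
// Stub "AI calls" - hypothetical placeholders for real provider requests.
const llm = async (prompt: string) => `answer(${prompt})`;
const search = async (query: string) => [`doc for ${query}`];

// Without a fluent chain, every step's output has to be threaded by
// hand, and tracing/error handling repeated at each call site.
async function manualPipeline(question: string): Promise<string> {
  const draft = await llm(`Summarize: ${question}`);
  const docs = await search(draft);
  return llm(`Answer using ${docs.length} docs: ${draft}`);
}

manualPipeline('What is HazelJS?').then((answer) => console.log(answer));
```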

Real-World Examples from Our Codebase

Let's explore actual working examples from our @hazeljs/ai/examples directory that demonstrate HCEL in action.

Example 1: Simple HCEL Chain

From hcel-demo.ts:

// Simple HCEL Chain
const simpleResult = await ai.hazel
  .prompt('What is HazelJS?')
  .execute() as string;

console.log(`Result: ${simpleResult.slice(0, 100)}...`);

What's happening here?

  • We start with ai.hazel - the entry point for HCEL chains

  • .prompt() creates a text generation operation

  • .execute() runs the entire chain and returns the result

  • The result is automatically typed as string thanks to TypeScript inference

Example 2: RAG + ML Chain

From hcel-demo.ts:

// RAG + ML Chain
const ragMlResult = await ai.hazel
  .rag('docs')      // Search documentation
  .ml('sentiment')  // Analyze sentiment
  .execute() as SentimentResult;

console.log(`Sentiment: ${ragMlResult.sentiment} (${ragMlResult.score})`);

What's happening here?

  • .rag('docs') searches your documentation for relevant context

  • The RAG result is automatically passed to the next step

  • .ml('sentiment') runs sentiment analysis on the RAG output

  • The final result includes both sentiment and confidence score

Example 3: Streaming Chain

From hcel-demo.ts:

// Streaming Chain
console.log('Assistant: ');
for await (const chunk of ai.hazel
  .prompt('Tell me a short story about AI and creativity')
  .stream()) {
  process.stdout.write(chunk as string);
}

What's happening here?

  • .stream() instead of .execute() enables real-time streaming

  • Perfect for chat interfaces or long-running generations

  • Each chunk is processed as it arrives from the AI provider
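Conceptually, a stream like this is just an async iterable. The toy generator below (a stand-in, not the real provider stream) shows the same `for await` consumption pattern:

```typescript
// Toy token stream standing in for an AI provider's streaming response.
async function* fakeStream(text: string): AsyncGenerator<string> {
  for (const word of text.split(' ')) {
    yield word + ' '; // each chunk arrives as soon as it is "generated"
  }
}

// Consume chunks as they arrive - in a chat UI you would render
// each chunk immediately instead of accumulating it.
async function collect(): Promise<string> {
  let out = '';
  for await (const chunk of fakeStream('Once upon an AI')) {
    out += chunk;
  }
  return out.trimEnd();
}

collect().then(console.log);
```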

Example 4: Chain with Context & Observability

From hcel-demo.ts:

const contextChain = ai.hazel
  .prompt('Analyze this user feedback: {feedback}')
  .ml('sentiment')
  .context({ userId: 'user-123', sessionId: 'session-456' })
  .observe((event) => {
    console.log(`📡 Event: ${event.type} at ${new Date(event.timestamp).toISOString()}`);
  });

const contextResult = await contextChain.execute() as SentimentResult;

What's happening here?

  • .context() adds metadata that flows through the entire chain

  • .observe() hooks into every operation for logging and monitoring

  • Perfect for debugging and production observability
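An observe hook of this kind can be modeled as a callback invoked around each step. The event shape below is an assumption for illustration; HCEL's real event type may differ:

```typescript
// Assumed event shape - the real HCEL event type may differ.
type ChainEvent = { type: string; timestamp: number };
type Observer = (event: ChainEvent) => void;

// Wrap a step so an observer sees it start and finish - the essence
// of an .observe() hook.
function observed<T, U>(
  name: string,
  step: (input: T) => Promise<U>,
  observe: Observer,
): (input: T) => Promise<U> {
  return async (input) => {
    observe({ type: `${name}:start`, timestamp: Date.now() });
    const result = await step(input);
    observe({ type: `${name}:end`, timestamp: Date.now() });
    return result;
  };
}

const events: string[] = [];
const countChars = observed('prompt', async (s: string) => s.length, (e) => events.push(e.type));
countChars('hello').then((n) => console.log(n, events));
```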

Production-Ready Features

HCEL isn't just for demos—it's built for production workloads. Let's look at our production example:

Example 5: Persistent HCEL Chains

From hcel-production-demo.ts:

// Production HazelAI with Persistence Configuration
const ai = HazelAI.create({
  defaultProvider: 'openai',
  model: 'gpt-4o',
  temperature: 0.7,

  // Production persistence configuration
  persistence: {
    memory: {
      store: 'in-memory', // Change to 'postgres' or 'redis' for production
      ttl: 3600, // 1 hour
    },
    rag: {
      vectorStore: 'in-memory', // Change to 'pinecone', 'qdrant', etc. for production
      options: {
        topK: 5,
        chunkSize: 1000,
        chunkOverlap: 200,
      },
    },
    chains: {
      store: 'in-memory', // Change to 'postgres' or 'redis' for production
      ttl: 7200, // 2 hours
    },
  },
});

// Create a persistent analysis chain
const analysisChain = ai.hazel
  .prompt('Analyze user feedback: This product is amazing! It works perfectly.')
  .persist('user-feedback-analysis') // Persist this chain
  .cache(1800); // Cache results for 30 minutes

const result = await analysisChain.execute();

Production Features:

  • Persistence: Chain state is automatically saved to Redis/Postgres

  • Caching: Results are cached to avoid redundant API calls

  • Configuration: Production-ready settings for stores and TTLs
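The caching behavior can be approximated with a TTL memo around the computation. The key-plus-expiry semantics below are an assumption about what .cache(ttl) does, sketched as a standalone helper rather than HCEL's actual implementation:

```typescript
// Minimal TTL cache: reuse a computed result until its entry expires.
// This approximates a .cache(ttl) step - an assumption, not HCEL internals.
const cache = new Map<string, { value: unknown; expiresAt: number }>();

async function cached<T>(
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>,
): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value as T; // still fresh: skip the expensive call
  }
  const value = await compute();
  cache.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  return value;
}

// Two sequential runs within the TTL trigger only one "API call".
let calls = 0;
const analyze = () =>
  cached('user-feedback-analysis', 1800, async () => {
    calls += 1; // stands in for a real model invocation
    return 'positive';
  });

analyze().then(analyze).then((v) => console.log(v, calls));
```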

Example 6: Parallel Operations

From hcel-demo.ts:

// Parallel Operations
const parallelChain = ai.hazel
  .parallel(
    ai.hazel.prompt('Summarize: "AI is transforming the world"'),
    ai.hazel.ml('sentiment', { labels: ['positive', 'negative', 'neutral'] })
  );

const parallelResult = await parallelChain.execute();

What's happening here?

  • .parallel() executes multiple operations simultaneously

  • Perfect for independent tasks that can run concurrently

  • Results are collected and returned as an array
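A fan-out/collect step like this presumably maps onto Promise.all (an assumption, not confirmed by the HCEL docs). A standalone sketch with stub operations:

```typescript
// Stub operations standing in for independent chain steps.
const summarize = async (text: string) => `Summary: ${text.slice(0, 10)}`;
const classify = async (_text: string) => ({ label: 'positive', score: 0.9 });

// Fan out both steps at once and collect the results as an array -
// the behavior .parallel() describes.
async function runParallel(text: string) {
  return Promise.all([summarize(text), classify(text)]);
}

runParallel('AI is transforming the world').then(([summary, sentiment]) => {
  console.log(summary, sentiment.label);
});
```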

Advanced Orchestration Patterns

Example 7: Flow Engine Integration

From hcel-flow-demo.ts:

// HCEL-Flow Bridge - Wraps HCEL chains as Flow Engine nodes
class HCELFlowNode {
  constructor(private chain: any) {}

  async execute(input: unknown): Promise<{ status: 'ok' | 'error'; output?: unknown; reason?: string }> {
    try {
      const result = await this.chain.execute(input);
      return { status: 'ok', output: result };
    } catch (error) {
      return {
        status: 'error',
        reason: error instanceof Error ? error.message : 'Unknown error',
      };
    }
  }
}

// Convert HCEL chain to Flow node
const hcelNode = new HCELFlowNode(
  ai.hazel
    .prompt('Process user request: {{input}}')
    .rag('knowledge-base')
    .agent('support-specialist')
);

What's happening here?

  • HCEL chains can be wrapped as Flow Engine nodes

  • Enables durable, long-running workflows with AI steps

  • Perfect for human-in-the-loop processes and complex business logic

How to Run These Examples

All examples are available in our packages/ai/examples directory:

Setup

# From the packages/ai directory
npm run build

# Set your API key
export OPENAI_API_KEY=your-key-here

Run the Examples

# Basic HCEL demo
node dist/examples/hcel-demo.js

# Production features demo
node dist/examples/hcel-production-demo.js

# Flow integration demo
node dist/examples/hcel-flow-demo.js

What Each Example Demonstrates

  • hcel-demo.ts - Basic HCEL operations, streaming, parallel execution

  • hcel-production-demo.ts - Persistence, caching, memory management

  • hcel-flow-demo.ts - Integration with HazelJS Flow Engine

  • simple-demo.ts - Works without API keys for testing

  • unified-platform-example.ts - Complete platform showcase

The Power of Implicit Context

The core magic of HCEL is the Implicit Context. Every step in the builder returns a new state that includes a pipe. The pipe is what transforms the output of Operation N into the input of Operation N+1.
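The pipe mechanics can be sketched in a few lines. The Chain class below is a hypothetical reconstruction of the pattern, not HCEL's source: each builder call returns a new chain whose run function feeds the previous output into the next step.

```typescript
// A minimal sketch of a piping builder: the output of operation N
// becomes the input of operation N+1, and nothing executes until
// execute() is called.
class Chain<In, Out> {
  constructor(private run: (input: In) => Promise<Out>) {}

  // pipe() returns a *new* chain that composes this chain with the
  // next step - the builder state stays immutable.
  pipe<Next>(step: (value: Out) => Promise<Next> | Next): Chain<In, Next> {
    return new Chain(async (input: In) => step(await this.run(input)));
  }

  // execute() walks the whole composed pipeline.
  execute(input: In): Promise<Out> {
    return this.run(input);
  }
}

// Toy "operations" standing in for .prompt() / .ml() steps.
const chain = new Chain<string, string>(async (s) => `Summary of: ${s}`)
  .pipe((summary) => summary.toUpperCase())
  .pipe((text) => ({ text, length: text.length }));

chain.execute('user feedback').then((r) => console.log(r));
```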

Explicit vs Implicit Context

// ❌ Traditional approach - explicit context passing
const summary = await ai.chat('Summarize: ' + userQuery);
const context = await ai.rag('docs', summary);
const analysis = await ai.agent('analyst', context);

// ✅ HCEL approach - implicit context passing
const result = await ai.hazel
  .prompt('Summarize: {{input}}')
  .rag('docs')
  .agent('analyst')
  .execute(userQuery);

Context Transformation

// Each step transforms the context
const pipeline = ai.hazel
  .prompt('Extract key topics: {{input}}')                   // string → string
  .ml('classify', { labels: ['tech', 'business', 'other'] }) // string → ClassificationResult
  .conditional((result) => result.label === 'tech')          // ClassificationResult → boolean
  .prompt('Explain this tech topic: {{input}}')              // ClassificationResult → string
  .execute('AI is revolutionizing software development');

Type Safety and IntelliSense

Because HCEL is built with TypeScript, you get full type safety and IDE support:

// TypeScript knows the return type based on the last operation
const sentimentResult = await ai.hazel
  .prompt('Analyze: {{input}}')
  .ml('sentiment')
  .execute() as SentimentResult; // Type is inferred!

// Get full chain summary with types
const summary = ai.hazel
  .prompt('Test')
  .ml('sentiment')
  .getSummary();

console.log(summary.operations); // ['prompt', 'ml']
console.log(summary.config);     // Chain configuration

Getting Started with HCEL

Installation

npm install @hazeljs/ai @hazeljs/core


Basic Usage

import { HazelAI } from '@hazeljs/ai';

const ai = HazelAI.create({ defaultProvider: 'openai', model: 'gpt-4o' });

// Your first HCEL chain
const result = await ai.hazel
  .prompt('What is the future of AI?')
  .execute();

Advanced Usage

// Production-ready chain
const pipeline = ai.hazel
  .persist('user-analysis')
  .prompt('Analyze user feedback: {{input}}')
  .rag('product-docs')
  .ml('sentiment')
  .cache(3600)
  .observe((event) => console.log(event));

What's Next?

HCEL is just the beginning. We're working on:

  • More ML Operations: Classification, extraction, translation

  • Advanced Flow Patterns: Conditional branching, loops, retries

  • Enhanced Observability: OpenTelemetry integration, custom metrics

  • Visual Builder: Web-based HCEL chain designer

  • Template Library: Pre-built chains for common use cases

Join the HCEL Community

  • GitHub: hazeljs/hazeljs

  • Documentation: hazeljs.com/docs

  • Examples: packages/ai/examples

  • Discord: Join our developer community

Conclusion

HCEL represents a fundamental shift in how we think about AI orchestration. By providing a fluent, type-safe, and production-ready API, we're making sophisticated AI workflows accessible to every TypeScript developer.

Whether you're building simple chatbots or complex multi-agent systems, HCEL provides the tools you need to compose, observe, and scale your AI operations with confidence.

Try HCEL today and experience the future of AI orchestration in TypeScript! 🚀

Two mechanisms power the implicit context described earlier:

  • Template Injection: The {{input}} placeholder in .prompt() tells HCEL exactly where to inject the previous step's result.

  • Auto-Injection: Operations like .rag() or .agent() automatically use the current context as their search query or instruction if no specific input is provided.

This allows you to focus on the logic, not the data-shuffling boilerplate.

Real-World Case Study: Automated Support Triage

Here is a full implementation of an automated support triage system built with HCEL:

@Service()
export class SupportTriage {
  constructor(private readonly ai: AIEnhancedService) {}

  async handleTicket(ticketText: string) {
    return this.ai.hazel
      .persist('support-triage')
      .ml('sentiment')
      .parallel(
        this.ai.hazel.ml('classify', { categories: ['billing', 'technical', 'sales'] }),
        this.ai.hazel.prompt('Extract product names from: {{input}}')
      )
      .conditional((context) => context.sentiment === 'negative' && context.classification === 'technical')
      .agent('SeniorTechnicalSupport')
      .conditional((context) => context.classification === 'billing')
      .rag('billing-faq')
      .prompt('Answer billing query: {{input}}')
      .execute(ticketText);
  }
}

This single method replaces what would traditionally be dozens of lines of nested if statements, manual RAG lookups, and complex state management.

Try it Today

HCEL is available now in @hazeljs/ai v0.7.0+.

Whether you are building a simple chat bot or a massively parallel research engine, HCEL provides the expressive power you need without the boilerplate.

Resources:

  • HCEL Guide — Full API reference and advanced patterns.

  • AI Package Documentation — Getting started with AI in HazelJS.

  • Agentic RAG Guide — Learn how to pair HCEL with advanced retrieval strategies.

Happy coding with HazelJS!
