Introducing HCEL: The Most Fluent Way to Build AI Pipelines in TypeScript
In the rapidly evolving landscape of AI development, orchestration is everything. As developers move from simple LLM calls to complex, multi-step agentic workflows, the need for a clean, expressive, and type-safe way to define these pipelines becomes critical.
Today, we are excited to introduce HCEL (HazelJS Composable Expression Language)—a fluent, TypeScript-native DSL designed to make AI orchestration as intuitive as a standard functional chain.
What is HCEL?
HCEL stands for HazelJS Composable Expression Language. It is not a separate language you need to learn, but a fluent API provided by the @hazeljs/ai package. It allows you to "chain" together different AI capabilities—prompts, RAG searches, agents, and machine learning models—into a single, executable pipeline.
Why HCEL?
Traditional AI pipelines often suffer from "pyramid of doom" callback structures or messy async/await boilerplate for passing state between steps. HCEL solves this by providing:
- Fluent Method Chaining: Build complex logic step-by-step.
- Implicit Context Passing: The output of operation A automatically becomes the input for operation B.
- Observability by Default: Every step in the chain is automatically traced and timed.
- Flow Integration: Easily convert any HCEL chain into a durable @hazeljs/flow node.
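To ground these ideas before the real examples, here is a minimal sketch, in plain TypeScript, of how a fluent chain with implicit context passing can be modeled. The `Chain` and `Step` names are illustrative assumptions, not the actual `@hazeljs/ai` implementation.

```typescript
// A step takes the previous step's output and produces the next input.
type Step = (input: unknown) => Promise<unknown>;

class Chain {
  private steps: Step[] = [];

  // Each builder method records a step and returns `this` for chaining.
  pipe(step: Step): Chain {
    this.steps.push(step);
    return this;
  }

  // execute() feeds each step's output into the next step's input.
  async execute(input?: unknown): Promise<unknown> {
    let value = input;
    for (const step of this.steps) {
      value = await step(value);
    }
    return value;
  }
}

// Usage: two steps chained; the first step's output becomes the second's input.
const chain = new Chain()
  .pipe(async (text) => `summary of: ${text}`)
  .pipe(async (summary) => ({ sentiment: 'positive', source: summary }));
```

The point of the sketch is that "fluent" is not magic: the builder just accumulates steps, and a single loop threads the context through them.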
Real-World Examples from Our Codebase
Let's explore actual working examples from our @hazeljs/ai/examples directory that demonstrate HCEL in action.
Example 1: Simple HCEL Chain
From hcel-demo.ts:
```typescript
// Simple HCEL Chain
const simpleResult = await ai.hazel
  .prompt('What is HazelJS?')
  .execute() as string;

console.log(`Result: ${simpleResult.slice(0, 100)}...`);
```
What's happening here?
- We start with ai.hazel, the entry point for HCEL chains.
- .prompt() creates a text generation operation.
- .execute() runs the entire chain and returns the result.
- The result is typed as string; the as string cast makes the chain's output type explicit.
Example 2: RAG + ML Chain
From hcel-demo.ts:
```typescript
// RAG + ML Chain
const ragMlResult = await ai.hazel
  .rag('docs')      // Search documentation
  .ml('sentiment')  // Analyze sentiment
  .execute() as SentimentResult;

console.log(`Sentiment: ${ragMlResult.sentiment} (${ragMlResult.score})`);
```
What's happening here?
- .rag('docs') searches your documentation for relevant context.
- The RAG result is automatically passed to the next step.
- .ml('sentiment') runs sentiment analysis on the RAG output.
- The final result includes both the sentiment and its confidence score.
Example 3: Streaming Chain
From hcel-demo.ts:
```typescript
// Streaming Chain
console.log('Assistant: ');
for await (const chunk of ai.hazel
  .prompt('Tell me a short story about AI and creativity')
  .stream()) {
  process.stdout.write(chunk as string);
}
```
What's happening here?
- .stream() instead of .execute() enables real-time streaming.
- Perfect for chat interfaces or long-running generations.
- Each chunk is processed as it arrives from the AI provider.
Example 4: Chain with Context & Observability
From hcel-demo.ts:
```typescript
const contextChain = ai.hazel
  .prompt('Analyze this user feedback: {feedback}')
  .ml('sentiment')
  .context({ userId: 'user-123', sessionId: 'session-456' })
  .observe((event) => {
    console.log(event);
  });

const contextResult = await contextChain.execute() as SentimentResult;
```
What's happening here?
- .context() adds metadata that flows through the entire chain.
- .observe() hooks into every operation for logging and monitoring.
- Perfect for debugging and production observability.
Production-Ready Features
HCEL isn't just for demos—it's built for production workloads. Let's look at our production example:
Example 5: Persistent HCEL Chains
From hcel-production-demo.ts:
```typescript
// Production HazelAI with Persistence Configuration
const ai = HazelAI.create({
  defaultProvider: 'openai',
  model: 'gpt-4o',
  temperature: 0.7,

  // Production persistence configuration
  persistence: {
    memory: {
      store: 'in-memory', // Change to 'postgres' or 'redis' for production
      ttl: 3600,          // 1 hour
    },
    rag: {
      vectorStore: 'in-memory', // Change to 'pinecone', 'qdrant', etc. for production
      options: {
        topK: 5,
        chunkSize: 1000,
        chunkOverlap: 200,
      },
    },
    chains: {
      store: 'in-memory', // Change to 'postgres' or 'redis' for production
      ttl: 7200,          // 2 hours
    },
  },
});

// Create a persistent analysis chain
const analysisChain = ai.hazel
  .prompt('Analyze user feedback: This product is amazing! It works perfectly.')
  .persist('user-feedback-analysis') // Persist this chain
  .cache(1800);                      // Cache results for 30 minutes

const result = await analysisChain.execute();
```
Production Features:
- Persistence: Chain state can be saved to Redis or Postgres (in-memory by default).
- Caching: Results are cached to avoid redundant API calls.
- Configuration: Production-ready settings for stores and TTLs.
Example 6: Parallel Operations
From hcel-demo.ts:
```typescript
// Parallel Operations
const parallelChain = ai.hazel
  .parallel(
    ai.hazel.prompt('Summarize: "AI is transforming the world"'),
    ai.hazel.ml('sentiment', { labels: ['positive', 'negative', 'neutral'] })
  );

const parallelResult = await parallelChain.execute();
```
What's happening here?
- .parallel() executes multiple operations simultaneously.
- Perfect for independent tasks that can run concurrently.
- Results are collected and returned as an array.
Advanced Orchestration Patterns
Example 7: Flow Engine Integration
From hcel-flow-demo.ts:
```typescript
// HCEL-Flow Bridge - Wraps HCEL chains as Flow Engine nodes
class HCELFlowNode {
  constructor(private chain: any) {}

  async execute(
    input: unknown
  ): Promise<{ status: 'ok' | 'error'; output?: unknown; reason?: string }> {
    try {
      const result = await this.chain.execute(input);
      return { status: 'ok', output: result };
    } catch (error) {
      return {
        status: 'error',
        reason: error instanceof Error ? error.message : 'Unknown error',
      };
    }
  }
}

// Convert HCEL chain to Flow node
const hcelNode = new HCELFlowNode(
  ai.hazel
    .prompt('Process user request: {{input}}')
    .rag('knowledge-base')
    .agent('support-specialist')
);
```
What's happening here?
- HCEL chains can be wrapped as Flow Engine nodes.
- Enables durable, long-running workflows with AI steps.
- Perfect for human-in-the-loop processes and complex business logic.
How to Run These Examples
All examples are available in our packages/ai/examples directory:
Setup
```bash
# From the packages/ai directory
npm run build

# Set your API key
export OPENAI_API_KEY=your-key-here
```
Run the Examples
```bash
# Basic HCEL demo
node dist/examples/hcel-demo.js

# Production features demo
node dist/examples/hcel-production-demo.js

# Flow integration demo
node dist/examples/hcel-flow-demo.js
```
What Each Example Demonstrates
- hcel-demo.ts - Basic HCEL operations, streaming, parallel execution
- hcel-production-demo.ts - Persistence, caching, memory management
- hcel-flow-demo.ts - Integration with the HazelJS Flow Engine
- simple-demo.ts - Works without API keys, for testing
- unified-platform-example.ts - Complete platform showcase
The Power of Implicit Context
The core magic of HCEL is the Implicit Context. Every step in the builder returns a new state that includes a pipe. The pipe is what transforms the output of Operation N into the input of Operation N+1.
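One way to picture the pipe is as plain left-to-right function composition: each operation is a function, and the chain is their composition. The pipeSteps helper below is illustrative only, not part of the HCEL API.

```typescript
// Compose functions left to right: the output of step N is the input of N+1.
const pipeSteps =
  (...fns: Array<(x: any) => any>) =>
  (input: unknown) =>
    fns.reduce((acc, fn) => fn(acc), input);

// A prompt-like step followed by an ml-like step.
const classify = pipeSteps(
  (q: string) => `topics(${q})`,
  (topics: string) => ({ label: 'tech', from: topics })
);
```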
Explicit vs Implicit Context
```typescript
// ❌ Traditional approach - explicit context passing
const summary = await ai.chat('Summarize: ' + userQuery);
const context = await ai.rag('docs', summary);
const analysis = await ai.agent('analyst', context);

// ✅ HCEL approach - implicit context passing
const result = await ai.hazel
  .prompt('Summarize: {{input}}')
  .rag('docs')
  .agent('analyst')
  .execute(userQuery);
```
Context Transformation
```typescript
// Each step transforms the context
const pipeline = ai.hazel
  .prompt('Extract key topics: {{input}}')                   // string → string
  .ml('classify', { labels: ['tech', 'business', 'other'] }) // string → ClassificationResult
  .conditional((result) => result.label === 'tech')          // ClassificationResult → boolean
  .prompt('Explain this tech topic: {{input}}')              // ClassificationResult → string
  .execute('AI is revolutionizing software development');
```
Type Safety and IntelliSense
Because HCEL is built with TypeScript, you get full type safety and IDE support:
```typescript
// TypeScript knows the return type based on the last operation
const sentimentResult = await ai.hazel
  .prompt('Analyze: {{input}}')
  .ml('sentiment')
  .execute() as SentimentResult; // Type is inferred!

// Get full chain summary with types
const summary = ai.hazel
  .prompt('Test')
  .ml('sentiment')
  .getSummary();

console.log(summary.operations); // ['prompt', 'ml']
console.log(summary.config);     // Chain configuration
```
Getting Started with HCEL
Installation
```bash
npm install @hazeljs/ai @hazeljs/core
```
Basic Usage
```typescript
import { HazelAI } from '@hazeljs/ai';

const ai = HazelAI.create({
  defaultProvider: 'openai',
  model: 'gpt-4o',
});

// Your first HCEL chain
const result = await ai.hazel
  .prompt('What is the future of AI?')
  .execute();
```
Advanced Usage
```typescript
// Production-ready chain
const pipeline = ai.hazel
  .persist('user-analysis')
  .prompt('Analyze user feedback: {{input}}')
  .rag('product-docs')
  .ml('sentiment')
  .cache(3600)
  .observe((event) => console.log(event));
```
What's Next?
HCEL is just the beginning. We're working on:
- More ML Operations: Classification, extraction, translation
- Advanced Flow Patterns: Conditional branching, loops, retries
- Enhanced Observability: OpenTelemetry integration, custom metrics
- Visual Builder: Web-based HCEL chain designer
- Template Library: Pre-built chains for common use cases
Join the HCEL Community
- GitHub: hazeljs/hazeljs
- Documentation: hazeljs.com/docs
- Examples: packages/ai/examples
- Discord: Join our developer community
Conclusion
HCEL represents a fundamental shift in how we think about AI orchestration. By providing a fluent, type-safe, and production-ready API, we're making sophisticated AI workflows accessible to every TypeScript developer.
Whether you're building simple chatbots or complex multi-agent systems, HCEL provides the tools you need to compose, observe, and scale your AI operations with confidence.
Try HCEL today and experience the future of AI orchestration in TypeScript! 🚀
How Context Injection Works
- Template Injection: The {{input}} placeholder in .prompt() tells HCEL exactly where to inject the previous step's result.
- Auto-Injection: Operations like .rag() or .agent() automatically use the current context as their search query or instruction if no specific input is provided.
This allows you to focus on the logic, not the data-shuffling boilerplate.
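The template-injection half of this can be sketched in a few lines; renderTemplate below is a hypothetical helper, not HCEL's implementation.

```typescript
// Replace every {{input}} placeholder with the previous step's output.
function renderTemplate(template: string, input: string): string {
  return template.replace(/\{\{input\}\}/g, input);
}

// Usage: the placeholder marks exactly where the context lands.
const rendered = renderTemplate('Summarize: {{input}}', 'hello AI');
```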
Real-World Case Study: Automated Support Triage
Here is a full implementation of an automated support triage system built with HCEL:
```typescript
@Service()
export class SupportTriage {
  constructor(private readonly ai: AIEnhancedService) {}

  async handleTicket(ticketText: string) {
    return this.ai.hazel
      .persist('support-triage')
      .ml('sentiment')
      .parallel(
        this.ai.hazel.ml('classify', { categories: ['billing', 'technical', 'sales'] }),
        this.ai.hazel.prompt('Extract product names from: {{input}}')
      )
      .conditional((context) =>
        context.sentiment === 'negative' && context.classification === 'technical'
      )
      .agent('SeniorTechnicalSupport')
      .conditional((context) => context.classification === 'billing')
      .rag('billing-faq')
      .prompt('Answer billing query: {{input}}')
      .execute(ticketText);
  }
}
```
This single method replaces what would traditionally be dozens of lines of nested if statements, manual RAG lookups, and complex state management.
Try it Today
HCEL is available now in @hazeljs/ai v0.7.0 and later.
Whether you are building a simple chatbot or a massively parallel research engine, HCEL provides the expressive power you need without the boilerplate.
Resources:
- HCEL Guide — Full API reference and advanced patterns.
- AI Package Documentation — Getting started with AI in HazelJS.
- Agentic RAG Guide — Learn how to pair HCEL with advanced retrieval strategies.
Happy coding with HazelJS!
DEV Community
https://dev.to/arslan_mecom/introducing-hcel-the-most-fluent-way-to-build-ai-pipelines-in-typescript-38ba
