I am building a Notebook Environment for SQL Inside a Database Client
This post is also available on tabularis.dev.
You know the drill. Write a query, get a table. Need to build on that result? Copy-paste into the next query. Need a chart? Export CSV, open a spreadsheet. Want to document the analysis? Paste SQL into a doc and pray nothing drifts.
I got tired of this loop, so I'm building Notebooks into Tabularis — a cell-based SQL analysis environment that lives inside the database client. No Jupyter, no Python runtime, no context switching. Just SQL + markdown cells, inline charts, and a few features that make multi-query analysis way less painful.
It's still in development, but the core works. Here's what it looks like and how it's shaping up.
How It Works
A notebook is a sequence of cells — SQL or markdown. SQL cells run against your database and show results inline with the same data grid from the query editor (sorting, filtering, resizable panels). Markdown cells are for documentation between queries.
Cell References via CTEs
This is the part I'm most excited about.
Any SQL cell can reference another cell's query with {{cell_N}}. At execution time, it gets resolved as a CTE:
```sql
-- Cell 1: Base query
SELECT customer_id, SUM(amount) AS total
FROM orders
GROUP BY customer_id
```

```sql
-- Cell 3: References Cell 1
SELECT * FROM {{cell_1}}
WHERE total > 1000
```
Becomes:
```sql
WITH cell_1 AS (
  SELECT customer_id, SUM(amount) AS total
  FROM orders
  GROUP BY customer_id
)
SELECT * FROM cell_1
WHERE total > 1000
```
No temp tables, no copy-paste. Change the base query, re-run downstream cells, everything stays in sync. You can chain across multiple cells and every intermediate result stays visible.
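The substitution itself can be done with a small preprocessing pass before the query reaches the driver. A minimal Python sketch (hypothetical helper names, not Tabularis's actual code; chained references would additionally need recursive resolution):

```python
import re

def resolve_cell_refs(sql: str, cells: dict[str, str]) -> str:
    """Resolve {{cell_N}} placeholders into a WITH clause.

    `cells` maps cell names (e.g. "cell_1") to their raw SQL.
    Illustrative sketch only; does not handle chained or nested refs.
    """
    refs = sorted(set(re.findall(r"\{\{(cell_\d+)\}\}", sql)))
    if not refs:
        return sql
    # Build one CTE per referenced cell...
    ctes = ", ".join(f"{name} AS ({cells[name]})" for name in refs)
    # ...and turn each placeholder into a plain table reference.
    body = re.sub(r"\{\{(cell_\d+)\}\}", r"\1", sql)
    return f"WITH {ctes} {body}"
```

Calling it with the Cell 3 query from above produces exactly the `WITH cell_1 AS (...)` form shown.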
Inline Charts
Any result with 2+ columns and at least one row can be charted — bar, line, or pie — directly in the cell. Pick a label column and value columns, done. Config is saved with the cell.
Not meant to replace BI tools. It's for when you're exploring and want a quick visual check before writing the next query.
Parameters
Define once, use everywhere:
```
@start_date = '2024-01-01'
@end_date = '2024-12-31'
@min_amount = 500
```
Every SQL cell with @start_date gets it substituted before execution. Change the value, re-run — all queries pick it up. Great for monthly reports, cohort comparisons, anything where the logic stays the same but inputs change.
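A hedged Python sketch of how definitions might be parsed and applied (illustrative only; naive textual substitution like this is not the same as driver-level parameter binding and would need care around quoting):

```python
import re

def parse_params(block: str) -> dict[str, str]:
    """Parse lines like "@start_date = '2024-01-01'" into a dict.
    Hypothetical reader for the parameter block shown above."""
    params = {}
    for line in block.strip().splitlines():
        m = re.match(r"@(\w+)\s*=\s*(.+)", line.strip())
        if m:
            params[m.group(1)] = m.group(2).strip()
    return params

def substitute(sql: str, params: dict[str, str]) -> str:
    """Replace every @name with its literal value before execution.
    Unknown @names are left untouched."""
    return re.sub(r"@(\w+)", lambda m: params.get(m.group(1), m.group(0)), sql)

params = parse_params("@start_date = '2024-01-01'\n@min_amount = 500")
sql = substitute("SELECT * FROM orders WHERE created_at >= @start_date AND amount > @min_amount", params)
```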
Parallel Execution
Not every cell depends on the previous one. Mark independent cells with the lightning bolt icon and they run concurrently during "Run All" instead of waiting in sequence. For notebooks with heavy queries against different tables, this makes a real difference.
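A scheduler for this could batch consecutive lightning-marked cells and fan them out to a thread pool. A sketch, assuming a flat cell list where each cell carries a `parallel` flag (the real Tabularis scheduler may work differently):

```python
from concurrent.futures import ThreadPoolExecutor

def run_cells(cells, execute):
    """Run cells in order, but execute consecutive parallel-marked
    cells concurrently. `execute` runs one SQL string and returns
    its result. Hypothetical sketch only."""
    results, batch = [], []

    def flush():
        # Drain the pending batch of independent cells concurrently;
        # pool.map preserves result order within the batch.
        if batch:
            with ThreadPoolExecutor() as pool:
                results.extend(pool.map(execute, batch))
            batch.clear()

    for cell in cells:
        if cell["parallel"]:
            batch.append(cell["sql"])   # independent: defer into batch
        else:
            flush()                     # dependent: wait for batch first
            results.append(execute(cell["sql"]))
    flush()
    return results
```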
Run All + Stop on Error
Ctrl+Shift+Enter runs every SQL cell top to bottom. Stop on Error controls whether it halts at the first failure or keeps going. After execution, a summary card shows succeeded/failed/skipped counts — click a failed cell to jump straight to it.
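The stop-on-error bookkeeping behind the summary card could look like this (a sketch, assuming `execute` raises on failure; not the actual implementation):

```python
def run_all_with_summary(cells, execute, stop_on_error=True):
    """Execute cells top to bottom, tallying succeeded/failed/skipped.
    Once halted, remaining cells are counted as skipped."""
    summary = {"succeeded": 0, "failed": 0, "skipped": 0}
    halted = False
    for sql in cells:
        if halted:
            summary["skipped"] += 1
            continue
        try:
            execute(sql)
            summary["succeeded"] += 1
        except Exception:
            summary["failed"] += 1
            if stop_on_error:
                halted = True
    return summary
```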
Multi-Database in One Notebook
Each SQL cell can target a different database connection. Pull from production PostgreSQL in one cell, compare with your analytics SQLite in the next. Works across MySQL, MariaDB, PostgreSQL, and SQLite.
Execution History
Every cell keeps its last 10 runs — timestamp, duration, row count. You can restore any previous query version. Useful when you've been iterating and need to go back.
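A per-cell history capped at ten entries is naturally a ring buffer. A Python sketch (field names are illustrative, not the real schema):

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class CellHistory:
    """Keep the last 10 runs of a cell, newest first."""
    runs: deque = field(default_factory=lambda: deque(maxlen=10))

    def record(self, query, duration_ms, row_count):
        # appendleft + maxlen silently drops the 11th-oldest run.
        self.runs.appendleft(
            {"query": query, "duration_ms": duration_ms, "row_count": row_count}
        )

    def restore(self, index):
        """Return the query text from a previous run (0 = most recent)."""
        return self.runs[index]["query"]
```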
AI Assist
Each SQL cell has AI and Explain buttons — describe what you want, get SQL back, or break down an existing query. There's also an auto-naming feature: click the sparkles icon and AI generates a cell name based on the content. Named cells show up in a notebook outline for navigation.
Organization
- Collapse cells to show just headers
- Drag and drop to reorder
- Cell names (manual or AI-generated) for identity
- Markdown cells as section dividers
Import / Export
- `.tabularis-notebook` — JSON with cells, parameters, charts. No result data. Share it, import it, connect to a different DB, run it.
- HTML export — self-contained document with rendered markdown, syntax highlighting, embedded result tables. Dark-themed.
- Individual results export as CSV or JSON.
What's Not Done Yet
Being honest about rough edges:
- Large notebooks (30+ cells) need better virtualization
- Circular reference detection is missing — needs a dependency graph
- Chart customization is minimal (no axis labels, no color palettes)
- Keyboard navigation between cells is partially implemented
- Notebook-level undo/redo doesn't exist yet (cell-level works via Monaco)
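On the missing circular reference detection: the dependency graph falls out of the same `{{cell_N}}` scan used for resolution, and a depth-first search finds cycles. A possible approach, not necessarily what Tabularis will ship:

```python
import re

def find_cycle(cells):
    """Detect circular {{cell_N}} references via depth-first search.

    `cells` maps cell names to SQL text. Returns one cycle as a list
    of names, or None if the references form a DAG. Sketch only.
    """
    graph = {name: set(re.findall(r"\{\{(cell_\d+)\}\}", sql))
             for name, sql in cells.items()}
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = dict.fromkeys(graph, WHITE)

    def dfs(node, path):
        color[node] = GRAY
        for dep in graph.get(node, ()):
            if color.get(dep, BLACK) == GRAY:
                return path + [dep]          # back edge: cycle found
            if color.get(dep) == WHITE:
                cycle = dfs(dep, path + [dep])
                if cycle:
                    return cycle
        color[node] = BLACK
        return None

    for name in graph:
        if color[name] == WHITE:
            cycle = dfs(name, [name])
            if cycle:
                return cycle
    return None
```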
Why Build This?
Database clients haven't really evolved beyond "connect, query, see table." Analysis tooling moved forward — Jupyter, Observable, dbt — but the DB client stayed behind.
Notebooks in Tabularis bet that the database client is the right place for exploratory SQL analysis. You already have the connection, the schema, autocomplete, query history. Layer cells, charts, references, and parameters on top of that, and the whole workflow — first query to shareable report — happens without switching tools.
It's not a Jupyter replacement. No Python, no R. It's purpose-built for SQL and for the kind of work most people actually do with their database every day: ad-hoc exploration, report building, data validation, performance investigation. That focus is a feature.
Landing soon. If you want to try it, check out Tabularis.
Originally published on DEV Community: https://dev.to/debba/i-am-building-a-notebook-environment-for-sql-inside-a-database-client-22j7