How I Built a Desktop Trading Journal with Electron, React, and SQLite
Last week I shipped a desktop app called Aurafy. It's a trading journal for futures traders that runs entirely locally. No cloud, no accounts, no subscription. I wanted to share the technical decisions behind it because I think the "local first" approach is underrated for tools that handle sensitive financial data.
The Stack
The app is built as a monorepo with three pieces:
Server: Express.js + better-sqlite3. The server runs inside the Electron process (no child process spawn, which cuts startup time to under 2 seconds). SQLite with WAL mode handles all persistence. Every write uses synchronous = FULL because losing trade data is unacceptable.
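In better-sqlite3, that durability setup boils down to two pragmas. Here's a sketch (the helper name is mine, not Aurafy's actual code):

```javascript
// Apply the durability settings to a better-sqlite3 Database:
//   const Database = require('better-sqlite3');
//   const db = new Database(dbPath);
//   configureDurability(db);
function configureDurability(db) {
  // WAL: writers append to a separate log file, so readers are never blocked.
  db.pragma('journal_mode = WAL');
  // FULL: fsync the WAL on every commit -- slower writes, but an
  // acknowledged trade survives a crash or power loss.
  db.pragma('synchronous = FULL');
}
```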
Client: React + Vite + Tailwind CSS + Recharts. Standard SPA that talks to the Express server over localhost. TanStack Query handles all data fetching and caching.
Electron wrapper: The main process starts the Express server in-process, opens a BrowserWindow pointing to localhost, and handles native features like screen recording permissions and floating camera windows.
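In sketch form, the main process wiring looks something like this (the `createServer` factory, module path, and port are placeholders, not Aurafy's actual code):

```javascript
// main.js -- start the Express server inside the Electron main process.
const { app, BrowserWindow } = require('electron');
const { createServer } = require('./server'); // hypothetical Express app factory

const PORT = 4317; // placeholder port

app.whenReady().then(() => {
  // require()'d in-process: no child process to spawn at startup,
  // and no second executable for macOS code signing to flag.
  createServer().listen(PORT, '127.0.0.1', () => {
    const win = new BrowserWindow({ width: 1280, height: 800 });
    win.loadURL(`http://127.0.0.1:${PORT}`);
  });
});
```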
Why Local First?
Trading data is sensitive. Your P&L, your account sizes, your mistakes. Most trading journals upload all of this to their cloud. I didn't want that.
With SQLite, everything lives in ~/Library/Application Support/aurafy/data/journal.db. The user can back it up, move it, or delete it. No API keys, no OAuth flows, no "please log in again."
The tradeoff is no cross-device sync. But for a trading journal that you use at your desk, this hasn't been an issue. Traders don't journal on their phones.
Screen Recording in Electron
One feature I'm proud of is the built-in screen recorder. Traders record their sessions to review later, similar to how athletes watch game film.
Electron's desktopCapturer API provides screen capture. I combine it with getUserMedia for microphone input, mix both audio streams using the Web Audio API, and feed everything into a MediaRecorder.
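The mixing step can be sketched roughly like this (the function name and structure are my own, not Aurafy's; the browser constructors are injectable so the wiring is easy to unit-test, and default to the page globals):

```javascript
// Sketch: mix screen audio and mic audio into one stream, then record
// screen video alongside the mixed audio track.
function mixAndRecord(screenStream, micStream, env = window) {
  const ctx = new env.AudioContext();
  const dest = ctx.createMediaStreamDestination();

  // Route every audio source into a single destination node.
  for (const stream of [screenStream, micStream]) {
    if (stream.getAudioTracks().length > 0) {
      ctx.createMediaStreamSource(stream).connect(dest);
    }
  }

  // Video from the screen capture, audio from the mixed node.
  const combined = new env.MediaStream([
    ...screenStream.getVideoTracks(),
    ...dest.stream.getAudioTracks(),
  ]);

  const recorder = new env.MediaRecorder(combined, { mimeType: 'video/webm' });
  recorder.start(1000); // emit a data chunk every second
  return recorder;
}
```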
The camera overlay is a separate BrowserWindow with transparent: true, alwaysOnTop: true, and frame: false. It floats above all apps like Loom's camera bubble. The HTML is dead simple: a circular div with a video element showing the webcam feed.
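The window options for that overlay look roughly like this (sizes and file name are placeholders):

```javascript
const { BrowserWindow } = require('electron');

const overlay = new BrowserWindow({
  width: 220,            // placeholder size
  height: 220,
  transparent: true,     // no window background: only the circle renders
  frame: false,          // no title bar or borders
  alwaysOnTop: true,     // float above other apps, like Loom's bubble
  resizable: false,
  hasShadow: false,      // avoid a square macOS shadow around the circle
});
overlay.loadFile('camera.html'); // circular div wrapping a <video> element
// Keep it above full-screen apps too:
overlay.setAlwaysOnTop(true, 'screen-saver');
```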
CSV Import with Auto-Detection
Traders export their data from platforms like Tradovate and NinjaTrader as CSV files. The challenge is that every platform uses different column names, date formats, and instrument naming conventions.
Tradovate calls the instrument "MNQM6" while NinjaTrader calls it "MNQ 06-26". Both mean the same thing: Micro Nasdaq futures, June 2026 contract.
I wrote a parser that:
- Auto-detects the platform by checking column headers
- Normalizes instrument names using regex (strip the contract month code)
- Matches to a local instruments table with tick sizes and point values
- Pairs entry/exit executions and calculates P&L
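The normalization step can be sketched like this (the regexes are illustrative, not Aurafy's actual ones):

```javascript
// Strip the contract expiry so both platform formats map to one symbol.
function normalizeInstrument(raw) {
  const s = raw.trim().toUpperCase();

  // NinjaTrader style: "MNQ 06-26" -> symbol, space, MM-YY expiry.
  const nt = s.match(/^([A-Z]{1,4})\s+\d{2}-\d{2}$/);
  if (nt) return nt[1];

  // Tradovate style: "MNQM6" -> symbol + futures month code + year digit.
  const tv = s.match(/^([A-Z]{1,4}?)([FGHJKMNQUVXZ])(\d)$/);
  if (tv) return tv[1];

  return s; // unknown format: pass through for manual review
}
```

Both `"MNQM6"` and `"MNQ 06-26"` normalize to `"MNQ"`, which then keys into the local instruments table.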
The whole import flow is: drop CSV → see preview with detected trades → confirm. No manual mapping.
Lessons Learned
Run the server in-process. My first version spawned a Node child process for the Express server. This added 3+ seconds to startup and caused issues with macOS code signing (the OS saw it as two separate apps). Running Express inside Electron's main process using require() fixed both problems.
SQLite's WAL mode matters. Without it, writes block reads. With journal_mode = WAL and synchronous = FULL, you get concurrent reads during writes and guaranteed durability on crash.
Electron auto-update is fragile. electron-updater creates draft releases on GitHub, which means download URLs return 404 until you manually publish them. I added a CI step that auto-publishes after both Mac and Windows builds complete.
ELECTRON_RUN_AS_NODE will haunt you. If this env var is set (which it is in some development environments), Electron runs as plain Node.js and require('electron') returns a string instead of the module. I spent hours debugging this.
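A cheap way to fail fast instead of debugging for hours is a guard like this (the helper name is hypothetical, not Aurafy's code):

```javascript
// With ELECTRON_RUN_AS_NODE set, require('electron') resolves to the
// path of the Electron binary (a string) instead of the API object.
function assertElectronApi(electron) {
  if (typeof electron === 'string') {
    throw new Error(
      'require("electron") returned a string -- is ELECTRON_RUN_AS_NODE set?'
    );
  }
  return electron;
}

// Usage (in the main process):
//   const { app, BrowserWindow } = assertElectronApi(require('electron'));
```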
Try It
Aurafy is free and available at aurafy.dev. The code handles futures contracts (ES, NQ, CL, MES, MNQ) with proper point values and tick sizes.
If you're interested in the Electron + Express + SQLite architecture pattern, happy to answer questions in the comments.
DEV Community
https://dev.to/william_b898ff4ee6a7e992f/how-i-built-a-desktop-trading-journal-with-electron-react-and-sqlite-5hlo