Show HN: Ray – an open-source AI financial advisor that runs in your terminal
I've been using this daily for 4 months and figured others might find it useful. This is my first open-source project, so I'd love any feedback.

Ray connects to your bank via Plaid, stores everything in an encrypted local SQLite database, and lets you ask questions about your finances in natural language. No cloud, no account: your data is stored on your machine. Before anything reaches the LLM, all PII is stripped. Your name, companies, and transaction details are redacted and replaced with tokens, then rehydrated locally in the response. The AI never sees who you are.

Comments URL: https://news.ycombinator.com/item?id=47644133
Points: 6 | Comments: 2
Talk to your money
Ray is an AI financial advisor that connects to your bank, understands your full picture, and gives real advice, all locally on your machine.
Your data stays local · AES-256 encrypted · MIT licensed · 5 min setup
3 stars on GitHub
The Apps
Dashboards show you what happened.
Mint, Copilot, Monarch — they sort your transactions into pie charts and send you notifications. They're good at showing you what you spent. They never tell you what to do about it. And when your subscription expires, so does your data.
The Spreadsheets
Powerful when you keep them updated.
You built the perfect spreadsheet once. Formulas, projections, a debt payoff timeline. But it only works when you do — and manual data entry doesn't survive a busy month. You haven't opened it since February.
Then there’s Ray
The advisor you’d hire if they weren’t $200/hour.
Ray connects directly to your bank accounts. It sees every transaction, every balance, every debt. When you ask “can I afford this?” it doesn’t guess — it queries your actual data, runs the math, and gives you a real answer. It remembers your goals, tracks your progress, and proactively flags problems before they become emergencies.
How it works
Privacy
Ray runs entirely on your computer. There’s no cloud, no account, no server storing your data. Your financial history lives in an encrypted database on your hard drive, and your name is scrubbed before anything reaches the AI.
Raw financial data (PII stays in the local database):
- Sarah Chen earns $85k at Acme Corp.
- James manages the household budget.
- Checking balance: $4,802
- Visa balance: -$1,200 @ 22.99% APR

What the AI receives: the same records with names, employers, and other identifiers replaced by tokens.
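The token-masking flow described above can be sketched in a few lines of Python. The function names and token format here are illustrative, not Ray's actual implementation:

```python
def mask_pii(text: str, pii_terms: list[str]) -> tuple[str, dict]:
    """Replace known PII strings with opaque tokens before the text
    leaves the machine; return the mapping for local rehydration."""
    mapping = {}
    masked = text
    for i, term in enumerate(pii_terms):
        token = f"[PII_{i}]"
        mapping[token] = term
        masked = masked.replace(term, token)
    return masked, mapping

def rehydrate(text: str, mapping: dict) -> str:
    """Restore the original values in the model's response, locally."""
    for token, term in mapping.items():
        text = text.replace(token, term)
    return text

masked, mapping = mask_pii("Sarah Chen earns $85k at Acme Corp.",
                           ["Sarah Chen", "Acme Corp"])
# masked == "[PII_0] earns $85k at [PII_1]."
```

The key point is that the mapping never leaves the machine: only the masked string is sent, and the response is rehydrated locally.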
Encrypted at rest
AES-256 encrypted database with scrypt key derivation. File permissions locked to your user account.
view source
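The scheme described above can be sketched with Python's standard library: derive a 256-bit key from a passphrase with scrypt, then restrict the database file's permissions to the owning user. The cost parameters are common defaults chosen for illustration, not Ray's actual configuration:

```python
import hashlib
import os
import stat
import tempfile

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # scrypt key derivation; parameters are illustrative defaults
    return hashlib.scrypt(
        passphrase.encode(), salt=salt,
        n=2**14, r=8, p=1,   # CPU/memory cost parameters
        maxmem=2**26,        # allow the ~16 MiB this setting needs
        dklen=32,            # 32 bytes = AES-256 key size
    )

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)

# Lock the (encrypted) database file down to rw------- for this user.
with tempfile.NamedTemporaryFile(delete=False) as f:
    db_path = f.name
os.chmod(db_path, 0o600)
mode = stat.S_IMODE(os.stat(db_path).st_mode)
```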
No cloud storage
Everything stays in ~/.ray on your machine. Even with a Ray API key, data is processed in-flight and never stored on our servers.
Fully auditable
Every AI tool call is logged locally. You can see exactly what data was accessed and when.
view source
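A local tool-call log like the one described can be as simple as an append-only JSON-lines file. The format and field names below are assumptions, not Ray's actual schema:

```python
import json
import os
import tempfile
import time

def log_tool_call(log_path: str, tool: str, args: dict) -> None:
    # Append one JSON line per tool call: what ran, with what, and when.
    entry = {"ts": time.time(), "tool": tool, "args": args}
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_path = os.path.join(tempfile.gettempdir(), "ray_audit_demo.jsonl")
open(log_path, "w").close()  # start fresh for the demo
log_tool_call(log_path, "get_balance", {"account": "checking"})

with open(log_path) as f:
    entries = [json.loads(line) for line in f]
```

Because the log is plain text on your own disk, auditing it is just reading the file.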
Two outbound calls
Plaid for bank sync, Anthropic for AI chat (PII-masked). That's it. No telemetry. No analytics.
view source
Integrations
Ray connects to 12,000+ financial institutions through Plaid — from major banks to local credit unions.
What makes Ray different
Ray builds a profile of your life over time — your goals, your family, your career, your priorities. Every conversation makes it smarter. The advice you get on day 30 is nothing like day one.
Family situation
Married, two kids, one income
Career stage
Just started a new job at $95k
Financial goals
Pay off student loans by 2027
Risk tolerance
Conservative — no crypto
Life events
Baby due in March, buying a house
Past decisions
We cut DoorDash last month
What Ray can do
Ray has 30+ tools that query your real financial data. It looks things up, runs calculations, and takes action.
"Can I afford to take this trip?"
Ray projects your balance forward based on actual income and spending patterns. See the impact before you commit.
Only in Ray
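A toy version of that projection rolls the balance forward from steady monthly income and average spend, then subtracts the one-off cost. The function and all numbers except the $4,802 checking balance from the example above are illustrative, not Ray's actual model:

```python
def project_balance(balance: float, monthly_income: float,
                    monthly_spend: float, months: int,
                    one_off: float = 0.0) -> float:
    """Roll the balance forward month by month, then apply a one-off cost."""
    for _ in range(months):
        balance += monthly_income - monthly_spend
    return balance - one_off

# $4,802 checking, with assumed $5,200/mo income, $4,900/mo average
# spend, and a $1,400 trip three months out
end = project_balance(4802, 5200, 4900, months=3, one_off=1400)
# end == 4302: the trip fits without going negative
```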
"How's my score today?"
A daily 0-100 behavior score with streaks and unlockable achievements. No restaurants for a week? That's Kitchen Hero. Five zero-spend days? Monk Mode. It turns financial discipline into a game you actually want to play.
Only in Ray
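A scoring system like this might look as follows. The achievement names come from the page; the formula and thresholds are invented for illustration:

```python
def daily_score(zero_spend_streak: int, no_restaurant_days: int) -> int:
    """Hypothetical 0-100 behavior score."""
    score = 50                                # neutral baseline
    score += min(zero_spend_streak * 5, 25)   # reward zero-spend streaks
    score += min(no_restaurant_days * 3, 25)  # reward cooking at home
    return min(score, 100)

def achievements(zero_spend_streak: int, no_restaurant_days: int) -> list[str]:
    earned = []
    if no_restaurant_days >= 7:
        earned.append("Kitchen Hero")  # no restaurants for a week
    if zero_spend_streak >= 5:
        earned.append("Monk Mode")     # five zero-spend days
    return earned
```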
"What did we decide last time?"
Ray remembers your goals, preferences, life events, and past decisions. Every conversation builds on the last one.
Only in Ray
"Where is all my money going?"
Category breakdowns, period comparisons, and trend detection. Ray finds the patterns you miss in your own spending.
"How much am I spending on food delivery month over month?"
Ray breaks down any category across any time range. Spot trends you'd never catch scrolling through transactions.
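A month-over-month category breakdown like this is a straightforward aggregate over the local SQLite store. The table and column names below are assumptions for illustration, not Ray's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (date TEXT, category TEXT, amount REAL)")
conn.executemany("INSERT INTO transactions VALUES (?, ?, ?)", [
    ("2025-05-03", "food_delivery", 32.50),
    ("2025-05-18", "food_delivery", 27.10),
    ("2025-06-07", "food_delivery", 41.00),
    ("2025-06-21", "groceries", 88.20),
])

# Sum one category per calendar month
rows = conn.execute("""
    SELECT strftime('%Y-%m', date) AS month, ROUND(SUM(amount), 2)
    FROM transactions
    WHERE category = 'food_delivery'
    GROUP BY month ORDER BY month
""").fetchall()
# rows == [('2025-05', 59.6), ('2025-06', 41.0)]
```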
"Can you audit to make sure my tenants have paid for the past 12 months?"
Ray searches your real transaction history, flags gaps, and gives you a straight answer. Landlord, freelancer, whatever — if the data is in your bank, Ray can check it.
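The gap-flagging half of that audit reduces to a simple set check once matching deposits have been grouped by month. The function and data here are illustrative:

```python
def missing_months(paid_months: set[str], expected: list[str]) -> list[str]:
    """Return the expected months with no matching payment, in order."""
    return [m for m in expected if m not in paid_months]

expected = [f"2025-{m:02d}" for m in range(1, 13)]  # the past 12 months
paid = set(expected) - {"2025-04"}                  # April deposit missing
gaps = missing_months(paid, expected)
# gaps == ["2025-04"]
```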
Pricing
A human financial advisor costs $200/hr. Ray costs $10/mo — or nothing at all.
Self-Hosted
full control
Bring your own keys
$0 / forever
- Open source, MIT licensed
- Your own Anthropic API key
- Your own Plaid credentials
- Full model selection
- All features included
View source on GitHub
Steps to self-host: ~2 weeks
~20 min of work + a 1-2 week wait for Plaid approval
Ray Hosted Keys
most popular
We handle everything
$10 / month
- AI and bank access included
- No Plaid application needed
- Same privacy guarantees
- All features included
- Cancel anytime

Steps to sign up: ~5 min
1. Run ray setup to get your key (~2 min)
2. Run ray link to connect your bank (~1 min)
Total: ~3 minutes
Ray is free, open source, and takes five minutes to set up. Your data never leaves your machine.
“I tried every finance app, built every spreadsheet, and talked to a financial advisor who charged $200/hr to tell me things I already knew. Nothing actually helped me make better decisions with my own money. So I built the thing I wanted — an advisor that knows my real numbers, runs locally, and is honest enough to open-source.”
— Clark Dinnison, creator of Ray