Desktop Canary v2.1.48-canary.35
🤖 Canary Build – v2.1.48-canary.35
Automated canary build from the canary branch.
Commit Information
- Based on changes since v2.1.48-canary.34
- Commit count: 2
- 25cf3bfafd 🐛 fix(userMemories): i18n for purge button (#13569) (Neko)
- 3cb7206d90 ✨ feat: create new topic every 4 hours (#13570) (Rdmclin2)
⚠️ Important Notes
- This is an automated canary build and is NOT intended for production use.
- Canary builds are triggered by build/fix/style commits on the canary branch.
- May contain unstable or incomplete changes. Use at your own risk.
- It is strongly recommended to back up your data before using a canary build.
📦 Installation
Download the appropriate installer for your platform from the assets below.
| Platform | File |
| --- | --- |
| macOS (Apple Silicon) | .dmg (arm64) |
| macOS (Intel) | .dmg (x64) |
| Windows | .exe |
| Linux | .AppImage / .deb |

My forays into cyborgism: theory, pt. 1
In this post, I share the thinking behind the Exobrain system I have built for myself. In another post, I'll describe the actual system.

I think the standard way of relating to LLMs/AIs is as an external tool (or "digital mind") that you use and/or collaborate with. Instead of doing the coding yourself, you ask the LLM to do it for you. Instead of doing the research, you ask it to. That's great, and there is utility in those use cases. Now, while I hardly engage in the delusion that humans can have some kind of long-term symbiotic integration with AIs that prevents them from replacing us [1], in the short term, I think we can automate, outsource, and augment our thinking with LLMs/AIs. We already augment our cognition with technologies such as writing and mundane software. Organizi

Eight years of wanting, three months of building with AI
Lalit Maganti provides one of my favorite pieces of long-form writing on agentic engineering I've seen in ages. They spent eight years thinking about and then three months building syntaqlite, which they describe as "high-fidelity devtools that SQLite deserves". The goal was to provide fast, robust, and comprehensive linting and verification tools for SQLite, suitable for use in language servers and other development tools: a parser, formatter, and verifier for SQLite queries. I've found myself wanting this kind of thing in the past, hence my (far less production-ready) sqlite-ast project from a few months ago. Lalit had been procrastinating on this project for years, because of the inevitable tedium of needing to work through

If LLMs Have No Memory, How Do They Remember Anything?
A technical but approachable guide to how large language models handle memory, from the math behind statelessness to the engineering behind systems that make AI feel like it actually knows you.

An LLM is just a math function. A stateless one. Let's start with the uncomfortable truth. At its core, a large language model at inference time is nothing more than a parameterized mathematical function. It takes an input, runs it through billions of learned parameters, and produces an output:

Y = f_θ(X)

Here, X is your input (the prompt), θ (theta) represents all the learned weights baked into the model during training, and Y is the output: the response the model generates. Simple. But here's the kicker: this function is stateless. What does "stateless" actually mean? Stateless means that whe
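The excerpt's Y = f_θ(X) framing can be sketched in a few lines of Python. The `llm` function below is a toy stand-in for a real model (an assumption for illustration, not any actual API): what matters is the calling pattern, where the function remembers nothing between calls, and a chat layer fakes memory by re-sending the whole transcript as the next input.

```python
# Minimal sketch of statelessness: `llm` is a toy stand-in for Y = f_theta(X).
# Its output depends only on the current input prompt, never on past calls.

def llm(prompt: str) -> str:
    """Stand-in model: deterministic function of the prompt alone."""
    return f"echo({len(prompt)} chars)"

# Two identical calls give identical answers: nothing is remembered in between.
assert llm("hello") == llm("hello")

# "Memory" is engineered OUTSIDE the model: the client keeps the transcript
# and re-sends the entire thing as the next prompt X.
history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    reply = llm("\n".join(history))   # full transcript becomes the input
    history.append(f"Assistant: {reply}")
    return reply

chat("My name is Ada.")
chat("What is my name?")  # the model only "knows" because history was re-sent
```

The design point: the only channel through which the model can "remember" anything is X itself, so any memory system (chat history, summaries, retrieval) ultimately reduces to deciding what to pack into the next prompt.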