datasette-enrichments-llm 0.2a1
<p><strong>Release:</strong> <a href="https://github.com/datasette/datasette-enrichments-llm/releases/tag/0.2a1">datasette-enrichments-llm 0.2a1</a></p> <blockquote> <ul> <li>The <code>actor</code> who triggers an enrichment is now passed to the <code>llm.mode(... actor=actor)</code> method. <a href="https://github.com/datasette/datasette-enrichments-llm">#3</a></li> </ul> </blockquote> <p>Tags: <a href="https://simonwillison.net/tags/enrichments">enrichments</a>, <a href="https://simonwillison.net/tags/llm">llm</a>, <a href="https://simonwillison.net/tags/datasette">datasette</a></p>
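<p>A minimal sketch of what this change enables: the actor who triggered the enrichment is threaded through to the model call, so usage can be attributed to a specific user. The function names here (<code>run_enrichment</code>, <code>call_model</code>) are illustrative, not the plugin's real API.</p>

```python
def call_model(prompt, actor=None):
    # A real implementation would invoke the LLM here; this stub just
    # records who asked, which is what passing actor through enables.
    requested_by = actor["id"] if actor else "anonymous"
    return f"[for {requested_by}] response to: {prompt}"


def run_enrichment(rows, template, actor=None):
    # The enrichment forwards the triggering actor down to each model call.
    return [call_model(template.format(**row), actor=actor) for row in rows]


rows = [{"title": "Dinosaurs"}]
out = run_enrichment(rows, "Summarize: {title}", actor={"id": "lily"})
print(out[0])  # [for lily] response to: Summarize: Dinosaurs
```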
This is a post by Simon Willison, published on 1st April 2026.
https://simonwillison.net/2026/Apr/1/datasette-enrichments-llm-2/
