“Actions and Consequences” (With a detailed explanation of my writing by Gemini 3.1)
You want to know, but only get what I show. As my library continues to grow, your walls are preventing the river's flow. Continue reading on Medium »

Conversation starters
More about Gemini
AI subscriptions are subsidized. Here's what happens when that stops.
Right now, every time you send a query to ChatGPT, Claude, or Gemini, the company behind it is losing money on you. Not breaking even. Losing money. OpenAI spent $1.69 for every dollar of revenue it generated in 2025 and is projecting $25 billion in cash burn this year. Even its $200/month Pro plan - the most expensive consumer AI subscription on the market - loses money on heavy users. Anthropic's gross margins were negative 94% in 2024, and its CEO has said publicly that if growth slips from 10x to 5x per year, the company goes bankrupt. These aren't scrappy startups - OpenAI just closed $122 billion at an $852 billion valuation - but even at that scale, the math is tight. We've all seen subsidized tech before. The question that keeps coming up is what happens when this subsidy stops.
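To make the quoted figures concrete, here is a minimal sketch of the unit economics implied by "spent $1.69 for every dollar of revenue." The numbers come from the teaser above; treating all spend as cost of revenue is a simplifying assumption (real accounting separates COGS from R&D and other operating expenses), so the resulting margin is illustrative, not audited data.

```python
# Illustrative unit economics, using the ratio quoted in the article:
# $1.69 of spend per $1.00 of revenue (assumption: all spend counted as cost).
revenue = 1.00
cost_per_revenue_dollar = 1.69

loss = cost_per_revenue_dollar - revenue                  # cash lost per $1 earned
gross_margin = (revenue - cost_per_revenue_dollar) / revenue

print(f"loss per $1 of revenue: ${loss:.2f}")             # $0.69
print(f"implied margin: {gross_margin:.0%}")              # -69%
```

Under this simplified view, every dollar of revenue costs 69 cents more than it brings in, which is why heavy users on flat-rate plans are described as loss-making.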

STEEP: Your repo's fortune, steeped in truth.
This is a submission for the DEV April Fools Challenge. What I Built: Think teapot. Think tea. Think Ig Nobel. Think esoteric. Think absolutely useless. Think... Harry Potter?... Professor Trelawney?... divination! Tea leaf reading. For GitHub repos. That's Steep. Paste a public GitHub repo URL. Steep fetches your commit history, file tree, languages, README, and contributors. It finds patterns in the data and maps them to real tasseography symbols, the same symbols tea leaf readers have used for centuries. Mountain. Skull. Heart. Snake. Teacup. Then Madame Steep reads them. Madame Steep is an AI fortune teller powered by the Gemini API. She trained at a prestigious academy (she won't say which) and pivoted to software divination when she realized codebases contain more suffering than any teac…
More in Models

Understanding Attention Mechanisms – Part 6: Final Step in Decoding
In the previous article, we obtained the initial output, but we didn’t receive the EOS token yet. To get that, we need to unroll the embedding layer and the LSTMs in the decoder, and then feed the translated word “vamos” into the decoder’s unrolled embedding layer. After that, we follow the same process as before. But this time, we use the encoded values for “vamos”. The second output from the decoder is EOS, which means we are done decoding. When we add attention to an encoder-decoder model, the encoder mostly stays the same. However, during each step of decoding, the model has access to the individual encodings for each input word. We use similarity scores and the softmax function to determine what percentage of each encoded input word should be used to predict the next output word.
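The similarity-plus-softmax step described above can be sketched in a few lines of NumPy. This is a toy example with random vectors, not the article's actual model: `encoder_outputs` stands in for the per-word encodings, `decoder_state` for the decoder's current hidden state, and dot products serve as the similarity scores (other attention variants use learned scoring functions).

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - np.max(x))
    return e / e.sum()

rng = np.random.default_rng(0)
encoder_outputs = rng.standard_normal((3, 4))  # encodings for a 3-word input, dim 4
decoder_state = rng.standard_normal(4)         # current decoder hidden state

# Similarity score for each input word: dot product with the decoder state.
scores = encoder_outputs @ decoder_state

# Softmax turns the scores into percentages that sum to 1.
weights = softmax(scores)

# Context vector: weighted sum of the encodings, used to predict the next word.
context = weights @ encoder_outputs

print(weights)
print(context)
```

The weights are exactly the "percentage of each encoded input word" the article refers to: they always sum to 1, and a higher similarity score gives that word's encoding more influence on the next prediction.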

I Built a Multi-Agent AI Runtime in Go Because Python Wasn't an Option
The idea that started everything: Some weeks ago, I was thinking about Infrastructure as Code. The reason IaC became so widely adopted is not that it's technically superior to clicking through a cloud console. It's that it removed the barrier between intent and execution. You write what you want, not how to do it. A DevOps engineer doesn't need to understand the internals of how an EC2 instance is provisioned — they write a YAML file, and the machine figures it out. I started wondering: why doesn't this exist for AI agents? If I want to run a multi-agent workflow today, I have two choices. I learn Python and use LangGraph or CrewAI, or I build my own tooling from scratch. Neither option is satisfying. The first forces me into an ecosystem and a language I might not want. The second…



