April Fools’ Day: Viral Google Gemini Nano Banana prompts to prank your friends and family on 1st April - news24online.com

You test your code. Why aren’t you testing your AI instructions?
Why instruction quality matters more than model choice, and a tool to measure it.

Every team using AI coding tools writes instruction files: CLAUDE.md for Claude Code, AGENTS.md for Codex, copilot-instructions.md for GitHub Copilot, .cursorrules for Cursor. You spend time crafting these files, change a paragraph, push it, and hope for the best. Your codebase has tests. Your APIs have contracts. Your AI instructions have hope. I built agenteval to fix that.

The variable nobody is testing

A recent study tested three agent frameworks running the same model on 731 coding problems. Same model. Same tasks. The only difference was the instruction scaffolding. The spread was 17 points. We obsess over which model to use. Sonnet vs Opu…
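The excerpt doesn't show agenteval's actual interface, but the idea it describes, treating an instruction file as a variable and measuring it, can be sketched in a few lines: run the same task suite under each instruction variant and compare pass rates. `run_agent`, `Task`, and `eval_instructions` here are hypothetical names, not agenteval's API.

```python
# Hypothetical sketch of instruction-file evaluation (not agenteval's code):
# score each instruction variant by pass rate over a fixed task suite.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    prompt: str
    check: Callable[[str], bool]   # does the agent's output pass this task?

def eval_instructions(instructions: str, tasks: list[Task],
                      run_agent: Callable[[str, str], str]) -> float:
    """Pass rate for one instruction file; run_agent(instructions, prompt)
    stands in for invoking your coding agent with that instruction file."""
    passed = sum(t.check(run_agent(instructions, t.prompt)) for t in tasks)
    return passed / len(tasks)
```

With the same model and tasks held fixed, comparing `eval_instructions` across two versions of a CLAUDE.md isolates exactly the variable the cited study found worth 17 points: the instruction scaffolding.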

I Built a Visual Spec-Driven Development Extension for VS Code That Works With Any LLM
The Problem

If you've tried GitHub's Spec Kit, you know the value of spec-driven development: define requirements before coding, let AI generate structured specs, plans, and tasks. It's a great workflow. But there's a gap.

Spec Kit works through slash commands in chat. No visual UI, no progress tracking, no approval workflow. You type /speckit.specify, read the output, type /speckit.plan, and so on. It works, but it's not visual. Kiro (Amazon's VS Code fork) offers a visual experience — but locks you into their specific LLM and requires leaving VS Code for a custom fork.

I wanted both: a visual workflow inside VS Code that works with any LLM I choose. So I built Caramelo.

What Caramelo Does

Caramelo…
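The excerpt cuts off before showing how Caramelo works internally, but the workflow it describes, generate a spec, get user approval, unlock the next stage, is a small state machine. The sketch below is a rough model of such a gated specify → plan → tasks pipeline under stated assumptions; it is not Caramelo's actual code, and the class and method names are invented for illustration.

```python
# Rough model (not Caramelo's implementation) of a spec-driven workflow with
# approval gates: each stage's artifact must be approved before the next
# stage unlocks. Stage names mirror Spec Kit's specify -> plan -> tasks flow.
from enum import Enum

class Stage(Enum):
    SPECIFY = 0
    PLAN = 1
    TASKS = 2
    DONE = 3

class SpecWorkflow:
    ORDER = [Stage.SPECIFY, Stage.PLAN, Stage.TASKS]

    def __init__(self):
        self.stage = Stage.SPECIFY
        self.artifacts = {}      # stage -> generated text awaiting approval
        self.approved = set()

    def submit(self, stage: Stage, output: str) -> None:
        """Record LLM output for the current stage (any LLM can produce it)."""
        if stage is not self.stage:
            raise ValueError(f"expected output for {self.stage}, got {stage}")
        self.artifacts[stage] = output

    def approve(self, stage: Stage) -> None:
        """User approval unlocks the next stage in the pipeline."""
        if stage not in self.artifacts:
            raise ValueError("nothing to approve for this stage")
        self.approved.add(stage)
        i = self.ORDER.index(stage)
        self.stage = self.ORDER[i + 1] if i + 1 < len(self.ORDER) else Stage.DONE
```

The point of the gate is the one Spec Kit's chat flow lacks: progress is explicit state a UI can render, and nothing advances without an approval step.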
More in Models
langchain-core==1.2.26
Changes since langchain-core==1.2.25:

- release(core): 1.2.26 (#36511)
- fix(core): add init validator and serialization mappings for Bedrock models (#34510)
- feat(core): add ChatBaseten to serializable mapping (#36510)
- chore(core): drop gpt-3.5-turbo from docstrings (#36497)
- fix(core): correct parameter names in filter_messages docstring example (#36462)
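One of the fixes corrects parameter names in the `filter_messages` docstring example. To illustrate the filtering semantics without requiring langchain-core to be installed, here is a pure-Python stand-in; the real `langchain_core.messages.filter_messages` operates on message objects and supports additional filters, so treat this only as a sketch of the idea, not the library's API.

```python
# Stand-in (NOT the real langchain_core implementation) showing the
# include/exclude filtering idea behind filter_messages: keep a message only
# if its type is included and its name is not excluded.
def filter_messages(messages, *, include_types=None, exclude_names=None):
    out = []
    for m in messages:
        if include_types is not None and m["type"] not in include_types:
            continue
        if exclude_names is not None and m.get("name") in exclude_names:
            continue
        out.append(m)
    return out
```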
[P] GPU friendly lossless 12-bit BF16 format with 0.03% escape rate and 1 integer ADD decode works for AMD & NVIDIA
Hi everyone, I am from Australia :) I just released a new research prototype. It's a lossless BF16 compression format that stores weights in 12 bits by replacing the 8-bit exponent with a 4-bit group code. For 99.97% of weights, decoding is just one integer ADD. Byte-aligned split storage: true 12-bit per weight, no 16-bit padding waste, and zero HBM read amplification. Yes, 12 bit, not 11 bit!

The main idea was not just "compress weights more", but to make the format GPU-friendly enough to use directly during inference:

- sign + mantissa: exactly 1 byte per element
- group: two nibbles packed into exactly 1 byte too

1.33x smaller than BF16. Fixed-rate 12-bit per weight, no…
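The post doesn't include the codec itself, but the core trick it describes can be sketched: because neural-network weight exponents cluster tightly, an 8-bit BF16 exponent can usually be replaced by a 4-bit code relative to a per-tensor base, making decode a single integer ADD. The per-tensor-minimum base and the escape handling below are my assumptions; the real format's group layout and escape encoding are the author's and are not shown here.

```python
# Sketch of the 12-bit idea, assuming exponents fit a 16-value window:
# 1 byte of sign+mantissa per weight, a 4-bit exponent code per weight,
# decode = base + code (one integer ADD). Not the released prototype's code.
import numpy as np

def encode(weights: np.ndarray, window: int = 16):
    bits = (weights.astype(np.float32).view(np.uint32) >> 16).astype(np.uint16)
    sign = (bits >> 15) & 0x1
    exp = (bits >> 7) & 0xFF
    mant = bits & 0x7F
    base = int(exp.min())                       # per-tensor base (assumption)
    code = exp.astype(np.int32) - base
    escape = (code < 0) | (code >= window)      # the post's 0.03% escape path
    sm = ((sign << 7) | mant).astype(np.uint8)  # 1 byte: sign + 7-bit mantissa
    return sm, code.astype(np.uint8), base, escape

def decode(sm: np.ndarray, code: np.ndarray, base: int) -> np.ndarray:
    exp = (base + code.astype(np.int32)).astype(np.uint16)  # one integer ADD
    sm16 = sm.astype(np.uint16)
    bits = ((sm16 & 0x80) << 8) | (exp << 7) | (sm16 & 0x7F)
    return (bits.astype(np.uint32) << 16).view(np.float32)
```

The arithmetic checks out with the post's claims: 8 bits of sign+mantissa plus a 4-bit code is 12 bits per weight, and 16/12 ≈ 1.33x smaller than BF16.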

Quoting Greg Kroah-Hartman
Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us. Something happened a month ago, and the world switched. Now we have real reports. All open source projects have real reports that are made with AI, but they're good, and they're real.

Greg Kroah-Hartman, Linux kernel maintainer (bio), in conversation with Steven J. Vaughan-Nichols

Tags: security, linux, generative-ai, ai, llms, ai-security-research

Quoting Daniel Stenberg
The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good. I'm spending hours per day on this now. It's intense.

Daniel Stenberg, lead developer of cURL

Tags: daniel-stenberg, security, curl, generative-ai, ai, llms, ai-security-research

