How Rules for Publicly Available Data Are Shaping the Future of AI | Reports & Briefings | Mar 13, 2026 - Information Technology and Innovation Foundation (ITIF)
Could not retrieve the full article text.

A robust vision language model for molecular status prediction and radiology report generation in adult-type diffuse gliomas
npj Digital Medicine, published online 02 April 2026; doi:10.1038/s41746-026-02581-x

Ran Qwen 3.5 27B via Ollama as a persistent background agent for 30 days. Not a demo. Honest results.
I wanted to know whether a local LLM could handle recurring background tasks reliably over an extended period: not as a chatbot, but as a persistent worker that runs scheduled jobs, maintains context across sessions, and routes tool calls without human prompting. So I ran it for 30 days on real tasks from my actual workflow.

Model: Qwen 3.5 27B via Ollama. Hardware: Mac with 32GB unified memory, though the architecture works on any machine that can run a 27B+ model locally.

Setup

Each agent runs in a persistent workspace with its own memory, skills, and MCP sidecars. The workspace structure separates human-authored instructions (AGENTS.md), model config and provider settings (workspace.yaml), modular capabilities (skills/), and installed workspace apps (apps/). Memory lives in a separate
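A minimal sketch of that workspace skeleton, assuming a POSIX shell. Only the four names the post gives (AGENTS.md, workspace.yaml, skills/, apps/) come from the author; the directory name and all file contents here are illustrative assumptions.

```shell
# Create the per-agent workspace layout described in the post.
# File contents and the "agent-workspace" name are illustrative only.
mkdir -p agent-workspace/skills agent-workspace/apps

cat > agent-workspace/AGENTS.md <<'EOF'
# Human-authored instructions for the background agent
- Run scheduled jobs without being prompted.
- Maintain context across sessions.
EOF

cat > agent-workspace/workspace.yaml <<'EOF'
model: qwen3.5:27b   # served locally via Ollama (tag is an assumption)
provider: ollama
EOF

ls agent-workspace
```

Keeping human instructions, model config, skills, and apps in separate files means each can be versioned and edited independently of the agent's accumulated memory.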

p-e-w/gemma-4-E2B-it-heretic-ara: Gemma 4's defenses shredded by Heretic's new ARA method 90 minutes after the official release
Google's Gemma models have long been known for their strong "alignment" (censorship). I am happy to report that even the latest iteration, Gemma 4, is not immune to Heretic's new Arbitrary-Rank Ablation (ARA) method, which uses matrix optimization to suppress refusals. Here is the result: https://huggingface.co/p-e-w/gemma-4-E2B-it-heretic-ara

And yes, it absolutely does work: it answers questions properly, with few if any evasions as far as I can tell, and there is no obvious model damage either.

What you need to reproduce (and, presumably, to process the other models as well):

git clone -b ara https://github.com/p-e-w/heretic.git
cd heretic
pip install .
pip install git+https://github.com/huggingface/transformers.git
heretic google/gemma-4-E2B-it

From my limited experiments (hey, it's only been