1.13.0a6
What's Changed
Documentation
- Fix RBAC permission levels to match actual UI options (#5210)
- Update changelog and version for v1.13.0a5 (#5200)
Performance
- Reduce framework overhead by implementing a lazy event bus and skipping tracing when disabled (#5187)
Contributors
@alex-clawd, @joaomdmoura, @lucasgomide