<a href="https://news.google.com/rss/articles/CBMingFBVV95cUxPc3dwUTRlbTQ1NHUycjA2dkVrSnM0Nnl4eFlYRGxnYnkxVU9XbFZsMkF5UUlfeEkyMWI0MzF2ZEdna0NjbVhyaWVtOUtYRHFZX3NmLW1HNk44VENWb3lBNWNvVkd5dC1aSFFRbHY3alo1enhSU3dLeHNteUEycEY4TWVpTUp6YjJtcU5McDFENXpVTEZUM1Q4NWRsNFJFUQ?oc=5" target="_blank">Art: Creative Resistance in the Age of AI and Authoritarianism</a> <font color="#6f6f6f">yellowscene.com</font>

More in Open Source AI

A Quick Note on Gemma 4 Image Settings in Llama.cpp
In my last post, I mentioned using --image-min-tokens to increase the quality of image responses from Qwen3.5. I went to load Gemma 4 the same way, and hit an error:

[58175] srv process_chun: processing image...
[58175] encoding image slice...
[58175] image slice encoded in 7490 ms
[58175] decoding image batch 1/2, n_tokens_batch = 2048
[58175] /Users/socg/llama.cpp-b8639/src/llama-context.cpp:1597: GGML_ASSERT((cparams.causal_attn || cparams.n_ubatch >= n_tokens_all) && "non-causal attention requires n_ubatch >= n_tokens") failed
[58175] WARNING: Using native backtrace. Set GGML_BACKTRACE_LLDB for more info.
[58175] WARNING: GGML_BACKTRACE_LLDB may cause native macOS Terminal.app to crash.
[58175] See: https://github.com/ggml-org/llama.cpp/pull/17869
[58175] 0 libggml-base.0.9.11.dylib 0
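The assertion message itself points at the likely fix: with non-causal (vision) attention, the micro-batch size n_ubatch must be at least as large as the number of tokens submitted in one image batch. A minimal sketch of relaunching llama-server with a larger micro-batch via the real -ub / --ubatch-size flag — the model paths are placeholders and 4096 is an assumed value chosen only to cover the 2048-token image batches shown in the log above, not a recommendation from the original post:

```shell
# Raise the micro-batch so that n_ubatch >= the image token count per batch.
# Paths and the 4096 value are illustrative, not from the original post.
llama-server \
  -m gemma-4.gguf \
  --mmproj mmproj-gemma-4.gguf \
  --image-min-tokens 1024 \
  -ub 4096
```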
v0.16.0
Axolotl v0.16.0 Release Notes

We're very excited to share this packed release: ~80 new commits since v0.15.0 (March 6, 2026).

Highlights

Async GRPO — Asynchronous Reinforcement Learning Training (#3486)

Full support for asynchronous Group Relative Policy Optimization with vLLM integration. Includes an async data producer with replay buffer, streaming partial-batch training, native LoRA weight sync to vLLM, and FP8 compatibility. Supports multi-GPU via FSDP1/FSDP2 and DeepSpeed ZeRO-3. Achieves up to 58% faster step times (1.59 s/step vs. the 3.79 s baseline on Qwen2-0.5B).

Optimization                    Step Time    Improvement
Baseline                        3.79 s       —
+ Batched weight sync           2.52 s       34% faster
+ Liger kernel fusion           2.01 s       47% faster
+ Streaming partial batch       1.79 s       53% faster
+ Element chunking + re-roll fix (500 steps)
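For readers unfamiliar with GRPO, the core idea is that each sampled completion's reward is normalized against the other completions in its own group, rather than against a learned value baseline. A minimal illustrative sketch of that group-relative advantage computation — this is the generic algorithm, not Axolotl's actual implementation, and the function name is ours:

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each reward against its group's
    mean and standard deviation, so completions are compared only to
    their siblings sampled from the same prompt."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:
        # All rewards equal: no learning signal for this group.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

# Four completions for one prompt, scored by a reward model.
advantages = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Because the baseline is the group mean, the advantages always sum to zero within a group: above-average completions are reinforced and below-average ones are penalized, with no separate critic network required.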


