it looks like it will be soon
https://github.com/ggml-org/llama.cpp/pull/21309 (thanks rerri), tracking the HF side at https://github.com/huggingface/transformers/pull/45192

> Gemma 4 is a multimodal model with pretrained and instruction-tuned variants, available in 1B, 13B, and 27B parameters. The architecture is mostly the same as the previous Gemma versions. The key differences are a vision processor that can output images at a fixed token budget and a spatial 2D RoPE to encode vision-specific information across the height and width axes.

This PR probably only covers the dense models, so MoE support would need a separate one.

submitted by /u/jacek2023
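For intuition on the "fixed token budget" idea: a vision processor can resize each image so that its patch grid never produces more than a set number of tokens, regardless of the input resolution. The sketch below is a minimal illustration of that scheme; the patch size, budget, and rounding rules are assumptions for the example, not Gemma 4's actual configuration.

```python
import math

def fit_to_token_budget(height: int, width: int,
                        patch: int = 14, budget: int = 256) -> tuple[int, int]:
    """Pick a resize resolution whose patch grid fits a fixed token budget.

    `patch` and `budget` are illustrative values, not Gemma 4's real
    configuration. Aspect ratio is roughly preserved; dimensions snap
    down to multiples of the patch size.
    """
    tokens = (height / patch) * (width / patch)
    scale = min(1.0, math.sqrt(budget / tokens))  # shrink only if over budget
    new_h = max(patch, int(height * scale) // patch * patch)
    new_w = max(patch, int(width * scale) // patch * patch)
    return new_h, new_w

# e.g. a 1024x768 photo is resized so its patch grid stays within 256 tokens
print(fit_to_token_budget(1024, 768))  # -> (252, 182), i.e. an 18x13 grid = 234 tokens
```

The payoff is a predictable prompt length: no matter how large the uploaded image is, the vision tower contributes at most `budget` tokens to the context.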
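And a minimal sketch of what a spatial 2D RoPE can look like, assuming the common axial formulation where one half of each head dimension is rotated by the patch's row index and the other half by its column index; Gemma 4's exact scheme may differ, and `rope_2d` here is a hypothetical helper, not code from either PR.

```python
import torch

def rope_1d(x: torch.Tensor, pos: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Standard rotary embedding along one axis.
    x: (seq, dim) with even dim; pos: (seq,) integer positions."""
    dim = x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    angles = pos[:, None].float() * inv_freq[None, :]  # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def rope_2d(x: torch.Tensor, rows: torch.Tensor, cols: torch.Tensor) -> torch.Tensor:
    """Axial 2D RoPE: first half of the head dim encodes the patch's
    row position, second half encodes its column position."""
    half = x.shape[-1] // 2
    return torch.cat([rope_1d(x[..., :half], rows),
                      rope_1d(x[..., half:], cols)], dim=-1)

# toy example: queries for a 3x4 grid of image patches, head dim 8
h, w, dim = 3, 4, 8
q = torch.randn(h * w, dim)
rows = torch.arange(h).repeat_interleave(w)  # 0,0,0,0,1,1,1,1,2,2,2,2
cols = torch.arange(w).repeat(h)             # 0,1,2,3,0,1,2,3,...
q_rot = rope_2d(q, rows, cols)
```

The point of splitting by axis is that attention scores between patches then depend on their relative offsets in both height and width, not just on a flattened 1D position.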
Read on Reddit: https://www.reddit.com/r/LocalLLaMA/comments/1sajqb9/it_looks_like_it_will_be_soon/
