Decoding Functional Networks for Visual Categories via GNNs
Abstract: Understanding how large-scale brain networks represent visual categories is fundamental to linking perception and cortical organization. Using high-resolution 7T fMRI from the Natural Scenes Dataset, we construct parcel-level functional graphs and train a signed Graph Neural Network that models both positive and negative interactions, with a sparse edge mask and class-specific saliency. The model accurately decodes category-specific functional connectivity states (sports, food, vehicles) and reveals reproducible, biologically meaningful subnetworks along the ventral and dorsal visual pathways. This framework bridges machine learning and neuroscience by extending voxel-level category selectivity to a connectivity-based representation of visual processing.
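The abstract does not specify the architecture beyond "a signed Graph Neural Network that models both positive and negative interactions." A minimal NumPy sketch of one plausible signed message-passing layer, splitting the signed functional-connectivity matrix into positive and negative parts and propagating each separately (in the spirit of SGCN-style models), might look like the following. All function and variable names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def signed_graph_conv(H, W, Wp_proj, Wn_proj):
    """One sketched signed graph-convolution layer.

    H:  (n_parcels, n_features) node features
    W:  (n_parcels, n_parcels) signed functional connectivity
    Wp_proj, Wn_proj: learnable projections for the positive and
    negative message-passing paths (hypothetical parameters).
    """
    A_pos = np.maximum(W, 0.0)    # positive interactions only
    A_neg = np.maximum(-W, 0.0)   # magnitudes of negative interactions

    def row_normalize(A):
        deg = A.sum(axis=1, keepdims=True)
        return A / np.clip(deg, 1e-8, None)  # avoid division by zero

    # propagate over each signed sub-graph separately, then concatenate
    Hp = row_normalize(A_pos) @ H @ Wp_proj
    Hn = row_normalize(A_neg) @ H @ Wn_proj
    return np.tanh(np.concatenate([Hp, Hn], axis=1))

# toy example: 4 parcels, 3 input features, 2 output features per path
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))
W = rng.standard_normal((4, 4))
np.fill_diagonal(W, 0.0)  # no self-connections
out = signed_graph_conv(H, W,
                        rng.standard_normal((3, 2)),
                        rng.standard_normal((3, 2)))
print(out.shape)  # (4, 4): 2 positive-path + 2 negative-path features
```

The sparse edge mask and class-specific saliency described in the abstract would sit on top of such a layer (e.g., a learned mask multiplying `W` before the split); those components are not sketched here.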
Comments: Accepted for publication in IEEE International Symposium on Biomedical Imaging (ISBI) 2026
Subjects:
Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.28931 [cs.CV]
(or arXiv:2603.28931v1 [cs.CV] for this version)
https://doi.org/10.48550/arXiv.2603.28931
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Shira Karmi [view email] [v1] Mon, 30 Mar 2026 19:10:28 UTC (3,862 KB)