ByteShape Qwen 3.5 9B: A Guide to Picking the Best Quant for Your Hardware
<table> <tr><td> <a href="https://www.reddit.com/r/LocalLLaMA/comments/1s8weo2/byteshape_qwen_35_9b_a_guide_to_picking_the_best/"> <img src="https://preview.redd.it/rdaoe5qudfsg1.png?width=640&crop=smart&auto=webp&s=f6b1c3b27aaa6c79d3e48ad77fb6e3e0f4c2f493" alt="ByteShape Qwen 3.5 9B: A Guide to Picking the Best Quant for Your Hardware" title="ByteShape Qwen 3.5 9B: A Guide to Picking the Best Quant for Your Hardware" /> </a> </td><td> <div class="md"><p>Hey <a href="/r/LocalLLaMA">r/LocalLLaMA</a></p> <p>We’ve released our ByteShape Qwen 3.5 9B quantizations.</p> <p><a href="https://byteshape.com/blogs/Qwen3.5-9B/">Read our Blog</a> / <a href="https://huggingface.co/byteshape/Qwen3.5-9B-GGUF">Download Models</a></p> <p>The goal is not just to <em>publish files</em>…</p></div> </td></tr> </table>
Could not retrieve the full article text. Read the full post on Reddit r/LocalLLaMA: https://www.reddit.com/r/LocalLLaMA/comments/1s8weo2/byteshape_qwen_35_9b_a_guide_to_picking_the_best/
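The post is a guide to matching a quant to your hardware. As a rough back-of-the-envelope companion (this sketch is not from the ByteShape post), you can estimate whether a given GGUF quant of a 9B-parameter model fits in a VRAM budget from its approximate bits-per-weight; the figures and the `overhead_gb` fudge factor below are typical assumptions, not measurements:

```python
# Hedged sketch: estimate whether a GGUF quant of a 9B-parameter model
# fits in a given amount of VRAM. Bits-per-weight values are approximate,
# commonly cited figures for llama.cpp quant types; KV cache and runtime
# overhead are lumped into a single rough `overhead_gb` budget.

BITS_PER_WEIGHT = {
    "Q8_0": 8.5,
    "Q6_K": 6.56,
    "Q5_K_M": 5.69,
    "Q4_K_M": 4.85,
    "Q3_K_M": 3.91,
    "IQ2_M": 2.7,
}

def weight_size_gb(n_params: float, bpw: float) -> float:
    """Approximate size of the weights alone (no KV cache, no activations)."""
    return n_params * bpw / 8 / 1e9

def quants_that_fit(n_params: float, vram_gb: float, overhead_gb: float = 1.5):
    """Quant types whose weights plus a rough overhead budget fit in VRAM."""
    return [
        q for q, bpw in BITS_PER_WEIGHT.items()
        if weight_size_gb(n_params, bpw) + overhead_gb <= vram_gb
    ]

if __name__ == "__main__":
    for q, bpw in BITS_PER_WEIGHT.items():
        print(f"{q:8s} ~{weight_size_gb(9e9, bpw):.1f} GB weights")
    print("Fits in 8 GB:", quants_that_fit(9e9, 8.0))
```

This only bounds the weights; long contexts grow the KV cache well past the fixed overhead assumed here, so treat the output as a starting shortlist, not a verdict.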

More about: llama, model, benchmark
Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]
Hey guys, I’m the same creator of Netryx V2, the geolocation tool. I’ve been working on something new called COGNEX. It learns how a person reacts to situations, then uses that pattern to simulate how they would respond to something new. You collect real stimulus and response pairs. A stimulus is an event. A response is what they said or did. The key is linking them properly. Then you convert both into structured signals instead of raw text. This is where TRIBE v2 comes in. It was released by Meta about two weeks ago, trained on fMRI scan data, and it can take text, audio, images, and video and estimate how a human brain would process that input. On its own, it reflects an average brain. It does not know the individual. COGNEX uses TRIBE to first map every stimulus and response into this s
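The snippet describes a pipeline: collect real stimulus/response pairs, map both into structured signals (via TRIBE in the original), then match a new stimulus against the stored pattern. A minimal, purely illustrative sketch of that retrieval step is below — `embed`, `Pair`, and `predict_response` are hypothetical names, and the character-hash embedding is a stand-in for TRIBE's brain-signal mapping, which is not shown in the post:

```python
# Hypothetical sketch of a stimulus -> response matcher, NOT COGNEX's actual API.
# `embed` is a toy placeholder; the post says a learned model (TRIBE v2) does
# this mapping in the real system.
from dataclasses import dataclass
import math

@dataclass
class Pair:
    stimulus: str   # an event
    response: str   # what the person said or did

def embed(text: str, dim: int = 16) -> list[float]:
    # Toy embedding: position-offset character-hash buckets, L2-normalized.
    v = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        v[(ord(ch) + i) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def predict_response(history: list[Pair], new_stimulus: str) -> str:
    """Return the stored response whose stimulus is closest (cosine) to the new event."""
    q = embed(new_stimulus)
    best = max(history, key=lambda p: sum(a * b for a, b in zip(embed(p.stimulus), q)))
    return best.response

history = [
    Pair("criticized in an interview", "deflects with a joke"),
    Pair("praised by a rival", "responds with gratitude"),
]
print(predict_response(history, "criticized on a podcast"))
```

The design point the snippet makes — linking each stimulus to its response and comparing in a structured signal space rather than raw text — is what the nearest-neighbour lookup stands in for here.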
[D] ICML Rebuttal Question
I am currently working on my response for the rebuttal acknowledgments for ICML, and I am unsure how to handle the strawman argument that the method is not "novel". We were able to address all other concerns, but the reviewers keep returning to this one. The issue is that our approach is largely novel. We outperform all baselines, including a set of baselines our method should not have been able to outperform. We achieve this through unexpected means, and we could pinpoint exactly why. Everyone in our field is surprised by these results and says they are sort of groundbreaking for the field. However, we were able to do this by combining existing components that were never used in our domain. We also introduced novel components, but the


