Turing Winner LeCun’s New ‘World Model’ AI Lab Raises $1B In Europe’s Largest Seed Round Ever - Crunchbase News
Could not retrieve the full article text.

Is 1-bit and TurboQuant the future of OSS? A simulation for Qwen3.5 models.
A simulation of what the Qwen3.5 model family would look like using 1-bit weights and TurboQuant KV-cache compression. The table below shows the results; this would be a revolution:

| Model | Parameters | Q4_K_M File (Current) | KV Cache 256K (Current) | Hypothetical 1-bit Weights | KV Cache 256K with TurboQuant | Hypothetical Total Memory |
|---|---|---|---|---|---|---|
| Qwen3.5-122B-A10B | 122B total / 10B active | 74.99 GB | 81.43 GB | 17.13 GB | 1.07 GB | 18.20 GB |
| Qwen3.5-35B-A3B | 35B total / 3B active | 21.40 GB | 26.77 GB | 4.91 GB | 0.89 GB | 5.81 GB |
| Qwen3.5-27B | 27B | 17.13 GB | 34.31 GB | 3.79 GB | 2.86 GB | 6.65 GB |
| Qwen3.5-9B | 9B | 5.89 GB | 14.48 GB | 1.26 GB | 1.43 GB | 2.69 GB |
| Qwen3.5-4B | 4B | 2.87 GB | 11.46 GB | 0.56 GB | 1.43 GB | 1.99 GB |
| Qwen3.5-2B | 2B | 1.33 GB | 4.55 GB | 0.28 GB | 0.54 GB | 0.82 GB |

submitted by /u/GizmoR13
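The weight-size figures above can be sanity-checked with a back-of-envelope calculation. A minimal sketch, assuming "1-bit" means exactly 1 bit per total parameter with no scale/metadata overhead and GB means 10^9 bytes; the post's numbers sit somewhat higher, presumably because real 1-bit formats carry per-block scaling overhead:

```python
# Back-of-envelope weight-size estimate for the models in the table.
# Assumption (not from the post): 1 bit per total parameter, no overhead,
# GB = 10^9 bytes. The table's 1-bit column is ~10-15% larger than this.

def weight_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight file size in GB (10^9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

models = {
    "Qwen3.5-122B-A10B": 122e9,  # total params, not active
    "Qwen3.5-27B": 27e9,
    "Qwen3.5-2B": 2e9,
}

for name, params in models.items():
    print(f"{name}: ~{weight_gb(params, 1):.2f} GB at 1 bit/param")
```

For Qwen3.5-122B this gives roughly 15.25 GB against the table's 17.13 GB, so the table's figures are plausible once per-block scales and embedding layers kept at higher precision are accounted for.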
SOTA Language Models Under 14B?
Hey guys, I was wondering what recent state-of-the-art small language models are best for general question-answering tasks (diverse topics, including math)? Any good or bad experiences with specific models? Thank you! submitted by /u/No-Mud-1902
[New Model] - CatGen v2 - generate 128px images of cats with this GAN
Hey, r/LocalLLaMA! I am back with a new model: no transformer this time, but a GAN. It is called CatGen v2 and it generates 128x128px images of cats. You can find the full source code, samples, and the final model here: https://huggingface.co/LH-Tech-AI/CatGen-v2 Look at this sample after epoch 165 (trained on a single Kaggle T4 GPU): https://preview.redd.it/t1k3v71auqsg1.png?width=1146&format=png&auto=webp&s=26b4639eb7f9635d8b58a24633f8e4125859fd9e Feedback is very welcome :D submitted by /u/LH-Tech_AI