Ollama adopts MLX for faster AI performance on Apple silicon Macs - 9to5Mac
<a href="https://news.google.com/rss/articles/CBMingFBVV95cUxPWDZ6OFlUZEQ3QnU0d0ptcWZXMl9FMDFFWTJEQ2NqQ2VTV3BZNVV3cTcxaHZibWY2NTNfcUpidkhsRGplM1dFNlU3a3VNODJYM1FhVGJsVGo2SGNaNHdwcDBDb3NrYjk0djNkM0Eyb0FXTXRqQjR4WXk2SXBYTW9CNkxUOTZOcGk3OFV5dFk3cHVGeWZ0OFVLc05sTkNPdw?oc=5" target="_blank">Ollama adopts MLX for faster AI performance on Apple silicon Macs</a> (9to5Mac)

Good local models that can work locally on my system with tools support
So I have a gaming laptop: RTX 4070 (12 GB VRAM) + 32 GB RAM. I used llmfit to identify which models I can use on my rig, and almost all the runnable ones seem dumb when you ask them to read a file and execute something afterwards: some do nothing, some search the web, some understand that they need to read a file but can't seem to go beyond that. The ones suggested by Claude or Gemini are largely the same ones I am already trying. I am using Ollama + Claude Code. I tried: qwen2.5-coder:7b, qwen3.5:9b, deepseek-r1:8b-0528-qwen3-q4_K_M, unsloth/qwen3-30B-A3B:Q4_K_M. With the last one, I need to disable thinking in Claude for it to actually start working, and it still fails! My plan is to plan using a frontier model, then execute said plan with a local model (not major projects or code bases, just weekend ideas).
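For context on what "tools support" means here: a minimal sketch of the request body a client sends to Ollama's `/api/chat` endpoint when it wants the model to call tools. The `read_file` tool below is a hypothetical example made up for illustration; the overall shape (a `messages` list plus a `tools` list whose entries carry JSON-schema `parameters`) follows Ollama's documented tool-calling format, assuming a tool-capable model is loaded.

```python
import json

def build_tool_chat_request(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint with one tool.

    'read_file' is a hypothetical tool used only to show the schema;
    the server forwards the tool definitions to the model, which may
    respond with a tool_call instead of plain text.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "read_file",
                    "description": "Read a text file and return its contents",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "path": {
                                "type": "string",
                                "description": "Path of the file to read",
                            }
                        },
                        "required": ["path"],
                    },
                },
            }
        ],
        "stream": False,
    }

# Build the payload; POSTing it to http://localhost:11434/api/chat is
# left out so the sketch stays runnable without a server.
body = build_tool_chat_request("qwen2.5-coder:7b", "Read notes.txt and summarize it")
print(json.dumps(body, indent=2))
```

Whether the model actually emits a well-formed tool call back is exactly the capability gap the post describes: small quantized models often produce the tool schema-following behavior inconsistently.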

What's the most optimized engine to run on a H100?
Hey guys, I was wondering: what is the best/fastest engine to run LLMs on a single H100? I'm guessing vLLM is great but not the fastest. Thanks in advance. I'm running a Llama 3.1 8B model. submitted by /u/Obamos75
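"Fastest" only means something relative to a measurement, so a minimal, engine-agnostic harness for comparing candidates (vLLM, TensorRT-LLM, SGLang, etc.) on the same prompt set can help. The `generate` callable and the stub engine below are assumptions for illustration; in practice you would wrap each engine's own generate call so it returns the number of tokens produced.

```python
import time

def measure_throughput(generate, prompts):
    """Time a generate(prompt) callable over a list of prompts and
    return aggregate tokens/sec.

    'generate' must return the number of tokens it produced for a
    prompt; here a stub stands in for a real engine call so the
    sketch runs anywhere.
    """
    start = time.perf_counter()
    total_tokens = sum(generate(p) for p in prompts)
    elapsed = time.perf_counter() - start
    return total_tokens / elapsed

# Stub engine: pretend every prompt yields 128 output tokens.
tps = measure_throughput(lambda prompt: 128, ["hello"] * 4)
print(f"{tps:.1f} tokens/sec")
```

For batch-serving workloads, tokens/sec across concurrent requests (not single-request latency) is usually the number that separates engines on an H100.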
[D] From the Web to World Models: The New Layer of Power
Tim Berners-Lee's involvement in the emerging artificial intelligence ecosystem is not an isolated event; it is a signal of historical continuity. The World Wide Web did not merely democratize access to information; it established the first great infrastructure for organizing human knowledge at global scale. It changed not only what we know, but how we know it. Today, that logic is evolving. Researchers such as Yann LeCun are driving a new paradigm: artificial intelligence models that do not limit themselves to processing language, but aim to understand the physical world. So-called world models represent a qualitative shift: from pattern recognition to causal inference, from textual prediction to an operational understanding of reality. This marks the shift from