Lilly and Novo Show How AI Is Rewiring Big Pharma - PYMNTS.com
<a href="https://news.google.com/rss/articles/CBMipwFBVV95cUxOTk1uZ3UzWl80ZloxU2xkT1laUGpaQXZjZW8tbmo0ZlFLYUpMaG1ObURUU3h5VkpXNHp3Z0tHakVqVXBQMzRhUTh0X250cXRHaHI2S2JadU56LVRwQXZJMXhLdm1IRGptLUtXakpBMFc4ZmpRcGVERk05bEF5WWhFUDNNWnlDYVBjWnJFMnhCckhKTkNNb1hWVnZwcnZtZU5td3dmcVdjQQ?oc=5" target="_blank">Lilly and Novo Show How AI Is Rewiring Big Pharma</a> <font color="#6f6f6f">PYMNTS.com</font>

More in Models
Anyone else notice qwen 3.5 is a lying little shit
Any time I catch it messing up, it just lies and tries to hide its mistakes. This is the first model I've caught doing this multiple times. I've had LLMs hallucinate or be just completely wrong, but Qwen will say it did something; when I call it out, it doubles down on its lie ("I did do it like you asked"), and only when I call it out again does it half-admit to being wrong. It's kind of funny how much it doesn't want to admit it didn't do what it was supposed to. submitted by /u/Cat5edope [link] [comments]
Running SmolLM2‑360M on a Samsung Galaxy Watch 4 (380MB RAM) – 74% RAM reduction in llama.cpp
I've got SmolLM2‑360M running on a Samsung Galaxy Watch 4 Classic (about 380MB free RAM) by tweaking llama.cpp and the underlying ggml memory model.

By default, the model was being loaded twice in RAM: once via the APK's mmap page cache and again via ggml's tensor allocations, peaking at 524MB for a 270MB model. The fix: I pass host_ptr into llama_model_params, so CPU tensors point directly into the mmap region and only Vulkan tensors are copied.

On real hardware this gives:

- Peak RAM: 524MB → 142MB (74% reduction)
- First boot: 19s → 11s
- Second boot: ~2.5s (mmap + KV cache warm)

Code: https://github.com/Perinban/llama.cpp/tree/axon‑dev

Longer write‑up with VmRSS traces and design notes: https://www.linkedin.com/posts/perinban-parameshwaran_machinelearning-llm-embeddedai-activity-74453741179
