Announcing agentic AI for healthcare patient engagement in Amazon Connect (Preview) - Amazon Web Services (AWS)

[P] GPU friendly lossless 12-bit BF16 format with 0.03% escape rate and 1 integer ADD decode works for AMD & NVIDIA
Hi everyone, I am from Australia :) I just released a new research prototype. It's a lossless BF16 compression format that stores weights in 12 bits by replacing the 8-bit exponent with a 4-bit group code. For 99.97% of weights, decoding is just one integer ADD. Byte-aligned split storage: true 12 bits per weight, no 16-bit padding waste, and zero HBM read amplification. Yes, 12 bits, not 11! The main idea was not just to "compress weights more", but to make the format GPU-friendly enough to use directly during inference: sign + mantissa take exactly 1 byte per element, and group codes are packed two nibbles to a byte. https://preview.redd.it/qbx94xeeo2tg1.png?width=1536&format=png&auto=webp&s=831da49f6b1729bd0a0e2d1f075786274e5a7398 1.33x smaller than BF16. Fixed-rate 12-bit per weight, no …
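The post doesn't spell out the format's internals, but the general shape of such a scheme can be sketched. The following is a toy round-trip in NumPy under stated assumptions: the 4-bit code indexes a small per-tensor table of the 15 most common exponent bytes, code 15 is reserved as a hypothetical escape that stores the full 16-bit pattern out-of-band, and sign + 7-bit mantissa share one byte. The real format's bit layout (the one that makes decode a single integer ADD on GPU) is not reproduced here; this sketch uses explicit shifts and a table lookup for readability.

```python
import numpy as np

def compress(bits):
    """bits: uint16 array of raw BF16 bit patterns -> split 12-bit storage."""
    bits = np.asarray(bits, dtype=np.uint16)
    exp = ((bits >> 7) & 0xFF).astype(np.uint8)          # 8-bit exponent field
    vals, counts = np.unique(exp, return_counts=True)
    common = vals[np.argsort(-counts)][:15]              # 15 codes + 1 escape code
    table = np.zeros(15, dtype=np.uint8)
    table[:len(common)] = common
    lut = np.full(256, 15, dtype=np.uint8)               # 15 = hypothetical escape
    lut[common] = np.arange(len(common), dtype=np.uint8)
    code = lut[exp]
    escapes = bits[code == 15]                           # rare full 16-bit patterns
    # Sign (bit 15) and 7-bit mantissa share exactly one byte per element.
    sm = (((bits >> 8) & 0x80) | (bits & 0x7F)).astype(np.uint8)
    if len(code) % 2:                                    # pad to pack two nibbles/byte
        code = np.concatenate([code, np.zeros(1, dtype=np.uint8)])
    packed = (code[0::2] | (code[1::2] << 4)).astype(np.uint8)
    return sm, packed, table, escapes

def decompress(sm, packed, table, escapes, n):
    """Lossless reconstruction of the original n BF16 bit patterns."""
    code = np.empty(2 * len(packed), dtype=np.uint8)
    code[0::2] = packed & 0x0F
    code[1::2] = packed >> 4
    code = code[:n]
    hit = code != 15                                     # fast path (no escape)
    exp = np.zeros(n, dtype=np.uint16)
    exp[hit] = table[code[hit]]
    out = ((sm.astype(np.uint16) & 0x80) << 8) | (exp << 7) | (sm & 0x7F)
    out[~hit] = escapes                                  # escape path, in order
    return out
```

Even in this naive form the storage is exactly 1 byte + 1 nibble per weight (1.5 bytes, i.e. 1.33x smaller than BF16's 2 bytes), and both streams stay byte-aligned so a GPU kernel could read them without bit-unaligned loads.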

The cognitive impact of coding agents
A fun thing about recording a podcast with a professional like Lenny Rachitsky is that his team know how to slice the resulting video up into TikTok-sized short-form vertical videos. Here's one he shared on Twitter today, which ended up attracting over 1.1m views! That was 48 seconds. Our full conversation lasted 1 hour 40 minutes. Tags: ai-ethics, coding-agents, agentic-engineering, generative-ai, podcast-appearances, ai, llms, cognitive-debt

Vulnerability Research Is Cooked
Thomas Ptacek's take on the sudden and enormous impact the latest frontier models are having on the field of vulnerability research:

Within the next few months, coding agents will drastically alter both the practice and the economics of exploit development. Frontier model improvement won't be a slow burn, but rather a step function. Substantial amounts of high-impact vulnerability research (maybe even most of it) will happen simply by pointing an agent at a source tree and typing "find me zero days".

Why are agents so good at this? A combination of baked-in knowledge, pattern-matching ability and brute force:

You can't design a better problem for an LLM agent than exploitation research. Before you feed it a single token of context, a frontier LLM already en…
