I asked ChatGPT if AI can be empathetic: The answer surprised me - PharmaLive

More about ChatGPT
Arcee AI releases Trinity-Large-Thinking, a 399B-parameter MoE AI model under an Apache 2.0 license, allowing full customization and commercial use (Carl Franzen / VentureBeat). The baton of open-source AI models has been passed between several companies in the years since ChatGPT debuted in late 2022.
More in Models

The Softmax Function Every Transformer Uses is the Boltzmann Distribution — Not Inspired by It, Not…
Your AI runs on the same equation as a steam engine, and nobody told you. Once you see it, you understand why LLMs hallucinate, why… (Towards AI)
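The teaser's claim is a standard identity: softmax over logits is the Boltzmann distribution with logits playing the role of negative energies and a temperature parameter controlling how sharply probability concentrates. A minimal sketch (the function name and example logits are illustrative, not from the article):

```python
import math

def softmax(logits, temperature=1.0):
    # Boltzmann form: p_i ∝ exp(E_i / T). In statistical mechanics the
    # weights are exp(-E_i / kT); softmax is the same expression with
    # logits standing in for negative energies (k absorbed into T).
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
# Lower temperature sharpens the distribution toward the argmax,
# exactly as cooling a physical system concentrates probability
# in low-energy states.
sharp = softmax([2.0, 1.0, 0.1], temperature=0.5)
```

At temperature 1.0 this is the ordinary softmax every transformer attention layer and output head computes; varying the temperature is what sampling-time "temperature" knobs in LLM APIs do.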

What If You Could Break Your API Design Before Writing a Single Line of Code?
I don’t write code. I’ve never written code. I direct AI coding agents — Claude Code, mostly — and they build what I describe. Over the last few months, I’ve been building a series of single-task AI agents, each one proving a different idea about how autonomous software should work. Agent 004 was a red-team simulator. It attacked my own infrastructure from the outside — over HTTP, with its own identity, posting real collateral before every action. It ran 15 predefined attacks, then learned to adapt its strategy across rounds, then started writing its own novel attack code and executing it in a sandboxed child process. By the time it was done, it had thrown more than a hundred adversarial scenarios at the system and, in the tested runs, surfaced no exploitable paths. The sandbox it used…

Why LLM Inference Slows Down with Longer Contexts
A systems-level view of how long contexts shift LLM inference from compute-bound to memory-bound. You send a prompt to an LLM, and at first everything feels fast. Short prompts return almost instantly, and even moderately long inputs do not cause any noticeable delay. The system appears stable, predictable, almost indifferent to the amount of text you provide. But this does not scale the way you might expect. As the prompt grows longer, latency does increase, and more importantly, the system itself starts behaving differently. What makes this interesting is that nothing external has changed. The model and hardware are the same, but the workload is not. As sequence length grows, the way computation is structured changes. The amount of data the model needs to access changes. And the balance…
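The main reason decoding becomes memory-bound is the KV cache, which grows linearly with context length and must be re-read at every generated token. A back-of-the-envelope sketch (the model shape below — 32 layers, 32 KV heads, head dimension 128, fp16 elements, roughly a Llama-2-7B-like configuration — is an assumption for illustration, not from the article):

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # Per layer, the cache holds keys AND values, each of shape
    # [seq_len, num_kv_heads, head_dim]; hence the factor of 2.
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 7B-class shape: 32 layers, 32 KV heads, head_dim 128, fp16.
cache_4k = kv_cache_bytes(32, 32, 128, 4096)   # ≈ 2 GiB at 4k tokens
cache_8k = kv_cache_bytes(32, 32, 128, 8192)   # doubles with context length
```

Every decode step must stream this entire cache from memory to compute attention over the full context, so once the cache is large, the step time is set by memory bandwidth rather than arithmetic throughput — which is the compute-bound-to-memory-bound shift the teaser describes.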


