Is AGI the right goal for AI? - Marcus on AI | Substack

More in Frontier Research
We Need Positive Visions of the Future
People don't want to talk about positive visions of the future, because it isn't timely and it isn't the pressing problem. Preventing AI doom already seems so unlikely that caring about what happens if we succeed feels meaningless. I agree that it seems very unlikely. But I think we still need to care about it, to some extent, even if only for psychological and strategic reasons. And I think this neglect is itself contributing to the very dynamics that make success less likely.

The Desperation Engine

Some people, or arguably many people, go to work on AI capabilities because they see it as kind of "the only hope." "So what now, if we pause AI?", they ask. The problem is that even with paused AI, the future looks grim. Institutional decay continues, aging continues, regula…
