What’s the right path for AI?
Conference speakers discussed the unfolding trajectory of AI and the benefits of shaping technology to meet people’s needs.
Who benefits from artificial intelligence? This basic question, which has been especially salient during the AI surge of the last few years, was front and center at a conference at MIT on Wednesday, as speakers and audience members grappled with the many dimensions of AI’s impact.
In one of the conference’s keynote talks, journalist Karen Hao ’15 called for an altered trajectory of AI development, including a move away from the massive scale-up of data use, data centers, and models being used to develop tools under the rubric of “artificial general intelligence.”
“This scale is unnecessary,” said Hao, who has become a prominent voice in AI discussions. “You do not need this scale of AI and compute to realize the benefits.” Indeed, she added, “If we really want AI to be broadly beneficial, we urgently need to shift away from this approach.”
Hao is a former staff member at The Wall Street Journal and MIT Technology Review, and author of the 2025 book, “Empire of AI.” She has reported extensively on the growth of the AI industry.
In her remarks, Hao outlined the astonishing size of the datasets now used by the biggest AI firms to develop large language models. She emphasized some of the tradeoffs of this scale-up, such as the massive energy consumption and emissions of hyper-scale data centers, which also consume large amounts of water. Drawing on her own reporting, she also noted the human toll of the data work behind the hyper-scale models, which gig-economy workers around the world perform manually.
By contrast, Hao offered, an alternate path for AI might exist in the example of AlphaFold, the Nobel Prize-winning tool used to identify protein structures. This represents the concept of the “small, task-specific AI model tackling a well-scoped problem that lends itself to the computational strengths of AI,” Hao said.
She added: “It’s trained on highly curated data sets that only have to do with the problem at hand: protein folding and amino acid sequences. … There’s no need for fast supercomputing because the datasets are small, the model is small, and it’s still unlocking enormous benefit.”
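To give a rough sense of what a small, task-specific model can look like in practice, here is a minimal sketch in Python. This is not AlphaFold, and the tiny two-feature “helix”/“sheet” dataset below is invented purely for illustration; the point is only that a well-scoped problem with curated data can be handled by a model small enough to fit on a page, with no supercomputing involved.

```python
# Illustrative sketch: a nearest-centroid classifier trained on a tiny,
# curated dataset. All data here is made up for demonstration.

def centroid(rows):
    """Mean feature vector of a list of equal-length vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(examples):
    """examples: {label: list of feature vectors} -> {label: centroid}"""
    return {label: centroid(rows) for label, rows in examples.items()}

def predict(model, x):
    """Assign x to the label whose centroid is nearest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda label: dist(model[label]))

# Toy curated dataset: two well-separated classes, four examples each.
data = {
    "helix": [[0.9, 0.1], [1.0, 0.2], [0.8, 0.0], [1.1, 0.1]],
    "sheet": [[0.1, 0.9], [0.0, 1.0], [0.2, 0.8], [0.1, 1.1]],
}
model = train(data)
print(predict(model, [0.95, 0.05]))  # → helix
```

The entire “model” is two centroids; the curation of the dataset, not the scale of the compute, does the heavy lifting.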
In a second keynote address, scholar Paola Ricaurte underscored the desirability of purpose-driven AI approaches, outlining a number of conceptual keys to evaluating the usefulness of AI.
“There is no sense in having technologies that are not going to respond to the communities that are going to use them,” said Ricaurte.
She is a professor at Tecnológico de Monterrey in Mexico and a faculty associate at Harvard University’s Berkman Klein Center for Internet & Society. Ricaurte has also served on expert committees such as the Global Partnership on AI, UNESCO’s AI Ethics Experts Without Borders, and the Women for Ethical AI project.
The event was hosted by the MIT Program in Women’s and Gender Studies. Manduhai Buyandelger, the program’s director and a professor of anthropology, provided introductory remarks.
Titled “Gender, Empire, and AI: Symposium and Design Workshop,” the event was held in the conference space at the MIT Schwarzman College of Computing, with over 300 people in attendance for the keynote talks. There was also a segment of the event devoted to discussion groups, and an afternoon session on design, in a half-dozen different subject areas.
In her talk, Hao decried the often-vague nature of AI discourse, suggesting it impedes a more thoughtful discussion about the industry’s direction.
“Part of the challenge in talking about AI is the complete lack of specificity in the term ‘artificial intelligence,’” Hao said. “It’s like the word ‘transportation.’ You could be referring to anything from a bicycle to a rocket.” As a result, she said, “when we talk about accessing its benefits, we actually have to be very specific. Which AI technologies are we talking about, and which ones do we want more of?”
In her view, the smaller tools, the bicycles of her analogy, are more useful on an everyday basis. As another example, Hao mentioned the project Climate Change AI, focused on tools that can help improve the energy efficiency of buildings, track emissions, optimize supply chains, forecast extreme weather, and more.
“This is the vision of AI that we should be building towards,” Hao said.
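One way to picture a “bicycle-scale” climate tool of the sort Hao described is a simple least-squares fit relating outdoor temperature to a building’s daily energy use, so that days sitting far above the fitted line flag likely waste. The meter readings below are invented for illustration, and this sketch is not drawn from any specific Climate Change AI project.

```python
# Hypothetical sketch: fit y = a + b*x by ordinary least squares to relate
# daily outdoor temperature (x) to metered building energy use (y).
# The readings are invented for demonstration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

temps_c = [2, 5, 9, 12, 16, 20]     # daily mean outdoor temperature (°C)
kwh = [95, 82, 70, 58, 45, 30]      # metered daily energy use (kWh)

a, b = fit_line(temps_c, kwh)
expected = a + b * 10               # predicted use on a 10 °C day
print(round(expected, 1))
```

A metered day using far more than `expected` kWh at its temperature would be flagged for inspection; the whole “model” is two numbers, yet it captures the relationship the tool needs.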
In conclusion, Hao encouraged audience members to be active participants in AI-related discourse and projects, saying the trajectory of the technology is not yet fixed and that public interventions matter.
Citing the writer Rebecca Solnit, Hao suggested to the audience that “Hope locates itself in the premise that we don’t know what will happen, and that in the spaciousness of uncertainty is room to act.” She also noted, “Each and every one of you has an active role to play in shaping technology development.”
Ricaurte, similarly, encouraged attendees to be proactive participants in AI matters, noting that technologies will work best when the pressing everyday needs of all citizens are addressed.
“We have the responsibility to make hope possible,” Ricaurte said.