OpenAI Now Valued at $852B After New Funding Round
The round solidifies the ChatGPT maker's position as one of the world's most valuable private companies.
OpenAI CEO Sam Altman (Getty Images)
The generative AI vendor closed its latest funding round, raising $122 billion at a valuation of $852 billion.
The final $122 billion figure, OpenAI's largest fundraising round yet, is up from the $110 billion originally revealed in February.
However, OpenAI may not remain a private company for much longer, with the firm widely expected to launch an IPO later this year.
OpenAI confirmed the latest investment in a statement on the OpenAI website, in which the vendor claimed it was becoming “the core infrastructure for AI.”
The bulk of the funding comes from OpenAI’s strategic partners, Amazon ($50 billion), Nvidia ($30 billion) and SoftBank ($30 billion), with the Japanese investment holding company leading the round.
Other significant backers include Andreessen Horowitz, Abu Dhabi’s MGX, TPG, and D.E. Shaw Ventures, along with continued participation by long-term partner Microsoft, despite the recent loosening of ties between the two companies.
In addition, for the first time, OpenAI extended participation to investors through bank channels, raising more than $3 billion from individual investors. The vendor also said it would be included in exchange-traded funds managed by ARK Invest, further broadening ownership, and extended its revolving credit facility to $4.7 billion.
The vast funding package comes amid a landscape marked by widespread concern about OpenAI’s ability to generate sufficient revenue to justify its huge spending on building out AI infrastructure.
CFO Sarah Friar has been direct in describing the need to bring money in, particularly in the wake of CEO Sam Altman’s acknowledgment that the AI lab is “looking at [spending] commitments of about $1.4 trillion over the next 8 years.”
The continuing skepticism about OpenAI’s future perhaps explains the bullish tone of the statement released to confirm the closing of the latest funding round: “At this stage, we are growing revenue four times faster than the companies who defined the Internet and mobile eras, including Alphabet and Meta.”
To back up this claim, the vendor pointed out that OpenAI was the fastest tech platform to reach 10 million and then 100 million users and said it would soon be the quickest to reach 1 billion weekly active users.
OpenAI also said it is now generating $2.6 billion in revenue monthly.
On the consumer side, ChatGPT now has more than 900 million weekly active users and more than 50 million subscribers.
OpenAI, meanwhile, said its advertising pilot achieved more than $100 million in annual recurring revenue in six weeks.
The vendor said its enterprise business is also growing, now contributing more than 40% of its revenue and on track to reach parity with consumer revenue by the end of 2026.
OpenAI also provided a signpost to its future ambitions, reiterating its goal to create a unified AI "superapp" combining ChatGPT, Codex, browsing, and other agentic capabilities.
About the Author
Contributing Writer
Graham Hope has worked in automotive journalism in the U.K. for 26 years, including spells as editor of the leading consumer news website and weekly Auto Express and of the respected buying guide CarBuyer.