Hong Kong-listed CaoCao hails fleet-first strategy as China’s robotaxi race gathers pace
Chinese ride-hailing company CaoCao, backed by Geely, is betting on a heavy-asset strategy to emerge as a leading robotaxi operator, with plans to deploy 100,000 autonomous vehicles by 2030 as competition intensifies and self-driving technology matures.
In an interview with the South China Morning Post, CEO Gong Xin said the future of robotaxis hinged on an asset-management model built around a closed-loop “trinity” of vehicle manufacturing, autonomous driving technology and fleet operations.
The Hong Kong-listed firm is refining its level 4 autonomous system, with an initial fleet of 100 robotaxis launched in Hangzhou in late 2025. While most vehicles in China still require a human safety monitor, CaoCao is targeting fully driverless operations this year.
“Many local governments in China are highly supportive of L4-related applications, so I believe the technology is approaching a critical inflection point,” Gong said. On April 1, the company received approval to conduct unmanned road tests in Hangzhou, becoming the first company granted such approval in the city.
The push comes amid intensifying competition with domestic rivals such as Pony.ai and WeRide, as the sector edges closer to large-scale commercialisation.
Central to CaoCao’s strategy is a “fully purpose-built robotaxi” developed over the past two years, designed from the ground up for autonomous driving with tightly integrated software.
The vehicles are expected to debut this year and enter mass production in the first half of 2027.
SCMP Tech (Asia AI)
https://www.scmp.com/tech/article/3348985/hong-kong-listed-caocao-hails-fleet-first-strategy-chinas-robotaxi-race-gathers-pace?utm_source=rss_feed
