Shares of China AI 'tiger' Zhipu surge 35% after revenue doubles in first earnings report - CNBC

Quant Factor Research in Practice: IC, IR, and the Barra Multi-Factor Model
Why Does Your Backtest Look Great But Lose Money Live? Classic quant beginner story: find an "interesting" indicator → backtest → great results → trade live → lose money. Why? Because "looks effective" and "statistically significant" are two very different things. Quantitative factor research has a rigorous evaluation framework: IC, IR, and Barra risk neutralization. Skip this and your backtest is just mining noise.

Part 1: IC — The Measuring Stick for Factor Effectiveness

IC (Information Coefficient) = correlation between current-period factor exposure and next-period stock returns. In practice, use RankIC (Spearman) rather than Pearson IC — it's more robust to outliers.

```python
import scipy.stats as stats

def calc_rank_ic(factor_series, return_series):
    # RankIC: rank both series, then take the Pearson correlation of
    # the ranks, which equals the Spearman (rank) correlation.
    # Equivalently: stats.spearmanr(factor_series, return_series)[0]
    rank_factor = factor_series.rank()
    rank_return = return_series.rank()
    return rank_factor.corr(rank_return)
```
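The teaser stops at IC, but the framework it names also includes IR (Information Ratio), conventionally defined as mean(IC) / std(IC) over the evaluation window — it measures how *consistently* a factor predicts, not just how strongly on average. A minimal stdlib sketch; the monthly frequency and the sqrt(12) annualization are illustrative assumptions, not from the article:

```python
import math
import statistics

def calc_ir(ic_series, periods_per_year=12):
    # IR = mean(IC) / std(IC): consistency of prediction, not just strength
    mean_ic = statistics.mean(ic_series)
    std_ic = statistics.stdev(ic_series)
    ir = mean_ic / std_ic
    # Annualize by sqrt of observation frequency (assumes monthly ICs)
    return ir * math.sqrt(periods_per_year)

monthly_ics = [0.05, 0.03, -0.01, 0.06, 0.02, 0.04]
print(round(calc_ir(monthly_ics), 3))
```

A common rule of thumb is that a higher IR is preferable to a higher raw IC, since an erratic factor is hard to trade even when its average IC looks good.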
When You Push for 3x
I was at a lunch table with my boss and most of my team, telling the story of how we'd doubled our velocity. I was proud of the number. I told it like it was a win. My team was sitting right there, listening to me describe what they'd done to make that number. They knew. I didn't yet.

That's the part I don't tell in the short version. Not just that velocity got gamed, but that I was the one carrying the number into rooms and setting it on the table like a trophy. Every time I did that, I taught my team something about what I was rewarding.

Death by a thousand paper cuts

Velocity doesn't collapse in one visible moment. There's no incident report. No postmortem. It erodes the way a codebase quietly deteriorates when nobody's watching the right signals. Tickets start getting larger. Not more
More in Models
Financial News Sentiment Analyzer
Bloomberg Terminal costs $24,240/year. I built something that does the sentiment analysis part for $0.02 per article. It's not Bloomberg. It doesn't do terminal access, charting, or messaging. But if all you need is "what's the market saying about NVDA right now?", it gets you 90% of the way for 0.001% of the cost.

Why I Built This

I was building a trading signal pipeline and needed financial sentiment data. My options were:

- Pay Bloomberg $24K/year (lol no)
- Use Finnhub's sentiment endpoint ($500/year, but the sentiment is basic — just positive/negative with no confidence score)
- Scrape it myself

I went with option 3. Spent a few weeks scraping Yahoo Finance, Google News, and SEC EDGAR, wiring up VADER for quick sentiment, then an LLM for the headlines VADER gets wrong. Then I realized other
How I Made Claude Actually Reliable at Math (5-Minute Setup)
I spent a week watching Claude confidently give me wrong answers. Not wrong opinions — wrong numbers. TDEE calculations off by 200 calories. Mortgage amortization that didn't add up. Compound interest that was close-ish but not quite right.

The thing is, Claude sounds confident when it hallucinates math. It walks you through the reasoning, uses the right formula names, and arrives at a number that feels plausible. The problem only shows up when you check the work. This is a known issue with LLMs. They don't actually "do math" — they pattern-match from training data. Arithmetic is surprisingly unreliable, especially for multi-step calculations. Here's how I fixed it.

The Problem: LLMs Are Not Calculators

When you ask Claude to calculate your TDEE (Total Daily Energy Expenditure), it might
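The teaser cuts off before the fix, but the standard remedy for unreliable LLM arithmetic is to push the numbers into deterministic code and let the model only set up the inputs. A sketch using the Mifflin-St Jeor BMR formula as the TDEE basis — a common choice, though the article doesn't say which formula it uses:

```python
def tdee_mifflin_st_jeor(weight_kg, height_cm, age, male, activity_factor):
    # BMR (Mifflin-St Jeor): 10*kg + 6.25*cm - 5*age, +5 male / -161 female
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if male else -161)
    # TDEE = BMR scaled by activity level (1.2 sedentary .. 1.9 very active)
    return bmr * activity_factor

# 80 kg, 180 cm, 30-year-old male, moderately active (1.55)
print(tdee_mifflin_st_jeor(80, 180, 30, True, 1.55))  # → 2759.0
```

Run this way, the number is exactly reproducible every time — the failure mode the teaser describes (plausible-sounding but slightly-off arithmetic) can't occur, because the model never performs the multiplication itself.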
