China’s science awards system is plagued by shadowy practices. Can reforms fix it?
China’s science and technology awards system has been accused of being riddled with loopholes and misconduct, including serious exaggeration of achievements, cultivation of personal connections and even bribery, according to critics within the academic community.
These flaws, though repeatedly addressed by the authorities, are said to remain deeply entrenched, casting a shadow over China’s rapidly advancing innovation sector that is widely regarded as a key pillar in its rivalry with the West.
Last year, the China Association for Science and Technology announced that it was rescinding the honours of five recipients of awards including the China Youth Science and Technology Award, revoking their medals and certificates and demanding the return of prize money, citing disciplinary and legal violations or research misconduct.
Among them was Liu Jianni, a professor of palaeontology at Northwest University and recipient of the China Young Women in Science Award in 2014.
But a decade later, she was publicly named for engaging in improper solicitation and other unfair practices during the review of national grant projects.
According to well-placed sources within Chinese academic circles, award misconduct is not uncommon.
A professor of agriculture at a public university in southwestern China, who requested anonymity due to the sensitivity of the issue, described the awards system as “one of the most corrupt links” in the country’s scientific ecosystem.
SCMP Tech (Asia AI)
https://www.scmp.com/news/china/science/article/3345080/chinas-science-awards-system-plagued-shadowy-practices-can-reforms-fix-it?utm_source=rss_feed
