Tech Talk: iFlytek enters China's AI language model price war - gdnonline.com
<a href="https://news.google.com/rss/articles/CBMilgFBVV95cUxPeUlwb1RJQ2dhdEJscGdVVlh3WWd3OHF4WlE0TTBLMlAwOGxOeTgwMlg1MkZuQkJXMDl5Zm1QUW5xaWUtZzNHbDlpS0t5b1haeXNyZ2x2UUE3LVZUTFZKODI2eHlMZ21TMFdOeDFFTEs2TVZqX2NYQmVvcGNldEQtcVdBRVBtS29IdHdqblJva2dxaE1XOFE?oc=5" target="_blank">Tech Talk: iFlytek enters China's AI language model price war</a> <font color="#6f6f6f">gdnonline.com</font>

More about
model · language model · China
Couple convicted of stealing trade secrets for China loses US citizenship
A federal judge has revoked the US citizenship of a naturalised married couple from China after their 2021 convictions for stealing sensitive medical trade secrets and sharing them with China, the US Department of Justice (DoJ) announced on Tuesday. On March 30, federal Judge James E. Simmons Jr. of California’s Southern District ordered the denaturalisation of Li Chen and Yu Zhou, ruling their crimes showed a lack of the “good moral character” required for American citizenship. Chen and Zhou...

China’s science awards system is plagued by shadowy practices. Can reforms fix it?
China’s science and technology awards system has been accused of being riddled with loopholes and misconduct, including serious exaggeration of achievements, cultivation of personal connections and even bribery, according to critics within the academic community. These flaws, though repeatedly addressed by the authorities, are said to remain deeply entrenched, casting a shadow over China’s rapidly advancing innovation sector that is widely regarded as a key pillar in its rivalry with the...
Training mRNA Language Models Across 25 Species for $165
<table> <tr><td> <a href="https://www.reddit.com/r/LocalLLaMA/comments/1s912ur/training_mrna_language_models_across_25_species/"> <img src="https://external-preview.redd.it/nVu8HcAwsZ22nurNc_vAW-R_2drotHTdQavTOwGgmi4.png?width=640&crop=smart&auto=webp&s=00c766290f7df67dcd9c20ff789b8f25a14b886b" alt="Training mRNA Language Models Across 25 Species for $165" title="Training mRNA Language Models Across 25 Species for $165" /> </a> </td><td> <!-- SC_OFF --><div class="md"><p>We built an end-to-end protein AI pipeline covering structure prediction, sequence design, and codon optimization. After comparing multiple transformer architectures for codon-level language modeling, CodonRoBERTa-large-v2 emerged as the clear winner with a perplexity of 4.10 and a Spearman CAI correlation of 0...</p></div><!-- SC_ON --> </td></tr></table>
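As an aside on the perplexity figure quoted in the post above: perplexity for a language model is the exponential of the mean per-token cross-entropy. A minimal sketch in Python, assuming losses are measured in nats (natural log), with purely illustrative values:

```python
import math

def perplexity(per_token_nll):
    # Perplexity = exp(mean negative log-likelihood per token),
    # assuming losses are in nats (natural log), the common convention.
    return math.exp(sum(per_token_nll) / len(per_token_nll))

# Illustrative values only: a model whose mean per-token loss is
# ln(4.10) nats reports a perplexity of 4.10.
losses = [math.log(4.10)] * 5
print(round(perplexity(losses), 2))  # → 4.1
```

Lower perplexity means the model assigns higher average probability to the observed tokens; a perplexity of 4.10 corresponds roughly to being as uncertain, on average, as a uniform choice among about four codons.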
More in Models
You guys seen this? 1-bit model with an MMLU-R of 65.7, 8B params
<!-- SC_OFF --><div class="md"><p>This is nuts.</p> <p><a href="https://huggingface.co/prism-ml/Bonsai-8B-gguf">prism-ml/Bonsai-8B-gguf · Hugging Face</a></p> <p>has anyone tested this thing?</p> </div><!-- SC_ON -->   submitted by   <a href="https://www.reddit.com/user/OmarBessa"> /u/OmarBessa </a> <br/> <span><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s91jxl/you_guys_seen_this_1bit_model_with_an_mmlur_of/">[link]</a></span>   <span><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s91jxl/you_guys_seen_this_1bit_model_with_an_mmlur_of/">[comments]</a></span>
[P] Looking for people who have had training runs fail unexpectedly to beta test a stability monitor. Free, takes 5 minutes to add to your existing loop. DM me.
<!-- SC_OFF --><div class="md"><p>Anyone actively training models want to try a stability monitor on a real run? Trying to get real world validation outside my own benchmarks.</p> </div><!-- SC_ON -->   submitted by   <a href="https://www.reddit.com/user/Turbulent-Tap6723"> /u/Turbulent-Tap6723 </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/1s93kzm/p_looking_for_people_who_have_had_training_runs/">[link]</a></span>   <span><a href="https://www.reddit.com/r/MachineLearning/comments/1s93kzm/p_looking_for_people_who_have_had_training_runs/">[comments]</a></span>
FOR ME, Qwen3.5-27B is better than Gemini 3.1 Pro and GPT-5.3 Codex
<!-- SC_OFF --><div class="md"><p>There's something I hate about the big SOTA proprietary models. In order to make them better for people who don't know how to program, they're optimized to solve problems entirely autonomously. Yeah, this makes people over on <a href="/r/ChatGPT">/r/ChatGPT</a> soypog when it writes a 7z parser in Python because the binary is missing; however, for me, this makes them suck. If something isn't matching up, Qwen3.5-27B will just give up. If you're trying to vibecode some slop this is annoying, but for me this is much, much better. I'm forced to use GitHub Copilot in university, and whenever there's a problem, it goes completely off the rails and does some absolute hogwash. Like, for example, it was struggling to write to a file that had some broken permission...</p> </div><!-- SC_ON -->