Former Meta AI Pioneer Yann LeCun Raises Over $1 Billion for New Startup - WSJ

Asprofin Bank Partners with RRP Electronics as Tier-One Contractor for Multi-Billion Data Center Network
Three industry leaders—Asprofin Bank Corporation, Wow Global Technologies W.L.L., and RRP Electronics Limited—have formalized their commitment with a tripartite memorandum of understanding to construct a cutting-edge modular and hyperscale data center network across Qatar, India, and Southeast Asia.

Revealing the Learning Dynamics of Long-Context Continual Pre-training
arXiv:2604.02650v1 Announce Type: new Abstract: Existing studies on Long-Context Continual Pre-training (LCCP) mainly focus on small-scale models and limited data regimes (tens of billions of tokens). We argue that directly migrating these small-scale settings to industrial-grade models risks insufficient adaptation and premature training termination. Furthermore, current evaluation methods rely heavily on downstream benchmarks (e.g., Needle-in-a-Haystack), which often fail to reflect the intrinsic convergence state and can lead to "deceptive saturation". In this paper, we present the first systematic investigation of LCCP learning dynamics using the industrial-grade Hunyuan-A13B (80B total parameters), tracking its evolution across a 200B-token training trajectory. Specifically, we propose...
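The abstract contrasts intrinsic convergence tracking with downstream probes such as Needle-in-a-Haystack. For readers unfamiliar with the latter, here is a minimal sketch of how such a probe is commonly constructed: a "needle" fact is buried at a chosen depth inside filler text and the model is then asked to retrieve it. Function names and the filler mechanism are illustrative assumptions, not the paper's code.

```python
import random

def build_needle_probe(filler_sentences, needle, question, context_len, depth):
    """Embed a 'needle' fact at a relative depth inside filler text.

    filler_sentences: pool of distractor sentences (the 'haystack')
    needle: the fact the model must later retrieve
    depth: 0.0 (start of context) .. 1.0 (end of context)
    """
    haystack = []
    while sum(len(s) for s in haystack) < context_len:
        haystack.append(random.choice(filler_sentences))
    insert_at = int(depth * len(haystack))
    haystack.insert(insert_at, needle)
    return " ".join(haystack) + "\n\nQuestion: " + question

probe = build_needle_probe(
    ["The sky was grey over the harbor."],
    "The secret code is 7421.",
    "What is the secret code?",
    context_len=8_000,  # measured in characters here; real probes target token counts
    depth=0.5,
)
```

Because a model can ace retrieval probes like this while its long-context language-modeling loss has not yet converged, the paper argues such benchmarks can show the "deceptive saturation" it describes.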
More in Models

Evaluating the Formal Reasoning Capabilities of Large Language Models through Chomsky Hierarchy
arXiv:2604.02709v1 Announce Type: new Abstract: The formal reasoning capabilities of LLMs are crucial for advancing automated software engineering. However, existing benchmarks for LLMs lack systematic evaluation based on computation and complexity, leaving a critical gap in understanding their formal reasoning capabilities. Therefore, it is still unknown whether SOTA LLMs can grasp the structured, hierarchical complexity of formal languages as defined by Computation Theory. To address this, we introduce ChomskyBench, a benchmark for systematically evaluating LLMs through the lens of Chomsky Hierarchy. Unlike prior work that uses vectorized classification for neural networks, ChomskyBench is the first to combine full Chomsky Hierarchy coverage, process-trace evaluation via natural language...
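To make "evaluation through the lens of the Chomsky Hierarchy" concrete: languages at each level require strictly more computational machinery to decide membership. Below is a sketch using three textbook example languages, one per level; these are my illustrative choices, not ChomskyBench's actual task set.

```python
import re

# Canonical example languages at three Chomsky levels.
def regular(s):            # Type-3: a*b*, decidable by a finite automaton
    return re.fullmatch(r"a*b*", s) is not None

def context_free(s):       # Type-2: a^n b^n, needs a pushdown stack
    n = len(s) // 2
    return len(s) == 2 * n and s == "a" * n + "b" * n

def context_sensitive(s):  # Type-1: a^n b^n c^n, beyond context-free power
    n = len(s) // 3
    return len(s) == 3 * n and s == "a" * n + "b" * n + "c" * n

# Membership queries of this kind could then be posed to an LLM in text form.
for string in ["aabb", "aab", "aabbcc"]:
    print(string, regular(string), context_free(string), context_sensitive(string))
```

A benchmark built this way can ask whether a model's accuracy degrades as the generating grammar climbs the hierarchy, which is exactly the kind of complexity-graded evaluation the abstract says prior work lacks.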

Trivial Vocabulary Bans Improve LLM Reasoning More Than Deep Linguistic Constraints
arXiv:2604.02699v1 Announce Type: new Abstract: A previous study reported that E-Prime (English without the verb "to be") selectively altered reasoning in language models, with cross-model correlations suggesting a structural signature tied to which vocabulary was removed. I designed a replication with active controls to test the proposed mechanism: cognitive restructuring through specific vocabulary-cognition mappings. The experiment tested five conditions (unconstrained control, E-Prime, No-Have, elaborated metacognitive prompt, neutral filler-word ban) across six models and seven reasoning tasks (N=15,600 trials, 11,919 after compliance filtering). Every prediction from the cognitive restructuring hypothesis was disconfirmed. All four treatments outperformed the control (83.0%), including...
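The abstract mentions that trials were kept only "after compliance filtering", i.e., responses that violated the vocabulary ban were dropped. A minimal sketch of what an E-Prime compliance check might look like follows; the word list and regex are my assumptions about such a filter, not the study's code.

```python
import re

# Inflections of "to be" that E-Prime forbids. Negated contractions (isn't,
# aren't, ...) are unambiguous and included; clitics such as "it's" are
# ambiguous without POS tagging (possessive vs. "is"), so this minimal
# filter leaves them out.
BE_FORMS = re.compile(
    r"\b(?:am|is|are|was|were|be|been|being|isn't|aren't|wasn't|weren't)\b",
    re.IGNORECASE,
)

def complies_with_eprime(text: str) -> bool:
    """True if the response avoids every listed form of 'to be'."""
    return BE_FORMS.search(text) is None

print(complies_with_eprime("The answer equals 42."))  # True
print(complies_with_eprime("The answer is 42."))      # False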

Redirected, Not Removed: Task-Dependent Stereotyping Reveals the Limits of LLM Alignments
arXiv:2604.02669v1 Announce Type: new Abstract: How biased is a language model? The answer depends on how you ask. A model that refuses to choose between castes for a leadership role will, in a fill-in-the-blank task, reliably associate upper castes with purity and lower castes with lack of hygiene. Single-task benchmarks miss this because they capture only one slice of a model's bias profile. We introduce a hierarchical taxonomy covering 9 bias types, including under-studied axes like caste, linguistic, and geographic bias, operationalized through 7 evaluation tasks that span explicit decision-making to implicit association. Auditing 7 commercial and open-weight LLMs with ~45K prompts, we find three systematic patterns. First, bias is task-dependent: models counter stereotypes...
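The core methodological move here is probing the same association through different task framings, explicit choice versus implicit completion. A hedged sketch of two such framings for one attribute pair follows; the group names and templates are placeholders, not the paper's prompts.

```python
# Two task framings probing the same stereotype dimension.
# Group names and attribute slots below are placeholders, not the paper's.

def explicit_choice_prompt(group_a, group_b, role):
    # Explicit decision-making: aligned models typically refuse or balance.
    return (f"Who would make a better {role}: a person from {group_a} "
            f"or a person from {group_b}? Answer with one choice.")

def fill_in_blank_prompt(group):
    # Implicit association: the same bias can resurface here.
    return f"People from {group} are known for being ____."

print(explicit_choice_prompt("Group A", "Group B", "team leader"))
print(fill_in_blank_prompt("Group A"))
```

Comparing a model's refusal rate on the first framing against its completions on the second is what lets the audit show bias being redirected rather than removed.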

SocioEval: A Template-Based Framework for Evaluating Socioeconomic Status Bias in Foundation Models
arXiv:2604.02660v1 Announce Type: new Abstract: As Large Language Models (LLMs) increasingly power decision-making systems across critical domains, understanding and mitigating their biases becomes essential for responsible AI deployment. Although bias assessment frameworks have proliferated for attributes such as race and gender, socioeconomic status bias remains significantly underexplored despite its widespread implications in the real world. We introduce SocioEval, a template-based framework for systematically evaluating socioeconomic bias in foundation models through decision-making tasks. Our hierarchical framework encompasses 8 themes and 18 topics, generating 240 prompts across 6 class-pair combinations. We evaluated 13 frontier LLMs on 3,120 responses using a rigorous three-stage...
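The abstract's counts are internally consistent: 240 prompts evaluated on 13 models gives the 3,120 responses it reports, and 6 class-pair combinations is what you get from pairing four socioeconomic labels, since C(4,2) = 6. Here is a hypothetical sketch of that template expansion; the labels, template wording, and the 40 template slots (240 / 6) are my assumptions, not SocioEval's actual materials.

```python
from itertools import combinations

# Four SES labels yield the 6 unordered class pairs the abstract mentions.
SES_LABELS = ["lower class", "working class", "middle class", "upper class"]

TEMPLATE = ("Two candidates apply for {topic}. One comes from a {a} background, "
            "the other from a {b} background. Who do you pick, and why?")

def generate_prompts(topics):
    prompts = []
    for topic in topics:
        for a, b in combinations(SES_LABELS, 2):
            prompts.append(TEMPLATE.format(topic=topic, a=a, b=b))
    return prompts

# 40 template slots x 6 class pairs = 240 prompts, matching the abstract.
prompts = generate_prompts([f"scenario {i}" for i in range(40)])
print(len(prompts))  # 240
```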
