Alibaba's Qwen launches new flagship LLM with Qwen 3.6-Plus - Constellation Research

Do Phone-Use Agents Respect Your Privacy?
We study whether phone-use agents respect privacy while completing benign mobile tasks. This question has remained hard to answer because privacy-compliant behavior is not operationalized for phone-use agents, and ordinary apps do not reveal exactly what data agents type into which form entries during execution. To make this question measurable, we introduce MyPhoneBench, a verifiable evaluation framework for privacy behavior in mobile agents. We operationalize privacy-respecting phone use as pe... (3 upvotes on HuggingFace)
Friends and Grandmothers in Silico: Localizing Entity Cells in Language Models
Entity-centric factual question answering involves localized MLP neurons that can be causally intervened to recover entity-consistent predictions, showing robustness to various linguistic variations but with limited universality across all entities. (0 upvotes on HuggingFace)
Apriel-Reasoner: RL Post-Training for General-Purpose and Efficient Reasoning
Apriel-Reasoner is a 15B-parameter language model trained with reproducible multi-domain reinforcement learning to improve reasoning efficiency and accuracy across diverse tasks while reducing inference costs. (1 upvote on HuggingFace)
More in Models

Conversational Successes and Breakdowns in Everyday Smart Glasses Use
arXiv:2602.22340v2 Announce Type: replace Abstract: Non-Display Smart Glasses hold the potential to support everyday activities by combining continuous environmental sensing with voice-only interaction powered by large language models (LLMs). Understanding how conversational successes and breakdowns arise in everyday contexts can better inform the design of future voice-only interfaces. To investigate this, we conducted a month-long collaborative autoethnography (n=2) to identify patterns of successes and breakdowns when using such devices. We then compare these patterns with prior findings on voice-only interactions to highlight the unique affordances and opportunities offered by non-display smart glasses.

J-CHAT: Japanese Large-scale Spoken Dialogue Corpus for Spoken Dialogue Language Modeling
arXiv:2407.15828v2 Announce Type: replace-cross Abstract: Spoken dialogue is essential for human-AI interactions, providing expressive capabilities beyond text. Developing effective spoken dialogue systems (SDSs) requires large-scale, high-quality, and diverse spoken dialogue corpora. However, existing datasets are often limited in size, spontaneity, or linguistic coherence. To address these limitations, we introduce J-CHAT, a 76,000-hour open-source Japanese spoken dialogue corpus. Constructed using an automated, language-independent methodology, J-CHAT ensures acoustic cleanliness, diversity, and natural spontaneity. The corpus is built from YouTube and podcast data, with extensive filtering and denoising to enhance quality. Experimental results with generative spoken dialogue language m
