b8640
tests : add unit test coverage for llama_tensor_get_type (#20112)
- Add unit test coverage for llama_tensor_get_type
- Fix merge conflicts, add more schemas
- clang formatter changes
- Trailing whitespace
- Update name
- Start rebase
- Updating files with upstream changes prior to rebase
- Changes needed from rebase
- Update attn_qkv schema, change throw behaviour
- Fix merge conflicts
- White space
- Update with latest changes to state counters
- Revert accidental personal CLAUDE.md changes
- Change quotation mark
- Reuse metadata.name since we have it
- Move test-only stuff out of llama-quant.cpp
- Hide the regex functionality back in llama-quant.cpp, use a unique pointer to a new struct 'compiled_tensor_type_patterns' which contains the patterns
- cont : initial deslop guidelines
- Cleanup based on review comments
- Continue cleanup
- Small cleanup
- Manually set proper ordering of tensors, mostly applies to gemma
- Formatting
- Update tests/test-quant-type-selection.cpp
  Co-authored-by: Sigbjørn Skjæret [email protected]
- Fix merge conflicts

Co-authored-by: Georgi Gerganov [email protected]
Co-authored-by: Sigbjørn Skjæret [email protected]
macOS/iOS:
- macOS Apple Silicon (arm64)
- macOS Intel (x64)
- iOS XCFramework

Linux:
- Ubuntu x64 (CPU)
- Ubuntu arm64 (CPU)
- Ubuntu s390x (CPU)
- Ubuntu x64 (Vulkan)
- Ubuntu arm64 (Vulkan)
- Ubuntu x64 (ROCm 7.2)
- Ubuntu x64 (OpenVINO)

Windows:
- Windows x64 (CPU)
- Windows arm64 (CPU)
- Windows x64 (CUDA 12) - CUDA 12.4 DLLs
- Windows x64 (CUDA 13) - CUDA 13.1 DLLs
- Windows x64 (Vulkan)
- Windows x64 (SYCL)
- Windows x64 (HIP)

openEuler:
- openEuler x86 (310p)
- openEuler x86 (910b, ACL Graph)
- openEuler aarch64 (310p)
- openEuler aarch64 (910b, ACL Graph)