b8600
fix: correct misspellings in code comments (#21217: https://github.com/ggml-org/llama.cpp/pull/21217)

- emdeddings → embeddings (gemma3.cpp, gemma3n-iswa.cpp, gemma-embedding.cpp)
- imlpemented → implemented (llama-adapter.cpp)
- interere → interfere (llama-graph.cpp)
- overridde → overridden (chat.cpp)
- stastistics → statistics (ngram-map.h)
- layed → laid (llama-kv-cache.h)
- worster → worst (llama-context.cpp)

The Axios supply chain attack used individually targeted social engineering
The Axios team has published a full postmortem on the supply chain attack that resulted in a malware dependency shipping in a release the other day, and it involved a sophisticated social engineering campaign targeting one of their maintainers directly. Here's Jason Saayman's description of how that worked: "So the attack vector mimics what Google has documented here: https://cloud.google.com/blog/topics/threat-intelligence/unc1069-targets-cryptocurrency-ai-social-engineering They tailored this process specifically to me by doing the following: they reached out masquerading as the founder of a company; they had cloned the company founder's likeness as well as the company itself. They then invited me to a real Slack workspace. This workspace was branded to the company's CI and named in a

Take-Two laid off the head of its AI division and an undisclosed number of staff
Take-Two, the owner of Grand Theft Auto developer Rockstar Games, has seemingly laid off the head of its AI division, Luke Dicken, along with several staff members working under him. "It’s truly disappointing that I have to share with you that my time with T2 — and that of my team — has come to an end," Dicken shared in a LinkedIn post spotted by Game Developer. When asked to confirm the layoffs in its AI division, Take-Two declined to comment. Dicken writes that his team was "developing cutting edge technology to support game development," and his post specifically notes that he's trying to find roles for staff with experience in areas like "procedural content for games" and "machine learning." It's unclear how many people other than Dicken have been impacted by these layoffs, but the timing

Can an AI have its own internal Ethics? Standard Protocol for Axiomatic Alignment
Hello community, I am introducing a standardized experimental protocol to test a new hypothesis in AI alignment: the Prompt Coherence Engine (PCE). Proof of concept: my iterative stress tests on Qwen 2.5 7B have already demonstrated a measurable progression in adversarial robustness (D3 series), increasing from a score of 5/10, to 8/10, to 10/10 through axiomatic closure (PCE_Iterative_Adjustment_Study.pdf · AllanF-SSU/Experimentals_papers at main). The challenge: most alignment methods rely on local heuristics or safety filters. The PCE explores axiomatic structuring — integrating 7 logical invariants (axioms) through a hybrid approach of axiomatic fine-tuning and a Cosmological System Core. The protocol: I have designed a massive 100-dilemma battery to evaluate whether a model can maintain structural
More in Open Source AI
v4.3.2
Changes

- Gemma 4 support with full tool-calling in the API and UI.
- 🆕 ik_llama.cpp support: Add ik_llama.cpp as a new backend through new textgen-portable-ik portable builds and a new --ik flag for full installs. ik_llama.cpp is a fork by the author of the imatrix quants, including support for new quant types, significantly more accurate KV cache quantization (via Hadamard KV cache rotation, enabled by default), and optimizations for MoE models and CPU inference.
- API: Add echo + logprobs for /v1/completions. The completions endpoint now supports the echo and logprobs parameters, returning token-level log probabilities for both prompt and generated tokens. Token IDs are also included in the output via a new top_logprobs_ids field.
- Further optimize my custom gradio fork, saving up to 50 ms
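The echo + logprobs addition can be exercised with any OpenAI-style completions request. A minimal sketch of the request payload, assuming a local OpenAI-compatible server exposing /v1/completions (the `build_completion_request` helper is hypothetical; only `echo`, `logprobs`, and the `top_logprobs_ids` response field come from the changelog):

```python
import json

def build_completion_request(prompt, max_tokens=16, logprobs=5, echo=True):
    """Hypothetical helper: build an OpenAI-style /v1/completions payload.

    `echo` asks the server to include the prompt's own tokens (and their
    logprobs) in the response; `logprobs` requests the top-N alternatives
    per token, as described in the changelog.
    """
    return {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "echo": echo,
        "logprobs": logprobs,
    }

payload = build_completion_request("The capital of France is")
body = json.dumps(payload)  # POST this body to the server's /v1/completions
```

With `echo=True`, the returned `logprobs` object covers prompt tokens as well as generated ones, and per the changelog the matching token IDs appear under `top_logprobs_ids`.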
