ML Safety Newsletter #1
ICLR Safety Paper Roundup
Welcome to the 1st issue of the ML Safety Newsletter. In this edition, we cover:
- various safety papers submitted to ICLR
- results showing that discrete representations can improve robustness
- a benchmark which shows larger models are more likely to repeat misinformation
- a benchmark for detecting when models are gaming proxies
- ... and much more.
Overview of the proposed Vision Transformer that uses discrete representations. The pixel embeddings (orange) are combined with discrete embedded tokens (pink) to create the input to the Vision Transformer.
There is much interest in the robustness of Vision Transformers, as they intrinsically scale better than ResNets in the face of unforeseen inputs and distribution shifts. This paper further enhances the robustness of Vision Transformers by augmenting the input with discrete tokens produced by a vector-quantized encoder. Why this works so well is unclear, but on datasets unlike the training distribution, their model achieves marked improvements. For example, when their model is trained on ImageNet and tested on ImageNet-Rendition (a dataset of cartoons, origami, paintings, toys, etc.), the model accuracy increases from 33.0% to 44.8%.
Paper
Improving test-time adaptation to distribution shift using data augmentation.
Certifying robustness to adversarial patches.
Augmenting data by mixing discrete cosine transform image encodings.
Teaching models to reject adversarial examples when they are unsure of the correct class.
Models trained to predict the next token are incentivized to repeat common misconceptions.
A new benchmark shows that GPT-3 imitates human misconceptions. In fact, larger models more frequently repeat misconceptions, so simply training more capable models may make the problem worse. For example, GPT-J with 6 billion parameters is 17% worse on this benchmark than a model with 0.125 billion parameters. This demonstrates that simple objectives can inadvertently incentivize models to be misaligned and repeat misinformation. To make models outputs truthful, we will need to find ways to counteract this new failure mode.
Paper
An expanded report towards building truthful and honest models.
Using an ensemble of one-class classifiers to create an out-of-distribution detector.
Provable performance guarantees for out-of-distribution detection.
Synthesizing outliers is becoming increasingly useful for detecting real anomalies.
As networks become larger, they can more aggressively optimize proxies and reduce performance of the true objective.
Real-world constraints often require implementing rough proxies instead of our true objectives. However, as models become more capable, they can exploit faults in the proxy and undermine performance, a failure mode called proxy gaming. This paper finds that proxy gaming occurs in multiple environments including a traffic control environment, COVID response simulator, Atari Riverraid, and a simulated controller for blood glucose levels. To mitigate proxy gaming, they use anomaly detection to detect models engaging in proxy gaming.
Paper
A paper studying how models may be incentivized to influence users.
Safe exploration in 3D environments.
A thorough analysis of security vulnerabilities generated by Github Copilot.
An ML system for improved decision making.
The NSF has a new call for proposals. Among other topics, they intend to fund Trustworthy AI (which overlaps with many ML Safety topics), AI for Decision Making, and Intelligent Agents for Next-Generation Cybersecurity (the latter two are relevant for External Safety).
No posts
Sign in to highlight and annotate this article

Conversation starters
Daily AI Digest
Get the top 5 AI stories delivered to your inbox every morning.
More about
safetypaper
LLMs as Idiomatic Decompilers: Recovering High-Level Code from x86-64 Assembly for Dart
arXiv:2604.02278v1 Announce Type: new Abstract: Translating machine code into human-readable high-level languages is an open research problem in reverse engineering. Despite recent advancements in LLM-based decompilation to C, modern languages like Dart and Swift are unexplored. In this paper, we study the use of small specialized LLMs as an idiomatic decompiler for such languages. Additionally, we investigate the augmentation of training data using synthetic same-language examples, and compare it against adding human-written examples using related-language (Swift -> Dart). We apply CODEBLEU to evaluate the decompiled code readability and compile@k to measure the syntax correctness. Our experimental results show that on a 73-function Dart test dataset (representing diverse complexity level

Source Known Identifiers: A Three-Tier Identity System for Distributed Applications
arXiv:2604.00151v1 Announce Type: cross Abstract: Distributed applications need identifiers that satisfy storage efficiency, chronological sortability, origin metadata embedding, zero-lookup verifiability, confidentiality for external consumers, and multi-century addressability. Based on our literature survey, no existing scheme provides all six of these identifier properties within a unified system. This paper introduces Source Known Identifiers (SKIDs), a three-tier identity system that projects a single entity identity across trust boundaries, addressing all six properties. The first tier, Source Known ID (SKID), is a 64-bit signed integer embedding a timestamp with a 250-millisecond precision, application topology, and a per-entity-type sequence counter. It serves as the database prima

Semantic Evolution over Populations for LLM-Guided Automated Program Repair
arXiv:2604.02134v1 Announce Type: new Abstract: Large language models (LLMs) have recently shown strong potential for automated program repair (APR), particularly through iterative refinement that generates and improves candidate patches. However, state-of-the-art iterative refinement LLM-based APR approaches cannot fully address challenges, including maintaining useful diversity among repair hypotheses, identifying semantically related repair families, composing complementary partial fixes, exploiting structured failure information, and escaping structurally flawed search regions. In this paper, we propose a Population-Based Semantic Evolution framework for APR iterative refinement, called EvolRepair, that formulates LLM-based APR as a semantic evolutionary algorithm. EvolRepair reformula
Knowledge Map
Connected Articles — Knowledge Graph
This article is connected to other articles through shared AI topics and tags.
More in Research Papers

LLMs as Idiomatic Decompilers: Recovering High-Level Code from x86-64 Assembly for Dart
arXiv:2604.02278v1 Announce Type: new Abstract: Translating machine code into human-readable high-level languages is an open research problem in reverse engineering. Despite recent advancements in LLM-based decompilation to C, modern languages like Dart and Swift are unexplored. In this paper, we study the use of small specialized LLMs as an idiomatic decompiler for such languages. Additionally, we investigate the augmentation of training data using synthetic same-language examples, and compare it against adding human-written examples using related-language (Swift -> Dart). We apply CODEBLEU to evaluate the decompiled code readability and compile@k to measure the syntax correctness. Our experimental results show that on a 73-function Dart test dataset (representing diverse complexity level

Fuzzing REST APIs in Industry: Necessary Features and Open Problems
arXiv:2604.01759v1 Announce Type: new Abstract: REST APIs are widely used in industry, in all different kinds of domains. An example is Volkswagen AG, a German automobile manufacturer. Established testing approaches for REST APIs are time consuming, and require expertise from professional test engineers. Due to its cost and importance, in the scientific literature several approaches have been proposed to automatically test REST APIs. The open-source, search-based fuzzer EvoMaster is one of such tools proposed in the academic literature. However, how academic prototypes can be integrated in industry and have real impact to software engineering practice requires more investigation. In this paper, we report on our experience in using EvoMaster at Volkswagen AG, as an EvoMaster user from 2023

Triosecuris: Formally Verified Protection Against Speculative Control-Flow Hijacking
arXiv:2601.22978v2 Announce Type: replace-cross Abstract: This paper introduces Triosecuris, a formally verified defense against Spectre BTB, RSB, and PHT that combines CET-style hardware-assisted control-flow integrity with compiler-inserted speculative load hardening (SLH). Triosecuris is based on the novel observation that in the presence of CET-style protection, we can precisely detect BTB misspeculation for indirect calls and RSB misspeculation for returns and set the SLH misspeculation flag. We formalize Triosecuris as a transformation in Rocq and provide a machine-checked proof that it achieves relative security: any transformed program running with speculation leaks no more than what the source program leaks without speculation. This strong security guarantee applies to arbitrary p

Diffusion-Guided Adversarial Perturbation Injection for Generalizable Defense Against Facial Manipulations
arXiv:2604.01635v1 Announce Type: new Abstract: Recent advances in GAN and diffusion models have significantly improved the realism and controllability of facial deepfake manipulation, raising serious concerns regarding privacy, security, and identity misuse. Proactive defenses attempt to counter this threat by injecting adversarial perturbations into images before manipulation takes place. However, existing approaches remain limited in effectiveness due to suboptimal perturbation injection strategies and are typically designed under white-box assumptions, targeting only simple GAN-based attribute editing. These constraints hinder their applicability in practical real-world scenarios. In this paper, we propose AEGIS, the first diffusion-guided paradigm in which the AdvErsarial facial image


Discussion
Sign in to join the discussion
No comments yet — be the first to share your thoughts!