AI News Hub · by Eigenvector

Offline Constrained RLHF with Multiple Preference Oracles

arXiv cs.LG · Brenden Latham, Mehrdad Moharrami · April 2, 2026


Abstract: We study offline constrained reinforcement learning from human feedback with multiple preference oracles. Motivated by applications that trade off performance with safety or fairness, we aim to maximize target population utility subject to a minimum protected group welfare constraint. From pairwise comparisons collected under a reference policy, we estimate oracle-specific rewards via maximum likelihood and analyze how statistical uncertainty propagates through the dual program. We cast the constrained objective as a KL-regularized Lagrangian whose primal optimizer is a Gibbs policy, reducing learning to a convex dual problem. We propose a dual-only algorithm that ensures high-probability constraint satisfaction and provide the first finite-sample performance guarantees for offline constrained preference learning. Finally, we extend our theoretical analysis to accommodate multiple constraints and general f-divergence regularization.
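The dual-only structure the abstract describes can be illustrated with a minimal sketch. Everything below is a toy single-state (bandit-style) construction for intuition, not the paper's algorithm: `gibbs_policy` is the closed-form primal optimizer of a KL-regularized Lagrangian, and `dual_ascent` runs projected gradient steps on the (convex) dual in the multiplier, whose gradient is simply the constraint slack. The names, the threshold `b`, and the two hand-specified reward vectors are all illustrative assumptions.

```python
import numpy as np

def gibbs_policy(pi_ref, r_target, r_protected, lam, beta):
    """Primal optimizer of the KL-regularized Lagrangian:
    pi(a) proportional to pi_ref(a) * exp((r_target(a) + lam * r_protected(a)) / beta)."""
    logits = np.log(pi_ref) + (r_target + lam * r_protected) / beta
    logits -= logits.max()                 # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def dual_ascent(pi_ref, r_target, r_protected, b, beta=1.0, eta=0.05, steps=2000):
    """Projected gradient descent on the convex dual in the multiplier lam.
    The dual gradient is the constraint slack E_pi[r_protected] - b, so lam
    grows while the protected-group welfare constraint is violated."""
    lam = 0.0
    for _ in range(steps):
        pi = gibbs_policy(pi_ref, r_target, r_protected, lam, beta)
        slack = pi @ r_protected - b       # welfare above / below the threshold
        lam = max(0.0, lam - eta * slack)  # projection onto lam >= 0
    return lam, gibbs_policy(pi_ref, r_target, r_protected, lam, beta)
```

On a three-action example where the target reward favors action 0 and only action 1 benefits the protected group, the multiplier rises from zero until the Gibbs policy puts enough mass on action 1 to meet the welfare floor; the paper's contribution is showing how MLE reward-estimation error from the pairwise comparisons propagates through exactly this dual program, with finite-sample guarantees.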

Subjects: Machine Learning (cs.LG)

Cite as: arXiv:2604.00200 [cs.LG]

(or arXiv:2604.00200v1 [cs.LG] for this version)

https://doi.org/10.48550/arXiv.2604.00200

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Brenden Latham [v1] Tue, 31 Mar 2026 20:06:34 UTC (5,215 KB)
