Global Optimality for Constrained Exploration via Penalty Regularization

2026-04-30 · Machine Learning
AI summary

The authors address a challenging problem in reinforcement learning: agents must explore efficiently while respecting constraints such as safety, resource, or imitation requirements. They propose a method called Policy Gradient Penalty (PGP) that handles these constraints through quadratic penalties while preserving the structure needed for policy-gradient analysis. Unlike prior work, which only guarantees regret-style or ergodic-average performance, their method guarantees that the final policy is both near-optimal and nearly feasible. They prove this rigorously and demonstrate the approach on a grid-world benchmark and on more complex continuous-control tasks.

reinforcement learning, entropy maximization, policy gradient, occupancy measure, constrained optimization, penalty methods, Bellman equation, non-convex optimization, global convergence, continuous control
Authors
Florian Wolf, Ilyas Fatkhullin, Niao He
Abstract
Efficient exploration is a central problem in reinforcement learning and is often formalized as maximizing the entropy of the state-action occupancy measure. While unconstrained maximum-entropy exploration is relatively well understood, real-world exploration is often constrained by safety, resource, or imitation requirements. This constrained setting is particularly challenging because entropy maximization lacks additive structure, rendering Bellman-equation-based methods inapplicable. Moreover, scalable approaches require policy parameterization, inducing non-convexity in both the objective and the constraints. To our knowledge, the only prior model-free policy-gradient approach for this setting under general policy parameterization is due to Ying et al. (2025). Unfortunately, their guarantees are limited to weak regret and ergodic averages, which do not imply that the final output is a single deployable policy that is near-optimal and nearly feasible. In this work, we take a different approach to this problem and propose the Policy Gradient Penalty (PGP) method, a single-loop policy-space method that enforces general convex occupancy-measure constraints via quadratic-penalty regularization. PGP constructs pseudo-rewards that yield gradient estimates of the penalized objective, subsequently exploiting the classical Policy Gradient Theorem. We further establish the regularity of the penalized objective, providing the smoothness properties needed to justify the convergence of PGP. Leveraging hidden convexity and strong duality, we then establish global last-iterate convergence guarantees, attaining an $ε$-optimal constrained entropy value with $ε$-bounded constraint violation despite policy-induced non-convexity. We validate PGP through ablations on a grid-world benchmark and further demonstrate scalability on two challenging continuous-control tasks.
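
As a rough illustration of the quadratic-penalty idea described in the abstract (the notation below is illustrative and not taken from the paper), one way to write such a penalized objective is to assume the convex constraints take the form $g_i(\lambda^{\pi_\theta}) \le 0$ on the occupancy measure $\lambda^{\pi_\theta}$ induced by the parameterized policy $\pi_\theta$, with entropy functional $\mathcal{H}$ and penalty weight $\beta > 0$:

$$\max_{\theta}\;\; \mathcal{H}\!\big(\lambda^{\pi_\theta}\big) \;-\; \frac{\beta}{2}\sum_{i}\big[g_i(\lambda^{\pi_\theta})\big]_{+}^{2}, \qquad [x]_{+} := \max(x, 0).$$

Under these assumptions, one plausible pseudo-reward construction (again a sketch, not necessarily the paper's exact one) is the derivative of the penalized objective with respect to the occupancy measure, evaluated at the current policy,

$$r_{\beta}(s,a) \;=\; \frac{\partial}{\partial \lambda(s,a)}\Big[\mathcal{H}(\lambda) - \tfrac{\beta}{2}\sum_{i}\big[g_i(\lambda)\big]_{+}^{2}\Big]\Big|_{\lambda = \lambda^{\pi_\theta}},$$

after which the classical Policy Gradient Theorem can be applied with $r_{\beta}$ in place of a fixed extrinsic reward.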