Performative Scenario Optimization
2026-03-31 • Computer Science and Game Theory
AI summary
The authors present a new way to solve problems where decisions influence the data they depend on, creating a feedback loop. They define special solutions, called performative solutions, that act like stable points, and prove these exist using Kakutani's fixed-point theorem. To find these solutions without knowing the full environment, they develop a model-free method that alternates between generating data and optimizing decisions, and prove it reliably converges to the correct solution under reasonable conditions. As a real-world test, they apply their framework to AI safety, building guardrails against language model misuse and showing that the guardrail and the adversarial prompt distribution it induces converge to a stable equilibrium.
performative optimization, chance constraints, decision-dependent data, Kakutani fixed-point theorem, scenario optimization, model-free methods, stochastic fixed-point iteration, Large Language Models, AI safety, adversarial prompts
Authors
Quanyan Zhu, Zhengye Han
Abstract
This paper introduces a performative scenario optimization framework for decision-dependent chance-constrained problems. Unlike classical stochastic optimization, we account for the feedback loop where decisions actively shape the underlying data-generating process. We define performative solutions as self-consistent equilibria and establish their existence using Kakutani's fixed-point theorem. To ensure computational tractability without requiring an explicit model of the environment, we propose a model-free, scenario-based approximation that alternates between data generation and optimization. Under mild regularity conditions, we prove that a stochastic fixed-point iteration, equipped with a logarithmic sample size schedule, converges almost surely to the unique performative solution. The effectiveness of the proposed framework is demonstrated through an emerging AI safety application: deploying performative guardrails against Large Language Model (LLM) jailbreaks. Numerical results confirm the co-evolution and convergence of the guardrail classifier and the induced adversarial prompt distribution to a stable equilibrium.
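The alternating scheme described in the abstract can be illustrated with a minimal sketch. Everything here is a toy construction, not the authors' implementation: the decision-dependent distribution `induced_samples` (a Gaussian whose mean shifts with the deployed decision), the scenario program (minimize a scalar decision subject to the sampled constraints), and the constant `20` in the logarithmic sample-size schedule are all assumptions made for illustration.

```python
import math
import random

def induced_samples(x, n, rng):
    # Decision-dependent data: deploying decision x shifts the
    # environment. Assumed toy model: Gaussian with mean 0.3*x + 1.0.
    return [rng.gauss(0.3 * x + 1.0, 0.2) for _ in range(n)]

def scenario_solve(samples):
    # Scenario approximation of the chance constraint P(x >= xi) >= 1 - eps:
    # enforce x >= xi for every drawn scenario while minimizing x,
    # so the optimizer is simply the largest sampled scenario.
    return max(samples)

def performative_iteration(T=200, seed=0):
    # Stochastic fixed-point iteration: alternate between sampling from
    # the distribution induced by the current decision and re-solving
    # the scenario program, with a logarithmically growing sample size.
    rng = random.Random(seed)
    x = 0.0
    for t in range(T):
        n_t = max(1, math.ceil(20 * math.log(t + 2)))  # log sample schedule
        x = scenario_solve(induced_samples(x, n_t, rng))
    return x

x_star = performative_iteration()
# At a performative solution, re-deploying x_star (approximately)
# reproduces x_star: the decision is self-consistent under the
# distribution it induces.
```

Because the induced mean shifts by only 0.3 per unit of `x`, the data-generation map is a contraction and the iteration settles near a single self-consistent decision, mirroring the uniqueness and almost-sure convergence claimed in the abstract.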