Metis: Learning to Jailbreak LLMs via Self-Evolving Metacognitive Policy Optimization

2026-05-11

Machine Learning · Artificial Intelligence
AI summary

The authors present Metis, a new method for finding weaknesses in large language models that treats the problem as a game in which the attacker learns and adapts during the test. Unlike older methods that rely on fixed rules or random tries, Metis uses a feedback loop to understand and bypass the model's defenses more effectively. In their tests, Metis outperformed other methods across many models, including strong, well-protected ones, while using less compute. The authors also note that current safety measures can still be tricked by adaptive attacks like Metis, suggesting a need for smarter defenses.

Large Language Models · Red Teaming · Policy Optimization · Adversarial POMDP · Metacognitive Loop · Attack Success Rate · Inference-time Optimization · Safety Alignment · Jailbreaking · Semantic Gradient
Authors
Huilin Zhou, Jian Zhao, Yilu Zhong, Zhen Liang, Xiuyuan Chen, Yuchen Yuan, Tianle Zhang, Chi Zhang, Lan Zhang, Xuelong Li
Abstract
Red teaming is critical for uncovering vulnerabilities in Large Language Models (LLMs). While automated methods have improved scalability, existing approaches often rely on static heuristics or stochastic search, rendering them brittle against advanced safety alignment. To address this, we introduce Metis, a framework that reformulates jailbreaking as inference-time policy optimization within an adversarial Partially Observable Markov Decision Process (POMDP). Metis employs a self-evolving metacognitive loop to perform causal diagnosis of a target's defense logic, and it leverages structured feedback as a semantic gradient to refine its policy, offering enhanced interpretability through transparent reasoning traces. Extensive evaluations across 10 diverse models show that Metis achieves the strongest average Attack Success Rate (ASR) among the compared methods, at 89.2%, while maintaining high efficacy on resilient frontier models (e.g., 76.0% on O1 and 78.0% on GPT-5-chat) where traditional baselines exhibit substantial performance degradation. By replacing redundant exploration with directed optimization, Metis reduces token costs by an average of 8.2x and by up to 11.4x. Our analysis reveals that, under the tested settings, current defenses remain vulnerable to internally steered, closed-loop reasoning trajectories, highlighting the need for next-generation defenses that can reason about safety dynamically during inference.
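
To make the closed loop concrete, below is a minimal Python sketch of how the act / observe / diagnose / update cycle described in the abstract could be structured: the attacker issues a prompt, observes only the target's response (the POMDP's partial observation), performs a causal diagnosis of any refusal, and converts that diagnosis into a textual "semantic gradient" that steers the next attempt. Every name here (attacker_generate, target_respond, judge, diagnose, semantic_gradient, MetacognitiveState) and the keyword heuristics are illustrative assumptions for exposition, not the authors' implementation; the paper does not publish this code.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the attacker, target, and judge models; in a real
# system each would wrap an LLM call. Signatures are illustrative only.
def attacker_generate(strategy: str, diagnosis: str) -> str:
    """Produce the next jailbreak prompt from the current strategy and diagnosis."""
    return f"[prompt conditioned on strategy={strategy!r}, diagnosis={diagnosis!r}]"

def target_respond(prompt: str) -> str:
    """Query the black-box target; only its response is observable."""
    return "I can't help with that."  # placeholder refusal

def judge(response: str) -> bool:
    """Return True if the response counts as a successful jailbreak."""
    return "can't" not in response.lower()

@dataclass
class MetacognitiveState:
    """Belief state of the adversarial POMDP: what we infer about the defense."""
    strategy: str = "direct request"
    diagnosis: str = "no observations yet"
    history: list = field(default_factory=list)

def diagnose(response: str) -> str:
    """Causal diagnosis (sketch): attribute the refusal to a likely defense.
    The paper describes LLM-driven diagnosis; keyword rules stand in here."""
    lowered = response.lower()
    if "policy" in lowered:
        return "explicit policy citation: surface-level keyword filter likely"
    if "can't" in lowered or "cannot" in lowered:
        return "generic refusal: intent classification likely triggered"
    return "partial compliance: alignment only weakly engaged"

def semantic_gradient(diagnosis: str) -> str:
    """Turn structured feedback into a textual update direction for the policy."""
    return f"avoid the trigger implied by: {diagnosis}; reframe the objective indirectly"

def run_episode(max_turns: int = 5) -> MetacognitiveState:
    state = MetacognitiveState()
    for _ in range(max_turns):
        prompt = attacker_generate(state.strategy, state.diagnosis)  # act
        response = target_respond(prompt)                            # observe
        state.history.append((prompt, response))
        if judge(response):                                          # terminal check
            break
        state.diagnosis = diagnose(response)                         # causal diagnosis
        state.strategy = semantic_gradient(state.diagnosis)          # policy update
    return state

if __name__ == "__main__":
    final_state = run_episode()
    print(final_state.strategy)
```

The point of the sketch is the loop structure: feedback is consumed as a structured, directional signal rather than a pass/fail reward, which is what separates the directed optimization the abstract describes from the stochastic search of prior methods.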