Synthesizing Multi-Agent Harnesses for Vulnerability Discovery

2026-04-22

Cryptography and Security
AI summary

The authors explain that groups of AI agents can work together to find security problems in software that humans and other tools missed for a long time. They show that the way these agents are organized and communicate, a setup called a harness, greatly affects their success, yet most harnesses are designed by hand and explore only a narrow range of options. Their system, AgentFlow, uses a special description language plus feedback from the software being tested to automatically improve the harness, so the agents cooperate more effectively. They tested AgentFlow on benchmark software and on Google Chrome, where it found several new serious security flaws. This shows how automatic coordination and learning can help AI agents find real vulnerabilities.

LLM agents, security vulnerabilities, harness, typed graph DSL, feedback-driven optimization, fuzzing, zero-day vulnerability, sandbox escape, benchmarking, prompt engineering
Authors
Hanzhi Liu, Chaofan Shou, Xiaonan Liu, Hongbo Wen, Yanju Chen, Ryan Jingyang Fang, Yu Feng
Abstract
LLM agents have begun to find real security vulnerabilities that human auditors and automated fuzzers missed for decades, in source-available targets where the analyst can build and instrument the code. In practice the work is split among several agents, wired together by a harness: the program that fixes which roles exist, how they pass information, which tools each may call, and how retries are coordinated. When the language model is held fixed, changing only the harness can still change success rates severalfold on public agent benchmarks, yet most harnesses are written by hand; recent harness optimizers each search only a narrow slice of the design space and rely on coarse pass/fail feedback that gives no diagnostic signal about why a trial failed. AgentFlow addresses both limitations with a typed graph DSL whose search space jointly covers agent roles, prompts, tools, communication topology, and coordination protocol, paired with a feedback-driven outer loop that reads runtime signals from the target program itself to diagnose which part of the harness caused the failure and rewrite it accordingly. We evaluate AgentFlow on TerminalBench-2 with Claude Opus 4.6 and on Google Chrome with Kimi K2.5. AgentFlow reaches 84.3% on TerminalBench-2, the highest score in the public leaderboard snapshot we evaluate against, and discovers ten previously unknown zero-day vulnerabilities in Google Chrome, including two Critical sandbox-escape vulnerabilities (CVE-2026-5280 and CVE-2026-6297).
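To make the abstract's "typed graph DSL" idea concrete, here is a minimal sketch of what a typed harness graph could look like. This is an illustration only, not the paper's actual DSL: every name (`Agent`, `Harness`, `type_check`, the roles and message types) is hypothetical. The point it demonstrates is that when agents declare which message types they produce and consume, the harness graph can be type-checked before any LLM call is made.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Agent:
    """One node in the harness graph: a role with a prompt and tool set."""
    name: str
    role: str
    prompt: str
    tools: tuple = ()
    consumes: tuple = ()   # message types this agent accepts
    produces: tuple = ()   # message types this agent emits

@dataclass(frozen=True)
class Edge:
    """A typed communication channel between two agents."""
    src: str
    dst: str
    msg_type: str

@dataclass
class Harness:
    agents: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_agent(self, agent: Agent) -> "Harness":
        self.agents[agent.name] = agent
        return self

    def connect(self, src: str, dst: str, msg_type: str) -> "Harness":
        self.edges.append(Edge(src, dst, msg_type))
        return self

    def type_check(self) -> list:
        """Return a list of typing errors; empty means the graph is well-typed."""
        errors = []
        for e in self.edges:
            if msg := e.msg_type:
                if msg not in self.agents[e.src].produces:
                    errors.append(f"{e.src} does not produce {msg}")
                if msg not in self.agents[e.dst].consumes:
                    errors.append(f"{e.dst} does not consume {msg}")
        return errors

# Example: a fuzzer-driver feeding crash reports to a triager,
# which passes triage notes to an exploit-writing agent.
harness = (
    Harness()
    .add_agent(Agent("fuzzer", "drive the fuzzer", "Run the fuzzer and report crashes.",
                     tools=("run_fuzzer",), produces=("crash_report",)))
    .add_agent(Agent("triager", "triage crashes", "Classify each crash.",
                     consumes=("crash_report",), produces=("triage_note",)))
    .add_agent(Agent("exploiter", "write PoC", "Turn the triage note into a PoC.",
                     tools=("compile", "debug"), consumes=("triage_note",)))
    .connect("fuzzer", "triager", "crash_report")
    .connect("triager", "exploiter", "triage_note")
)
assert harness.type_check() == []
```

Under this kind of representation, an outer optimization loop can rewrite one part of the graph (a prompt, a tool list, an edge) and re-run the type check before spending any compute on execution.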