CliffSearch: Structured Agentic Co-Evolution over Theory and Code for Scientific Algorithm Discovery
2026-04-01 • Machine Learning
Machine Learning • Artificial Intelligence
AI summary
The authors developed CliffSearch, a system that searches for scientific algorithms by running an evolutionary loop in which large language models (LLMs) act as the agents. Unlike methods that operate on code alone, it reviews both the theory and the code of each candidate for correctness and originality before deciding what to keep. Mutation is split into two pathways: one that explores new ideas imported from adjacent fields, and one that corrects errors based on reviewer feedback. Tested on several algorithm-discovery problems, the method produced interpretable, well-vetted solutions rather than simply generating many trial variants.
evolutionary algorithms, large language models (LLMs), algorithm discovery, correctness gating, originality, mutation operators, transformers, optimizer discovery, benchmark metrics, nanoGPT
Authors
Youssef Mroueh, Carlos Fonseca, Brian Belgodere, David Cox
Abstract
Scientific algorithm discovery is iterative: hypotheses are proposed, implemented, stress-tested, and revised. Current LLM-guided search systems accelerate proposal generation, but often under-represent scientific structure by optimizing code-only artifacts with weak correctness/originality gating. We present CliffSearch, an agentic evolutionary framework in which the core evolution operators (pair selection, crossover, mutation, and review) are implemented as LLM agents, and the loop is designed around three principles: (1) each node is a structured scientific artifact, instantiated in either theory+code or code_only mode, (2) reviewer judgments of correctness and originality are first-class selection gates alongside optimization of the benchmark metric of interest, and (3) mutation is split into exploration and correction pathways with distinct objectives. Exploration mutation imports ideas from adjacent scientific domains to increase novelty, while correction mutation performs targeted evidence-guided repair using reviewer signals over theory, code, benchmark results, and runtime errors. We illustrate the framework on three benchmark-grounded studies: transformer hyper-connection evolution, optimizer discovery on a fixed nanoGPT stack, and a smaller native-optimizer ablation. Across these settings, the same loop supports explicit metric direction, reproducible persistence, and reviewer-gated comparison of discoveries under controlled search conditions. The result is a discovery workflow that prioritizes scientific interpretability and correctness while optimizing task metrics under controlled novelty constraints, rather than maximizing candidate throughput alone. Full run artifacts, interactive visualizations, and exported best nodes for the reported studies are available at https://cliffsearch.ai.
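The abstract compresses the whole loop design into one paragraph; the Python sketch below makes the control flow concrete. It is a minimal illustration under stated assumptions, not the authors' implementation: every `llm_*` operator is a stub standing in for an LLM-agent call, and all names (`Node`, `run_benchmark`, `evolve`) are hypothetical.

```python
import random
from dataclasses import dataclass


@dataclass
class Node:
    """A structured scientific artifact: theory plus code, or code only."""
    theory: str                  # empty string in code_only mode
    code: str
    metric: float | None = None  # benchmark score, filled in after evaluation


# --- stubbed LLM-agent operators (a real system would prompt an LLM) -------
def llm_select_pair(archive):
    """Pair-selection agent (stub: uniform random choice)."""
    return random.sample(archive, 2)


def llm_crossover(a, b):
    """Crossover agent (stub: combine one parent's theory with the other's code)."""
    return Node(theory=a.theory, code=b.code)


def llm_mutate_explore(node):
    """Exploration mutation: import an idea from an adjacent domain (stubbed as a tag)."""
    idea = f"idea-{random.randint(0, 999)}"
    return Node(node.theory + f" + {idea}", node.code + f"  # {idea}")


def llm_mutate_correct(node, evidence):
    """Correction mutation: evidence-guided repair from reviewer/runtime signals."""
    return Node(node.theory, node.code + "  # repaired")


def llm_review(node, archive):
    """Reviewer gates: correctness and originality are checked before persistence."""
    correct = True  # stub; the real reviewer inspects theory, code, and results
    original = all(node.code != other.code for other in archive)
    return correct and original


def run_benchmark(node):
    """Stub evaluation: returns (metric, runtime errors); errors occur 20% of the time."""
    errors = ["runtime error"] if random.random() < 0.2 else []
    return random.random(), errors


# --- the loop: correctness/originality gates first, metric second ----------
def evolve(archive, steps=10, maximize=True):
    sign = 1.0 if maximize else -1.0         # explicit metric direction
    for node in archive:                     # score the seed population
        if node.metric is None:
            node.metric, _ = run_benchmark(node)
    for _ in range(steps):
        a, b = llm_select_pair(archive)
        child = llm_mutate_explore(llm_crossover(a, b))
        child.metric, errors = run_benchmark(child)
        if errors:                           # correction pathway on failure
            child = llm_mutate_correct(child, errors)
            child.metric, errors = run_benchmark(child)
        if not errors and llm_review(child, archive):
            archive.append(child)            # reviewer-gated persistence
    return max(archive, key=lambda n: sign * n.metric)


if __name__ == "__main__":
    seeds = [Node("baseline theory", "baseline code"),
             Node("", "code-only variant")]  # code_only mode: empty theory
    best = evolve(seeds, steps=25)
    print(f"best metric: {best.metric:.3f}  theory: {best.theory!r}")
```

Note how the gates are ordered: a candidate that fails correctness or originality review never competes on the benchmark metric, which is the abstract's point about selection gates being first-class rather than an afterthought.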