Mediocrity is the Key for LLM-as-a-Judge Anchor Selection

2026-03-17

Computation and Language
AI summary

The authors looked at how the choice of a reference model, called an anchor, affects comparisons of AI models under the 'LLM-as-a-judge' method. They tested 22 different anchors and found that a poor anchor, such as the very best or very worst model, makes the resulting rankings agree less with human judgments. They also showed that the choice of anchor matters as much as the choice of judge model. Finally, they suggest how to pick better anchors and how large evaluations need to be to give trustworthy results.

LLM-as-a-judge · anchor model · pairwise comparison · benchmark evaluation · Arena-Hard dataset · model ranking · evaluation reliability · power analysis · human correlation
Authors
Shachar Don-Yehiya, Asaf Yehudai, Leshem Choshen, Omri Abend
Abstract
The "LLM-as-a-judge" paradigm has become a standard method for evaluating open-ended generation. To address the quadratic scalability costs of pairwise comparisons, popular benchmarks like Arena-Hard and AlpacaEval compare all models against a single anchor. However, despite its widespread use, the impact of anchor selection on the reliability of the results remains largely unexplored. In this work, we systematically investigate the effect of anchor selection by evaluating 22 different anchors on the Arena-Hard-v2.0 dataset. We find that the choice of anchor is critical: a poor anchor can dramatically reduce correlation with human rankings. We identify that common anchor choices (best-performing and worst-performing models) make poor anchors. Because these extreme anchors are consistently better or worse than all other models, they are seldom indicative of the relative ranking of the models. We further quantify the effect size of anchor selection, showing it is comparable to that of the choice of judge model. We conclude with actionable recommendations. First, we conduct a power analysis and compute sufficient benchmark sizes for anchor-based evaluation, finding that standard benchmark sizes are insufficient for pairwise evaluation and fail to reliably distinguish between competitive models. Second, we provide guidelines for selecting informative anchors to ensure reliable and efficient evaluation practices.
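
To make the setup concrete, below is a minimal sketch (not the authors' code) of anchor-based evaluation and of a simple power calculation. Each candidate model is compared against a single anchor by a judge, the resulting win rates induce a ranking, and that ranking is checked against a human ranking with Spearman correlation. The model names, verdict counts, "human" ranking, and win-rate gap are illustrative placeholders, and the sample-size formula is a standard two-proportion approximation, not necessarily the analysis used in the paper.

```python
import numpy as np
from scipy.stats import norm, spearmanr

# Hypothetical judge verdicts: wins of each candidate over the anchor
# out of n_prompts pairwise comparisons (ties ignored for simplicity).
n_prompts = 500
wins_vs_anchor = {
    "model_a": 410,
    "model_b": 335,
    "model_c": 260,
    "model_d": 120,
}

# Win rate against the anchor is the benchmark score; sorting by it
# gives the anchor-based ranking.
win_rates = {m: w / n_prompts for m, w in wins_vs_anchor.items()}
anchor_ranking = sorted(win_rates, key=win_rates.get, reverse=True)

# Hypothetical human ranking to validate against (best to worst).
human_ranking = ["model_a", "model_c", "model_b", "model_d"]

# Spearman correlation between the two rank orders.
models = list(win_rates)
rho, _ = spearmanr(
    [anchor_ranking.index(m) for m in models],
    [human_ranking.index(m) for m in models],
)
print(f"Spearman correlation with human ranking: {rho:.2f}")

# Rough power analysis: prompts needed to distinguish two competitive
# models whose win rates against the anchor differ slightly, using a
# two-proportion z-test approximation at significance alpha and power
# 1 - beta. Numbers are illustrative, not the paper's results.
def required_prompts(p1, p2, alpha=0.05, power=0.8):
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * np.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * np.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return int(np.ceil(num / (p1 - p2) ** 2))

print("Prompts needed for a 3-point win-rate gap:",
      required_prompts(0.50, 0.53))
```

Under these assumed numbers, separating two models whose win rates against the anchor differ by three points requires several thousand prompts, which illustrates why standard benchmark sizes can be too small to reliably distinguish competitive models.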