TeleResilienceBench: Quantifying Resilience for LLM Reasoning in Telecommunications

2026-05-11

Machine Learning, Software Engineering
AI summary

The authors created a benchmark called TeleResilienceBench to test how well language models can keep reasoning correctly when handed partly wrong or incomplete reasoning from earlier steps in telecom tasks. They measure this ability with a new score, the Correct Flip Rate (CFR), which checks whether a model can continue a flawed reasoning trace and still reach the correct answer. Testing eight models, they found that even the best one corrected errors only about 29% of the time, and bigger models didn't always do better. They also observed that current telecom benchmarks emphasize factual knowledge more than genuine reasoning skill.

large language models, telecommunications, reasoning resilience, benchmark, GSMA Open-Telco, Correct Flip Rate (CFR), Qwen3.5, Nemotron-3, task accuracy, numerical evaluation
Authors
Pranshav Gajjar, Emmanuel Ojo, Vijay K Shah
Abstract
Deploying large language models in telecommunications requires more than task accuracy. In realistic workflows, a model may inherit partially completed reasoning from a prior step, an upstream agent, or its own earlier generation, and must continue that reasoning even when it is already going wrong. We introduce TeleResilienceBench, a benchmark that quantifies this capability, which we term reasoning resilience, across seven telecom sub-domains drawn from the GSMA Open-Telco LLM suite. Instances are constructed by collecting failures from a weak generator model, truncating the flawed reasoning trace at its midpoint, and asking a target model to continue and correct it. We propose the Correct Flip Rate (CFR) as a direct measure of successful recovery and evaluate eight models spanning the Qwen3.5, Gemma4, and Nemotron-3 families. Our results show that even the strongest model achieves a macro-average CFR of only 29.1%, and scale does not reliably improve resilience within families. Nemotron-3-nano 4b outperforms all Qwen3.5 variants, including the 27b model, and leads the auxiliary TeleMath numerical evaluation at 23.4% CR%, offering the best resilience-to-cost ratio in the set. A difficulty-stratified analysis further reveals that existing telecom benchmark difficulty labels reflect factual specificity rather than reasoning depth, suggesting that current evaluations measure knowledge coverage more than reasoning ability.
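As a rough illustration of the metric described above: since every benchmark instance starts from a flawed reasoning prefix (i.e., the inherited trace would lead to a wrong answer), the Correct Flip Rate reduces to the fraction of instances the target model flips to a correct final answer, and the macro average is the unweighted mean over sub-domains. The record structure, field names, and exact-match scoring below are assumptions for the sketch, not the authors' implementation.

```python
# Hypothetical sketch of Correct Flip Rate (CFR) scoring.
# Assumption: all instances begin from an incorrect reasoning trace,
# so CFR = fraction of instances recovered to a correct final answer.
from dataclasses import dataclass

@dataclass
class Instance:
    continued_answer: str  # target model's final answer after continuing the flawed trace
    gold_answer: str       # reference answer

def correct_flip_rate(instances):
    """Fraction of flawed-prefix instances flipped to a correct answer."""
    if not instances:
        return 0.0
    # Exact-match scoring is a simplifying assumption for this sketch.
    flips = sum(inst.continued_answer.strip() == inst.gold_answer.strip()
                for inst in instances)
    return flips / len(instances)

def macro_average_cfr(instances_by_domain):
    """Unweighted mean of per-domain CFRs (macro average over sub-domains)."""
    return sum(correct_flip_rate(v)
               for v in instances_by_domain.values()) / len(instances_by_domain)
```

A macro average weights each of the seven sub-domains equally regardless of instance count, so strong recovery in one large sub-domain cannot mask weak recovery elsewhere.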