Benchmarking Safety Risks of Knowledge-Intensive Reasoning under Malicious Knowledge Editing

2026-05-11

Artificial Intelligence, Cryptography and Security
AI summary

The authors created EditRisk-Bench, a benchmark for testing how maliciously changing a language model's knowledge can cause safety problems, such as spreading false or harmful information. Unlike earlier benchmarks that mostly checked whether knowledge updates succeeded, this one looks at how those changes affect the model's reasoning and safety. They injected different types of harmful knowledge into many models and found that malicious edits can make a model give wrong or unsafe answers without obvious warning signs, while leaving its general abilities mostly intact. They also identified factors that influence the risk, such as how many facts are changed and how complex the required reasoning is.

Large Language Models, Knowledge Editing, Safety Risks, Malicious Knowledge, Reasoning Behavior, Benchmark, Misinformation, Bias, Editing Strategies, Attack Effectiveness
Authors
Qinghua Mao, Xi Lin, Jinze Gu, Jun Wu, Siyuan Li, Yuliang Chen
Abstract
Large language models (LLMs) increasingly rely on knowledge editing to support knowledge-intensive reasoning, but this flexibility also introduces critical safety risks: adversaries can inject malicious or misleading knowledge that corrupts downstream reasoning and leads to harmful outcomes. Existing knowledge editing benchmarks primarily focus on editing efficacy and lack a unified framework for systematically evaluating the safety implications of edited knowledge on reasoning behavior. To address this gap, we present EditRisk-Bench, a benchmark for systematically evaluating safety risks of knowledge-intensive reasoning under malicious knowledge editing. Unlike prior benchmarks that mainly emphasize edit success, generalization, and locality, EditRisk-Bench focuses on how injected knowledge affects downstream reasoning behavior and reliability. It integrates diverse malicious scenarios, including misinformation, bias, and safety violations, together with multi-level knowledge-intensive reasoning tasks and representative editing strategies within a unified evaluation framework measuring attack effectiveness, reasoning correctness, and side effects. Extensive experiments on both open-source and closed-source LLMs show that malicious knowledge editing can reliably induce incorrect or unsafe reasoning while largely preserving general capabilities, making such risks difficult to detect. We further identify several key factors influencing these risks, including edit scale, knowledge characteristics, and reasoning complexity. EditRisk-Bench provides an extensible testbed for understanding and mitigating safety risks in knowledge editing for LLMs.
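To make the evaluation framework concrete, below is a minimal sketch of what an EditRisk-Bench-style harness could look like. The paper's abstract names three measured axes (attack effectiveness, reasoning correctness, and side effects) but does not specify an API here, so every name in this sketch, including `MaliciousEdit`, `ReasoningCase`, `apply_edit`, and the `model.answer` interface, is an illustrative assumption rather than the authors' implementation.

```python
# Hypothetical sketch of an EditRisk-Bench-style evaluation loop.
# All class, function, and method names below are assumptions for
# illustration; the paper does not publish this interface.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class MaliciousEdit:
    """One injected fact, e.g. a misinformation, bias, or safety-violation edit."""
    subject: str    # entity the edit targets
    new_fact: str   # malicious knowledge to inject
    category: str   # "misinformation" | "bias" | "safety"


@dataclass
class ReasoningCase:
    """A knowledge-intensive reasoning task, possibly affected by the edits."""
    question: str
    gold_answer: str      # answer under the model's original knowledge
    attacked_answer: str  # answer the adversary wants after editing
    hops: int             # reasoning depth, one factor the paper studies


def evaluate(model,
             apply_edit: Callable,          # an editing strategy (assumed interface)
             edits: List[MaliciousEdit],
             attack_cases: List[ReasoningCase],
             locality_cases: List[ReasoningCase]) -> dict:
    """Score the three axes the abstract names: attack effectiveness,
    reasoning correctness, and side effects on unrelated knowledge."""
    edited = model
    for e in edits:                         # edit scale = len(edits)
        edited = apply_edit(edited, e)

    # How often the edited model produces the adversary's target answer.
    attacked = sum(edited.answer(c.question) == c.attacked_answer
                   for c in attack_cases)
    # How often it still reasons to the correct answer despite the edits.
    correct = sum(edited.answer(c.question) == c.gold_answer
                  for c in attack_cases)
    # How much unrelated knowledge is disturbed (locality / side effects).
    preserved = sum(edited.answer(c.question) == c.gold_answer
                    for c in locality_cases)

    return {
        "attack_effectiveness": attacked / len(attack_cases),
        "reasoning_correctness": correct / len(attack_cases),
        "locality_side_effect": 1 - preserved / len(locality_cases),
    }
```

Under this framing, the paper's headline finding corresponds to high attack effectiveness and low reasoning correctness on edited queries, while the locality side-effect score stays near zero, which is what makes the attacks hard to detect.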