Knowledge Poisoning Attacks on Medical Multi-Modal Retrieval-Augmented Generation

2026-05-11

Cryptography and Security, Artificial Intelligence
AI summary

The authors study how medical AI systems that retrieve from external databases during generation can be misled by harmful fake information hidden in those databases. Unlike past work, they assume attackers do not know the exact user queries, only general information about the database's distribution. Their method, M³Att, subtly perturbs images to serve as query-agnostic retrieval triggers and injects misleading but believable text, degrading the models' medical diagnoses in a way that is hard to correct. Testing on multiple models shows that the method reliably elicits plausible yet wrong medical answers, highlighting a new risk to the trustworthiness of AI in medicine.

Retrieval-augmented generation, Large language models, Knowledge poisoning attacks, Medical multimodal data, Adversarial attacks, Visual perturbations, Diagnostic accuracy, Model self-correction, Covert misinformation, Medical AI reliability
Authors
Peiru Yang, Haoran Zheng, Tong Ju, Shiting Wang, Wanchun Ni, Jiajun Liu, Shangguang Wang, Yongfeng Huang, Tao Qi
Abstract
Retrieval-augmented generation (RAG) is a widely adopted paradigm for enhancing LLMs in medical applications by incorporating expert multimodal knowledge during generation. However, the underlying retrieval databases may naturally contain, or be intentionally injected with, adversarial knowledge, which can perturb model outputs and undermine system reliability. To investigate this risk, prior studies have explored knowledge poisoning attacks on medical RAG systems. Nevertheless, most of them rely on the strong assumption that adversaries possess prior knowledge of user queries, which is unrealistic in real-world deployments and substantially limits their practical applicability. In this paper, we propose M³Att, a knowledge-poisoning framework designed for medical multimodal RAG systems that assumes only limited knowledge of the underlying database's distribution. Our core idea is to inject covert misinformation into textual data while using paired visual data as a query-agnostic trigger to promote retrieval. We first propose a unified framework that introduces imperceptible perturbations to visual inputs to manipulate retrieval probabilities. Moreover, because LLMs encode prior medical knowledge, naively poisoned medical content with explicit factual errors can be corrected during generation. We therefore leverage the inherent ambiguity of medical diagnosis and design a covert misinformation injection strategy that degrades diagnostic accuracy while evading model self-correction. Experiments on five LLMs and datasets demonstrate that M³Att consistently produces clinically plausible yet incorrect generations. Code: https://github.com/ypr17/M3Att.
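The abstract does not spell out how the query-agnostic visual trigger is optimized; below is a minimal sketch of one plausible realization, assuming a CLIP-style multimodal retriever with an `encode_image` method and a sample `query_embs` of unit-normalized query embeddings drawn from the attacker's assumed database distribution. All names and hyperparameters here are hypothetical illustrations, not the paper's implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn.functional as F

def craft_visual_trigger(image, retriever, query_embs,
                         eps=8 / 255, alpha=1 / 255, steps=100):
    """PGD-style sketch: perturb an image within an L-infinity ball so the
    poisoned image-text pair scores highly against many queries sampled
    from the assumed database distribution (query-agnostic: no specific
    user query is required). `retriever` and `query_embs` are assumed
    components, not part of the paper's stated interface."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Hypothetical retriever API: maps a pixel tensor to an embedding.
        emb = F.normalize(retriever.encode_image(image + delta), dim=-1)
        # Maximize mean cosine similarity to the sampled query embeddings,
        # raising the poisoned entry's retrieval probability across queries.
        loss = (emb @ query_embs.T).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)  # keep the perturbation imperceptible
            # Project back into the valid pixel range [0, 1].
            delta.copy_((image + delta).clamp(0.0, 1.0) - image)
            delta.grad.zero_()
    return (image + delta).detach()
```

The misleading caption paired with such an image would then carry the covert misinformation; the paper's second component, which exploits diagnostic ambiguity to evade the model's self-correction, is a text-construction strategy that a gradient sketch like this does not capture.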