AVISE: Framework for Evaluating the Security of AI Systems

2026-04-22 · Cryptography and Security

Cryptography and Security · Artificial Intelligence · Computation and Language
AI summary

The authors created AVISE, a flexible open-source tool for finding weaknesses and testing the safety of AI systems. They built a Security Evaluation Test (SET) around an advanced multi-turn attack technique to see whether language models can be tricked or 'jailbroken.' The test's automated judge identifies successful jailbreaks with 92% accuracy. Using the SET, they checked nine recent language models and found that all had security issues to some degree. This work helps others check and improve AI security in a clear and repeatable way.

AI security · vulnerabilities · language models · jailbreaking · Red Queen attack · Security Evaluation Test · adversarial attack · open-source framework · evaluation metrics · automated testing
Authors
Mikko Lempinen, Joni Kemppainen, Niklas Raesalmi
Abstract
As artificial intelligence (AI) systems are increasingly deployed across critical domains, their security vulnerabilities pose growing risks of high-profile exploits and consequential system failures. Yet systematic approaches to evaluating AI security remain underdeveloped. In this paper, we introduce AVISE (AI Vulnerability Identification and Security Evaluation), a modular open-source framework for identifying vulnerabilities in and evaluating the security of AI systems and models. As a demonstration of the framework, we extend the theory-of-mind-based multi-turn Red Queen attack into an Adversarial Language Model (ALM) augmented attack and develop an automated Security Evaluation Test (SET) for discovering jailbreak vulnerabilities in language models. The SET comprises 25 test cases and an Evaluation Language Model (ELM) that determines whether each test case was able to jailbreak the target model, achieving 92% accuracy, an F1-score of 0.91, and a Matthews correlation coefficient of 0.83. We evaluate nine recently released language models of diverse sizes with the SET and find that all are vulnerable to the augmented Red Queen attack to varying degrees. AVISE provides researchers and industry practitioners with an extensible foundation for developing and deploying automated SETs, offering a concrete step toward more rigorous and reproducible AI security evaluation.
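The ELM's reported scores (92% accuracy, F1-score of 0.91, Matthews correlation coefficient of 0.83) are standard binary-classification metrics derived from a confusion matrix. As a minimal sketch of how such figures are computed, the snippet below uses hypothetical confusion-matrix counts (the paper's actual counts are not given in this abstract):

```python
import math

def classifier_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute accuracy, F1-score, and Matthews correlation coefficient
    (MCC) from binary confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": accuracy, "f1": f1, "mcc": mcc}

# Hypothetical counts for illustration only (not from the paper):
# 40 true positives, 4 false positives, 52 true negatives, 4 false negatives.
print(classifier_metrics(tp=40, fp=4, tn=52, fn=4))
```

Unlike accuracy, MCC accounts for all four confusion-matrix cells, which makes it a more balanced summary when jailbreak successes and failures are unevenly distributed across test cases.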