Towards Automated Pentesting with Large Language Models
2026-04-13 • Cryptography and Security
AI summary
The authors created RedShell, a tool that uses specially trained large language models to help ethical hackers write PowerShell code that tests security weaknesses in Windows systems. They trained RedShell on real malicious code and improved it with extra examples, making it good at writing valid and relevant test scripts. Their tests show RedShell's code is very similar to expert-written examples and runs reliably in realistic environments. This work explores how AI can aid automated security testing while preserving user privacy and keeping hardware requirements low.
Large Language Models, PowerShell, Pentesting, Malicious Code, Microsoft Windows Vulnerabilities, Syntactic Validity, Semantic Alignment, Edit Distance, Generative AI, Privacy Preservation
Authors
Ricardo Bessa, Rui Claro, João Trindade, João Lourenço
Abstract
Large Language Models (LLMs) are redefining offensive cybersecurity by enabling the generation of harmful code with minimal human intervention. While attackers take advantage of dark LLMs such as XXXGPT and WolfGPT to produce malicious code, ethical hackers can follow similar approaches to automate traditional pentesting workflows. In this work, we present RedShell, a privacy-preserving, hardware-efficient framework that leverages fine-tuned LLMs to assist pentesters in generating offensive PowerShell code targeting Microsoft Windows vulnerabilities. RedShell was trained on a malicious PowerShell dataset from the literature, which we further enhanced with manually curated code samples. Experiments show that our framework achieves over 90% syntactic validity in generated samples and strong semantic alignment with reference pentesting snippets, outperforming state-of-the-art counterparts on distance metrics such as edit distance (above 50% average code similarity). Additionally, functional experiments demonstrate the execution reliability of the snippets produced by RedShell in a testing scenario that mirrors real-world settings. This work surveys state-of-the-art research on Generative AI applied to malicious code generation and automated testing, highlighting the potential benefits that LLMs hold within controlled environments such as pentesting.
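The abstract reports "above 50% average code similarity" under edit distance but does not give the exact formula. A common way to turn edit distance into a similarity score in [0, 1] is normalized Levenshtein similarity; the sketch below illustrates that metric (the function names and the normalization by the longer string's length are assumptions, not necessarily the paper's exact setup):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))  # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,              # delete ca
                curr[j - 1] + 1,          # insert cb
                prev[j - 1] + (ca != cb)  # substitute (free if equal)
            ))
        prev = curr
    return prev[len(b)]

def code_similarity(generated: str, reference: str) -> float:
    """Normalized similarity: 1 - edit_distance / length of the longer snippet."""
    if not generated and not reference:
        return 1.0
    return 1.0 - levenshtein(generated, reference) / max(len(generated), len(reference))
```

Under this convention, "above 50% average code similarity" would mean the generated snippet differs from the reference in fewer than half of the longer snippet's character positions, averaged over the test set.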