Security researchers at ESET have uncovered PromptLock, the first known AI-powered ransomware that uses a large language model (LLM) to dynamically generate malicious Lua scripts. Written in Go, PromptLock targets Windows, macOS, and Linux systems using OpenAI's gpt-oss:20b model via the Ollama API. Rather than shipping the model itself, the malware reaches the LLM on a remote server through a proxy tunnel, allowing the payload to stay small, modular, and flexible across environments.
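ESET has not published PromptLock's client code, but the Ollama generate endpoint it reportedly talks to is publicly documented. The Go sketch below is a minimal, hypothetical illustration of that interaction; the endpoint address, prompt, and struct names are placeholders chosen for the example, not values recovered from the sample.

```go
// Minimal sketch of a call to Ollama's documented /api/generate endpoint.
// Hypothetical illustration only: the server address and prompt are
// placeholders, not values taken from PromptLock.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// In PromptLock's reported design, an endpoint like this would sit
	// behind a proxy tunnel to a remote Ollama server, not on localhost.
	endpoint := "http://127.0.0.1:11434/api/generate"

	body, err := json.Marshal(generateRequest{
		Model:  "gpt-oss:20b",
		Prompt: "Write a Lua function that lists the files in a directory.", // placeholder prompt
		Stream: false,                                                       // request a single JSON reply instead of a stream
	})
	if err != nil {
		panic(err)
	}

	resp, err := http.Post(endpoint, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response) // model-generated Lua source, ready to hand to an interpreter
}
```

Note that the exchange is ordinary JSON over HTTP: on the wire it looks like any other API call, which is one reason the network-level indicators discussed below matter more than file signatures.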
PromptLock uses hard-coded prompts to instruct the language model to produce scripts for key ransomware functions, including filesystem enumeration, inspection of target files, data exfiltration, and encryption. Notably, it relies on the SPECK cipher with a 128-bit key, a lightweight algorithm designed for constrained devices and a rare choice for ransomware, which more typically uses established ciphers such as AES or ChaCha20. The malware also contains a placeholder for a data-destruction capability, but that feature has not yet been implemented.

Malicious Lua scripts generated on the fly. Source: x.com/ESETresearch/status/1960365364300087724.
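The public write-ups identify the cipher only as "SPECK 128-bit"; the exact variant and mode of operation in the sample are not known. For reference, here is a sketch of the published SPECK128/128 block cipher (Beaulieu et al., 2013), 32 ARX rounds over two 64-bit words, which shows why it appeals to authors of small, dependency-free payloads. This is reference code for the public specification, not code recovered from PromptLock.

```go
// Reference sketch of the SPECK128/128 block cipher as published by its
// designers. Shown only to illustrate how compact the cipher is; this is
// not code from the PromptLock sample.
package main

import (
	"fmt"
	"math/bits"
)

const rounds = 32 // SPECK128/128 uses 32 rounds

// round applies one SPECK round: rotate-add-xor on x, rotate-xor on y.
func round(x, y, k uint64) (uint64, uint64) {
	x = bits.RotateLeft64(x, -8) + y // ROR(x,8) + y  (mod 2^64)
	x ^= k
	y = bits.RotateLeft64(y, 3) ^ x // ROL(y,3) XOR x
	return x, y
}

// expandKey derives the 32 round keys from a 128-bit key (two 64-bit words).
func expandKey(k0, l0 uint64) [rounds]uint64 {
	var rk [rounds]uint64
	k, l := k0, l0
	for i := 0; i < rounds; i++ {
		rk[i] = k
		l, k = round(l, k, uint64(i)) // the key schedule reuses the round function
	}
	return rk
}

// encryptBlock encrypts one 128-bit block (x, y) under the expanded key.
func encryptBlock(x, y uint64, rk [rounds]uint64) (uint64, uint64) {
	for i := 0; i < rounds; i++ {
		x, y = round(x, y, rk[i])
	}
	return x, y
}

func main() {
	// Test vector from the SPECK specification:
	// key = 0f0e0d0c0b0a0908 0706050403020100
	// pt  = 6c61766975716520 7469206564616d20
	rk := expandKey(0x0706050403020100, 0x0f0e0d0c0b0a0908)
	x, y := encryptBlock(0x6c61766975716520, 0x7469206564616d20, rk)
	fmt.Printf("%016x %016x\n", x, y) // expected: a65d985179783265 7860fedf5c570d18
}
```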
ESET clarified that PromptLock has not been seen in real-world attacks: it was discovered on VirusTotal, suggesting a proof of concept or experimental tool rather than a weaponized threat. Supporting this assessment are details such as a Bitcoin address linked to Satoshi Nakamoto and the unfinished data-destruction module. After publication, a security researcher even claimed ownership of the sample, stating that the tool had been leaked.
Still, PromptLock is a stark demonstration of how LLMs can be integrated into malware workflows, enabling cross-platform execution, stealth, and real-time code generation. It follows the July discovery of LameHug, a piece of LLM-driven malware attributed to APT28 that queries Alibaba's Qwen model through the Hugging Face API to generate shell commands dynamically. Together, these developments highlight the growing fusion of generative AI with offensive cyber capabilities.
While PromptLock isn’t active in the wild, it offers a chilling preview of what’s possible when threat actors pair AI models with malware frameworks. As generative models become more accessible, defenders must prepare for a future where malicious code isn’t just written—it’s generated on demand. Organizations should expand their threat detection strategies to include dynamic script generation, API misuse, and unusual outbound connections to AI endpoints, reinforcing the need for behavioral detection over signature-based defense.