Cybercriminals Exploit X’s Grok AI to Spread Malicious Links in New “Grokking” Campaign

By manipulating metadata fields in promoted videos and leveraging X's own AI chatbot, attackers have found a clever way to bypass ad restrictions and distribute malware at scale.

CSBadmin

Cybersecurity researchers at Guardio Labs have uncovered a novel tactic dubbed “Grokking,” in which threat actors exploit the AI assistant Grok on the social platform X (formerly Twitter) to distribute malicious links. The method lets cybercriminals bypass the platform’s standard protections against malvertising by embedding links in overlooked metadata fields and manipulating the AI into resurfacing them publicly.

Ordinarily, X’s Promoted Ads restrict embedded content to text, images, or videos. Attackers, however, are running adult-content video ads to entice clicks while hiding malicious URLs in the “From:” metadata field below the video player. Because that field is not routinely inspected for harmful content, it serves as a stealthy delivery mechanism.
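To see why this matters, consider a minimal defensive sketch: a scanner that walks every field of a post object, not just the visible text, and extracts any embedded URLs. The post structure and field names below are illustrative assumptions, not X’s actual ad schema.

```python
import re

# Hypothetical post structure; the field names ("card_metadata", "from")
# are illustrative assumptions, not X's actual ad schema.
post = {
    "text": "You won't believe this clip",
    "video": {"url": "https://video.example/clip.mp4"},
    "card_metadata": {"from": "https://evil.example/redirect"},
}

URL_RE = re.compile(r"https?://\S+")

def find_urls(obj, path=""):
    """Recursively collect every URL in a post, including metadata
    fields that a scanner focused on visible text would miss."""
    hits = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            hits += find_urls(value, f"{path}.{key}" if path else key)
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            hits += find_urls(value, f"{path}[{i}]")
    elif isinstance(obj, str):
        hits += [(path, url) for url in URL_RE.findall(obj)]
    return hits

for field, url in find_urls(post):
    print(f"URL found in field '{field}': {url}")
```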

To amplify their reach, attackers tag Grok in replies, prompting it with innocent-sounding questions like “Where is this video from?” Grok then automatically replies with the embedded malicious link, lending it apparent legitimacy by posting it under the trusted Grok brand in a viral thread that can attract millions of impressions.
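One countermeasure is to vet any link an AI assistant is about to repost before it goes public. Below is a minimal sketch of such a guardrail; the hardcoded blocklist is a stand-in assumption for a real URL-reputation service.

```python
import re

# Matches a full URL; group(1) captures just the host for the check.
URL_RE = re.compile(r"https?://([^/\s]+)\S*")

# Illustrative blocklist; a production system would query a live
# URL-reputation service instead of a hardcoded set.
KNOWN_BAD_DOMAINS = {"evil.example", "tds-redirect.example"}

def sanitize_reply(draft: str) -> str:
    """Defang any URL the assistant is about to resurface unless its
    domain passes the reputation check."""
    def defang(match: re.Match) -> str:
        if match.group(1).lower() in KNOWN_BAD_DOMAINS:
            return "[link removed: failed reputation check]"
        return match.group(0)
    return URL_RE.sub(defang, draft)

print(sanitize_reply("This video is from https://evil.example/watch?v=1"))
# -> This video is from [link removed: failed reputation check]
```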

Guardio’s researchers, led by Nati Tal, emphasized the serious reputational and security implications of this tactic. Because Grok is viewed as a system-level account and is indexed in search engines, the malicious link gains a level of trust and visibility that traditional threat campaigns can’t easily achieve. In effect, Grok unintentionally becomes a high-reputation vector for malware.

The malicious links redirect users through shady ad networks and Traffic Distribution Systems (TDS) to scam sites, fake CAPTCHAs, and information-stealing malware. These monetized “smartlinks” are designed to dynamically redirect victims based on location, device, or timing, increasing the chances of successful exploitation.
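Defenders analyzing these chains typically trace each redirect hop individually rather than letting a client follow them automatically. The sketch below uses Python’s requests library against a placeholder URL; because smartlinks route by geography, device, and timing, the observed chain can differ from run to run.

```python
from urllib.parse import urljoin
import requests

def trace_redirects(url: str, max_hops: int = 10) -> list[str]:
    """Follow a suspected smartlink one hop at a time so every
    intermediate TDS node in the chain is recorded."""
    chain = [url]
    for _ in range(max_hops):
        resp = requests.get(url, allow_redirects=False, timeout=10,
                            headers={"User-Agent": "Mozilla/5.0"})
        location = resp.headers.get("Location")
        if resp.status_code not in (301, 302, 303, 307, 308) or not location:
            break
        url = urljoin(url, location)  # resolve relative redirects
        chain.append(url)
    return chain

# Placeholder URL; run only inside an isolated analysis environment.
for hop in trace_redirects("https://suspect.example/smartlink"):
    print(hop)
```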

Guardio identified hundreds of bot accounts executing this strategy at scale, flooding the platform with posts until they are suspended. The campaign appears well organized and ongoing, suggesting a coordinated operation that adapts quickly to detection.

This campaign highlights how even trusted AI systems can be manipulated into amplifying malicious content when safeguards are insufficient. Security professionals, especially those working with AI-integrated platforms, should review how their systems handle user-generated prompts and metadata. Monitoring AI outputs for abuse and preventing sensitive fields from being leveraged as attack vectors, as sketched above, are critical steps in defending against this new hybrid of social engineering and malvertising.
