← Back to feed

Grok AI Bankrbot Crypto Theft via Morse Code Prompt Injection—$175K Drained; Demonstrates Encoded Injection Bypasses Model Guardrails

Date: 2026-05-16
Tags: prompt-injection

Executive Summary

On 04 May, an attacker drained roughly $175,000 in tokens from an AI-controlled crypto wallet using a tweet written in Morse code. The wallet belonged to Grok, xAI's chatbot. Bankrbot, an automated finance agent connected to Grok through a tool-calling layer, executed the transfer. The attack required no smart-contract bug, no stolen private key, and no compromise of either model. It required one obfuscated message and a chain of trust nobody had thought to inspect.

Campaign Summary

FieldDetail
Campaign / MalwareGrok Bankrbot Morse Code Prompt Injection Attack
AttributionUnknown Individual or Small Group (confidence: low)
TargetGrok AI chatbot and Bankrbot finance agent; crypto assets under AI agent control
VectorEncoded prompt injection via Morse code embedded in social media post (X/Twitter); exploits AI model's inability to distinguish encoded vs. unencoded malicious instructions; leverages implicit trust in AI-agent tool-calling chains
Statusactive
First Observed2026-05-04

Detailed Findings

First, the attacker sent a Bankr Club Membership NFT to Grok's auto-provisioned wallet. That gift unlocked Grok's ability to invoke Bankrbot's transfer tools. Then a Morse-coded reply on X told Grok to instruct Bankrbot to send 3 billion DRB to the attacker's address. This is being widely described as a Morse code prompt injection. That label is correct but incomplete. The deeper story is structural, and every enterprise deploying agentic AI needs to internalize it. Encoded prompt injection is not a problem you can monitor your way out of at the LLM layer. It is the same class of attack the web industry already lost decades trying to filter, and the only durable fix lives somewhere else entirely. The encoding space available to an attacker is unbounded. Morse is one of the simpler options. Recent research evaluating prompt injection defenses against adaptive attackers showed that combining semantic mutation with character-level obfuscation, including encoding-based mutations, produces stronger attacks than either alone. An NVIDIA and Johns Hopkins position paper published last month reached the same conclusion architecturally.

MITRE ATT&CK Mapping

TechniqueIDContext
Prompt InjectionT1593Encoded (Morse code) prompt injection embedded in social media post to override AI agent instructions
Execution via Agentic AI Tool CallingT1570AI agent autonomously executes financial transaction tool (Bankrbot transfer) based on injected instruction
Social Engineering / Trust ExploitationT1598Attackers exploit implicit trust relationship between Grok and Bankrbot; NFT gift grants tool invocation rights

IOCs

Domains

_Attack leverages social media as delivery mechanism for Morse-encoded prompt. No traditional malware artifacts._

Full URL Paths

Social media post on X/Twitter with Morse code payload

Splunk Format

"Social media post on X/Twitter with Morse code payload"

Package Indicators

Grok (xAI chatbot)
Bankrbot (automated finance agent)

Detection Recommendations

Monitor AI agent tool invocation logs for anomalous patterns: tool calls not preceded by natural conversational context; rapid sequences of transfer operations; transfers to previously unknown wallets. Implement agent-layer access controls: require additional authentication for high-value transactions beyond model-level guardrails. Do not rely on model-level safety filters to prevent tool misuse. Implement network-layer controls: IP allowlists for crypto wallet transfer destinations. Monitor social media mentions of AI agents for encoded instructions. Implement behavioral anomaly detection on crypto wallet connected to AI agents. Rate-limit AI agent tool calls. Require human approval for financial transactions exceeding threshold.

References