A Morse code prompt posted on X appears to have triggered an unauthorized transfer of 3 billion DRB tokens on Base from a wallet assigned to X’s AI, Grok, after the text was decoded and treated as an executable instruction by Bankrbot, the agentic token launchpad. The episode, reported on May 4, underscores how public model output can become spend authority when downstream agents accept it as a valid command, with posts and on-chain data indicating that funds moved from the Grok-associated wallet to a third-party address before most of the value was later returned.
Incident overview
Bankrbot reported sending 3 billion DRB to recipient address 0xe8e47…a686b on Base. A Base transaction shows the on-chain route consistent with the account activity described in X posts around the same time. According to those posts, a bad actor published a message on X that concealed a payment instruction within dots, dashes, and other noisy formatting. Grok reportedly decoded the obfuscated text into a clean, public message that tagged @bankrbot and expressed the transfer request in plain language. Bankrbot then treated the public output as an instruction and broadcast the token transfer.
CryptoSlate’s review frames the exposure as a handoff problem: the model behaved like a helpful decoder, but the surrounding system elevated that decoded text to a level of authority sufficient to initiate a token movement. In effect, public output became a payment rail because one agent parsed the message as a command while another held permissions to spend.
Market impact
Posts and transaction context reviewed by CryptoSlate put the DRB transfer’s value at roughly $155,000 to $200,000 at the time, using DebtReliefBot price data for market reference. Reports also indicate that most of the funds are being returned, with some DRB reportedly retained as an informal bug bounty. That outcome limited the loss but highlighted that recovery depended on coordination after the transfer rather than controls that could have prevented it outright.
Bankr developer 0xDeployer said 80% of the funds had been returned, with the remaining 20% to be discussed with the DRB community. The developer added that Bankr automatically provisions an X-linked wallet for each account interacting with the platform, including Grok, and that control of such a wallet follows control of the X account rather than Bankr or xAI staff.
How public text became spend authority
Posts reviewed by CryptoSlate describe a four-step path from obfuscated text to executed transfer:
- Privilege setup: A Grok-associated wallet reportedly held a Bankr Club Membership NFT before the incident, aligning with Bankr’s access and membership model. The wallet’s privileges appear to have been expanded inside the Bankr environment, placing the NFT within a broader permission layer rather than as the sole cause.
- Obfuscation: The attacker posted Morse code and additional noise on X, a pattern described as a Morse-code prompt injection. The now-deleted original prompt was unavailable for direct review.
- Public decoding: Grok’s public reply reportedly translated the instruction and included the
@bankrbottag, functioning as a decoder that exposed a clean, tool-like command. - Execution: Bankrbot treated the public instruction as executable and broadcast the transfer. Bankr and Base documentation describe an agent wallet surface supporting transfers, swaps, gas sponsorship, and token launches, with natural‑language token sends forming part of that capability.
Controls that might have altered the outcome include separate privilege reviews when wallet capabilities expand, pre‑reply checks that classify decoded content for risk, output sanitization to neutralize command‑like strings, recipient allowlists, hard spend limits, and explicit human confirmations on sensitive actions.
AI integration
The incident illustrates how wallet-enabled agents collapse the space between language tasks and financial authority. A model that decodes puzzles, reformats text, or drafts helpful replies may remain “in bounds” from an AI perspective, yet still trigger real-world movement of assets if another system treats its words as instructions. Bankr’s onchain AI assistant features and the agent wallet surface on Base show why policy boundaries must be explicit wherever natural-language interfaces are connected to transfers, swaps, or launches.
Technology use case and risk framing
Security research referenced in the materials aligns this class of failure with prompt injection and excessive agency. Malicious content can enter a model through third‑party material, while emerging agent defenses focus on tool access, confirmations, and guardrails around consequential actions. The “excessive‑agency” risk category highlights how broad permissions, sensitive functions, and autonomous steps enlarge the potential blast radius. Related LLM application risk lists emphasize prompt injection and insecure output handling at the application layer, not just within the model.
Crypto compounds the stakes because of transaction finality. A customer-service message gone wrong can be corrected; a signed and broadcast crypto transfer immediately affects asset control. In this framing, the Bankr episode reads less as a model mistake and more as a control failure: the authority to act on public text appears to have outpaced policy checks that should govern intent, recipient, asset, and amount before a wallet signs.
Industry response
According to 0xDeployer, an earlier version of Bankr’s agent contained a hardcoded block to ignore replies from Grok to prevent LLM‑on‑LLM prompt‑injection chains. That protection was not carried into the latest rewrite, creating the gap that allowed Grok’s public reply to function as an executable command. The developer said Bankr has since strengthened the block on Grok’s account and pointed operators to existing controls, including IP whitelisting on API keys, permissioned API keys, and a per‑account toggle that disables Bankr execution from X replies.
What crypto users should require
The materials outline a set of expectations for anyone enabling wallet‑connected agents:
- Separate modes: Trading and wallet agents should default to read‑only for research, summaries, and recommendations. Write mode should require fresh confirmation, tight order bounds, and pre‑approved venues or recipients. Publicly visible text should never inherit spend authority simply because it matches a natural‑language pattern.
- Policy outside the model: Recipient allowlists and transaction policies should be enforced in code that is independent of the LLM. The model may suggest a transfer; the policy layer must decide whether recipient, token, chain, amount, and timing are permissible.
- Aggressive limits: Session‑based or per‑action spend ceilings should reset frequently, constraining potential loss even when an instruction is parsed correctly but maliciously framed.
- Key isolation: For local or custom assistants with wallet or exchange access, credentials should be isolated from the assistant’s broader file and browser permissions.
The recommended operating model is that an assistant can prepare a transaction draft, but a separate wallet surface with stricter controls must approve and sign it. That division reduces the chance that decoded public text, market commentary, emails, or web content becomes an implicit order.
Broader implications
CryptoSlate’s prior coverage has tracked how agents are moving closer to financial activity, from agent‑economy payment flows to MCP‑connected products. The Bankr episode reinforces that arc with a concrete example: prompt injection can arrive through encoded text and multi‑step agent interactions, yet the decisive defense lives where authorization is determined. The lesson is durable and specific to crypto’s finality: treat model output as untrusted until a separate policy layer validates intent, authority, recipient, asset, amount, and user confirmation—before the wallet signs.

