185: Dead Drops in the Chatbox
Spycraft, smuggling, sabotage—encoded not in ciphertext, but in small talk.
You’re in a group chat. The conversation is casual—maybe too casual. A few emojis, a link to a product, a joke about someone’s lunch. But buried in that stream of banter is a payload: coordinates, commands, contraband. You’d never know it unless you had the key—and the right large language model.
Most people think encryption looks like nonsense: a string of numbers, a wall of gibberish, maybe a lock icon next to a message. Something that screams this is secret.
But what if the most powerful encryption looked like a joke about lunch? Or a review of a toaster?
That’s the premise of a new research paper, An LLM Framework For Cryptography Over Chat Channels (arXiv:2504.08871), which introduces a technique for embedding encrypted content inside text generated by large language models. No weird symbols. No suspicious metadata. Just ordinary, casual human-like language—until it’s decrypted.
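The mechanics are easier to grasp with a toy. What follows is a deliberately minimal sketch of the general idea, not the paper’s actual construction: a word-level candidate table stands in for a real LLM’s ranked next-token predictions, and a hash chain stands in for a proper stream cipher. Every name in it (BIGRAMS, keystream_bits, encode, decode) is a hypothetical illustration. The point is the shape of the trick: encrypted bits steer which of two equally plausible continuations the “model” emits, so the secret rides on word choice rather than on anything that looks encrypted.

```python
import hashlib

# Toy stand-in for a shared language model: for each word, two ranked
# candidate continuations. Both parties hold an identical copy. (A real
# system would use an LLM's top-k next-token distribution instead.)
BIGRAMS = {
    "the":     ["meeting", "coffee"],
    "meeting": ["ran", "was"],
    "coffee":  ["ran", "was"],
    "ran":     ["long", "late"],
    "was":     ["long", "late"],
    "long":    ["today", "again"],
    "late":    ["today", "again"],
}

def keystream_bits(key: bytes, n: int) -> list:
    """Expand a shared key into n pseudorandom bits. A hash chain is
    used here purely for illustration; a real design would use a proper
    stream cipher such as ChaCha20."""
    bits, counter = [], 0
    while len(bits) < n:
        block = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        for byte in block:
            bits.extend((byte >> i) & 1 for i in range(8))
        counter += 1
    return bits[:n]

def encode(plain_bits, key, seed="the"):
    """Encrypt the bits, then hide each ciphertext bit as a choice
    between the model's two candidate continuations."""
    ks = keystream_bits(key, len(plain_bits))
    words, w = [seed], seed
    for p, k in zip(plain_bits, ks):
        w = BIGRAMS[w][p ^ k]  # the ciphertext bit picks candidate 0 or 1
        words.append(w)
    return " ".join(words)

def decode(text, key):
    """Re-derive the candidate lists, read back which candidate was
    chosen at each step, then strip the keystream."""
    words = text.split()
    cipher = [BIGRAMS[prev].index(cur) for prev, cur in zip(words, words[1:])]
    return [c ^ k for c, k in zip(cipher, keystream_bits(key, len(cipher)))]

secret = [1, 0, 1]                        # three secret bits
cover = encode(secret, b"shared-secret")  # reads like idle chatter
assert decode(cover, b"shared-secret") == secret
print(cover)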
Encryption, once the province of math nerds and intelligence agencies, is now becoming conversational. Seamless. Invisible.
And for those who control the language, that means controlling the limits of what surveillance can see.
Meet Denise. She’s a mid-level procurement officer for a shipping company. She also sells embargoed equipment to buyers in countries under sanction.
Denise doesn’t use burner phones or the dark web. She posts in Slack.
Her clients don’t message her through Tor—they join a group chat about "logistics best practices." The conversation flows naturally, with links to vendor websites and snarky emojis. But every few lines, Denise drops a phrase that’s been generated by an LLM. A phrase that only her buyers can decrypt using a shared key and an identical model.
No flags. No raised eyebrows. Just banal corporate chatter hiding illegal transactions in plain sight.
This is the real power of the LLM encryption framework. It doesn’t just offer secure communication. It offers plausible deniability at scale. Every message is mundane until it isn’t. Every chat is clean—until it’s not.
And it’s not just smuggling. This system could be used to run betting rings inside gaming chats, auction off stolen data via code comments on GitHub, or coordinate cyberattacks from inside a Minecraft server.
If language is a vehicle for thought, this research turns language into a shipping container—one that border guards don’t know to check.
Dead Drops in the DMs
Intelligence agencies are already imagining the possibilities—and risks.
The framework outlined in the paper doesn’t require a centralized server or shared platform. Each party can run their own local LLM, open-source or privately fine-tuned. As long as both sides hold the same key and an identical model (same weights, same decoding settings), each can read the other’s messages. And if they don’t? The conversation looks like normal banter. A foreign desk officer’s movie review might contain a debriefing. A diplomatic envoy’s tweetstorm could include coordinates for an extraction.
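Sticking with the toy sketch from earlier (the same hypothetical functions), the decentralization is easy to see: decryption is just regeneration. A wrong key turns the recovered bits into noise, and a mismatched model breaks decoding outright.

```python
# Reusing the hypothetical encode/decode from the sketch above.
cover = encode([1, 0, 1], b"shared-secret")

print(decode(cover, b"shared-secret"))  # right key: [1, 0, 1]
print(decode(cover, b"wrong-key"))      # wrong key: almost certainly other bits

# A mismatched "model" (a candidate table with different contents or
# ranking) fails harder: decoding raises a KeyError or ValueError the
# moment a word isn't where the decoder expects it.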
It’s a new kind of tradecraft—one that merges the high latency of state secrecy with the low fidelity of memes and microblogs. The dead drop comes back, but instead of a hollow rock in a park, it’s a product review on Amazon.
More dangerously, this kind of encryption blurs the line between public and private, fiction and truth. The message is no longer in the medium. It’s in the model. Meaning becomes a shared hallucination.
And that’s a game anyone can play.
There’s another side to this. One that isn’t about crime or espionage, but about capacity.
For most people, encryption has always been alienating. PGP keys, Signal handshakes, zero-knowledge proofs: these are concepts for the tech elite. But this new approach makes encryption linguistic. It’s just words.
Anyone with access to a shared model and a basic tutorial could build a messaging system. A co-op. A whistleblowing hotline. A local news wire. The barrier to entry drops from math to metaphor.
This also raises the stakes of AI literacy. If models can encode and decode meaning, the power to understand, or even to notice, such communication becomes political. Those who can read the layers of language gain autonomy. Those who can’t are flattened by interpretation engines built to deceive.
Suddenly, community tech workshops aren’t just about skills. They’re about who controls meaning in a world where language itself is programmable.
The future isn’t just encrypted. It’s fluent.
When language becomes a lockbox, and models are the key, the line between speech and action collapses.
What was once said for effect can now be said for transmission. Meaning becomes portable. Identity becomes obfuscated. And the spectacle of transparency is exposed as a fantasy. Everyone is talking—but only some are hearing what’s really being said.
The game has changed.
And the most subversive act you can perform is to say something that sounds ordinary—because it isn't.
