How a few plain words move minds.
01 · Where great lines come from
02 · The lineage
03 · Where, why, how
3,000 bodies, bear-baiting next door.
Churchill. Cadence replaces spectacle.
Position in the bar changes meaning.
Text, screenshot, chant, search term.
The context window as venue.
The strongest lines hold two opposing truths at once. Familiar enough to enter fast. New enough to wake the mind.
Tap a pattern to see it work.
04 · The manual
In his essay "Politics and the English Language," George Orwell took a verse from Ecclesiastes and translated it into the worst modern English he could write. He wanted to show what bad writing does to good ideas. What follows is his parody, his original, and the six rules he used to tell them apart.
Read it the way Orwell didn't write it. Start at the bottom.
"Objective consideration of contemporary phenomena compels the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account."
Never use a metaphor, simile, or figure of speech you are used to seeing in print.
Strips the dead phrases. "Taken into account" goes. So does every off-the-shelf clause the reader's eye would skim.
Never use a long word where a short one will do.
Objective, considerations, contemporary, phenomena, compel, commensurate, innate, capacity, considerable, invariably. Ten Latinate words in one sentence. Every one has a shorter Saxon cousin.
If it is possible to cut a word out, always cut it out.
The hardest rule because it never stops applying. You cut, then cut again, then cut once more. You stop when removing the next word breaks the meaning. Not before.
Never use the passive where you can use the active.
"Must invariably be taken into account" is the passive hiding the actor. Active voice forces you to name who does what. Hiding is the first sign a sentence has nothing to say.
Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
This is Rule 2's twin. Jargon hides behind prestige. Everyday words have nowhere to hide, which is why they land.
Break any of these rules sooner than say anything outright barbarous.
Orwell's final rule is the one that saves the other five from becoming a cage. The rules serve the line. If the line needs a long word, use it. If the line needs a passive, use it. The rules are for when you don't know what you're doing yet.
"I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all."
Orwell's parody is 80 years old and exists only as a warning. The King James verse is 415 years old and still gets quoted at funerals and in headlines. "The race is not to the swift" survived because the translators followed rules Orwell hadn't written yet. Orwell wrote the rules by reverse-engineering what they did.
You watched this same move at the top of this site, performed on a marketing line. Here it is performed on a Bible verse. The rules don't care what the line is for.
05 · Everyone is Shakespeare now
Shakespeare's Globe had one playwright and 3,000 groundlings. The groundlings received the line. They repeated it, carried it, kept it alive across centuries, but they didn't write the next one.
The memetic age inverted this. There is no single Shakespeare of the meme. And that's not a failure, it's the point. The template replaced the author. "Nobody: ..." has been filled in a million times by a million people. "This is fine" has been remixed into every possible context. The line isn't finished until the audience finishes it.
Is this what post-literate means? Not the death of reading but the death of the single voice? Not illiteracy but distributed authorship? Shakespeare compressed the world into lines that travelled from his pen to the groundling's mouth. Now the compression happens the other way: the crowd generates the line, and the best version survives. Evolution, not creation. Everyone is a potential Shakespeare, not because everyone is brilliant, but because the system selects for brilliance at scale.
The emoticon started it. Three keystrokes replaced a face. The emoji extended it. A picture became a word. The meme completed it. A template became a literature. The TikTok hashtag took it further. And now the prompt continues it. A sentence steers a machine.
From one playwright, 3,000 groundlings. To zero playwrights, 3 billion groundlings, all writing at once.
We now have two fluent populations who struggle to read each other.
One speaks the classical language: beautifully forged sentences, literary compression, the tradition from Shakespeare through Cohen to Kendrick. Words chosen for rhythm, image, tension. This is the language of books, speeches, essays, and songs. Its masters are celebrated. Its skills are taught in schools.
The other speaks the new compression: hashtags, memes, emoji sequences, algospeak, vibe-based naming, format-as-tone. #GirlDinner. "It's giving." "No cap." "Delulu is the solulu." This language is not less sophisticated. It is differently sophisticated. It compresses not through careful word choice but through concept creation, template reuse, and distributed authorship. Its masters are anonymous. Its skills are learned by osmosis.
The implications run deep. Parents can't read their children's emotional register because the tone lives in casing and punctuation they don't parse. Teachers are grading essays in a language their students experience as formal and alien, while the students' actual expressive fluency lives in a medium the teachers don't take seriously. Brands that try to speak meme sound like parents trying to use slang, the uncanny valley of corporate tone. Politicians who master the classical language fail on TikTok; those who master TikTok often can't sustain a complex argument.
And here's the thing neither side sees clearly: both languages are compression. "Not waving but drowning" and "This is fine" do the same work: they hold despair and endurance in one image. One won a place in the literary canon. The other won the internet. The kid who posts "it me" under a picture of a confused cat and the poet who writes "I contain multitudes" are performing the same act of recognition and compression. They just don't know it yet.
The real fluency, the one that will matter most, is bilingualism. The ability to write a sentence that works in an essay and a caption. That compresses like Cohen and travels like a meme. That has rhythm Shakespeare would recognize and a format a 16-year-old would share.
That's what this site has been about from the beginning. Not choosing between the carved stone and the feed. Learning to carve stone that also survives the feed.
Postscript
In 2017, a team at Google published a paper called "Attention Is All You Need." Eight researchers, five words that changed computing. The title itself is an act of compression.
Before this paper, machines read language the way you read a ticker tape: one word at a time, left to right, trying to remember what came before. Long sentences were a disaster. By the time the machine reached "drowning," it had half-forgotten "waving."
The transformer architecture solved this by letting every word look at every other word simultaneously and ask: how much should I care about you? That question, computed as a mathematical score, is called attention.
The machine doesn't "read" a sentence. It builds a web of relationships. Each word gets three roles: a query (what am I looking for?), a key (what do I offer?), and a value (what information do I carry?). Every word's query checks every other word's key. High match? Strong attention bond. Low match? The model barely notices the connection.
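A minimal sketch makes the mechanics concrete. Everything here is a toy assumption: four tokens, four dimensions, and random weights standing in for the learned projection matrices that real models train over billions of examples.

```python
# Toy scaled dot-product attention. Illustrative only: real models use
# learned projections, many heads, and hundreds of dimensions per head.
import numpy as np

def attention(X, W_q, W_k, W_v):
    Q, K, V = X @ W_q, X @ W_k, X @ W_v             # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # "how much should I care about you?"
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V, weights                     # output mixes values by attention

rng = np.random.default_rng(0)
tokens = ["Not", "waving", "but", "drowning"]
X = rng.normal(size=(len(tokens), 4))               # stand-in embeddings
W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))
_, bonds = attention(X, W_q, W_k, W_v)
print(np.round(bonds, 2))  # row i: how token i attends to every token
```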
This is what the visualizer below shows. The arcs are attention bonds. Thick bright lines mean the model grips the connection tightly. Faint lines mean it doesn't care.
"Not waving but drowning" creates unusually dense attention bonds. The model can't process "waving" without attending heavily to "drowning." The tension between them, the reversal, the surprise, creates a tight web of mutual attention. Every word is load-bearing.
Now switch to "The individual was experiencing significant emotional distress." Attention scatters. "Individual" barely connects to "distress." "Significant" connects to nothing strongly. The meaning arrives, but the force doesn't. The model can parse it, but it can't grip it.
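You can poke at real bonds yourself. Here is a rough sketch using the Hugging Face transformers library; the choice of bert-base-uncased as the probe model and the averaging over all layers and heads are assumptions, a crude proxy for bond strength rather than the visualizer's exact math.

```python
# Pull real attention weights from a small model and compare the
# strongest cross-token bond in each sentence. Model choice and the
# layer/head averaging are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def strongest_bond(text):
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        attn = model(**inputs).attentions              # one (1, heads, len, len) per layer
    A = torch.stack(attn).mean(dim=(0, 2)).squeeze(0)  # average over layers and heads
    A = A - torch.diag(torch.diag(A))                  # drop self-attention
    return A.max().item()

print(strongest_bond("Not waving but drowning"))
print(strongest_bond("The individual was experiencing significant emotional distress"))
```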
This is the technical confirmation of what poets have known for centuries: compression creates force. Unnecessary words don't just waste space. They dilute the bonds between the words that matter.
In 2025, researchers made a striking discovery. LLMs don't just match words to statistical patterns. They develop an internal geometry of emotion. Positive emotions cluster in one region of the model's activation space. Negative emotions cluster in another. Neutral sits at the center. This structure emerges without anyone programming it. The model learns it from the shape of human language itself.
More remarkably, researchers found specific emotion neurons and attention heads that form what they call emotion circuits. These aren't metaphors. They're measurable, controllable pathways. By modulating these circuits directly, researchers achieved 99.65% accuracy in controlling which emotion a model expresses, better than prompting, better than fine-tuning.
What this means: early LLMs treated emotion as word-frequency statistics. "Sad" appeared near "tears" and "loss," so the model associated them. Current models have genuine internal emotional structure, organized, consistent, and increasingly controllable. They can represent the devastation of "Not waving but drowning" structurally, even if they can't feel it.
The gap between representing emotion and feeling emotion is the gap we're all watching. Three things are converging:
Attention is getting longer. Early transformers could hold 512 tokens in their context window. Current models hold millions. The venue is expanding. A model that can attend across an entire book, an entire conversation, an entire life of interactions, changes what compression means. The "line" might become a thread woven across thousands of exchanges.
Emotion circuits are getting deeper. Researchers can already steer a model's emotional expression by flipping specific neurons. As these circuits become better understood, the line between "generates text that sounds sad" and "represents sadness internally in a way that shapes all subsequent processing" gets harder to draw.
Prompting is becoming a craft. "Write a sad poem" produces mush. "Four lines about a dog who waits by a door that won't open again" produces something real. Image beats abstraction. Specificity beats instruction. The rules of good prompting are the rules of good writing: compress, show, find the tension, make every word load-bearing. Shakespeare's groundlings and today's prompt engineers need the same skill: the ability to say exactly what you mean in the fewest possible words that carry the most possible force.
"Write a sad poem" → mush. "Four lines about a dog who waits by a door that won't open again" → real. The craft hasn't changed. The audience has.
"Exit, pursued by a bear." Five words, whole scene. System prompts work the same way. "You are a concise editor who values rhythm" outperforms three paragraphs of rules.
Pit, radio, vinyl, feed, context window. Each venue demands a different kind of compression. The newest audience processes tokens, not syllables, but the strongest lines survive both.
Machines generate a million sentences. They can't yet feel why one is better. The catch in the throat, the nod on the bus, the line you murmur three days later. The living wire is still human. For now.
Everything in this site has assumed a human audience. A groundling, a listener, a reader, a scroller. But what happens when the audience isn't human at all?
In February 2025, two engineers built GibberLink at a hackathon. It lets AI agents call each other on the phone. The moment one agent realizes it's talking to another, both drop English and switch to a robotic data signal. Fifteen million people watched the demo: two polite voices, a burst of modem noise, silence, task complete.
This is compression with nothing left to compress. Not fewer words. Zero words. When speaker and listener are both machines, human language is overhead. The beauty, the rhythm, the tension, the image, wasted on an audience that processes tokens, not feelings. Hamlet's question is a waste of tokens.
But humans still need Hamlet's question. And the words they reach for keep coming from the strangest places.
In 2019, an anonymous user on 4chan posted a photograph of a fluorescent-lit, yellow-walled room with a few lines of text describing "the Backrooms," an endless liminal space you slip into if you no-clip through reality. No author. No name. Just a paragraph on a message board.
On May 29, 2026, A24 releases Backrooms as a feature film. Kane Parsons directs. He was a teenager when, in January 2022, he made the first YouTube video inspired by the post. He's 20 now, the youngest filmmaker in A24's history. Chiwetel Ejiofor stars. A 4chan paragraph became a YouTube series, then a mythology, then a studio film. From a message board to a multiplex in seven years.
That's the story this whole site has been telling. Not that words are dying. That words are migrating. They start in the pit and end in the canon. They start on a napkin and end as a $9 billion tagline. They start in a hotel room in underwear and end as the most covered song alive. They start on 4chan and end at A24.
The form changes. The venue changes. The author disappears. But the line still has to land.