How the internet changes English
Techno-linguistics
The internet, technology and AI all change the way we live and work. But these technologies also change the way we speak.
Sometimes in obvious ways: words like app and browser either didn’t exist or had different meanings 50 years ago.
Other times it’s less obvious. For some reason, since the advent of the internet, we use the words “may” and “shall” a lot less.
Here are three more examples.
In-group markers
“AIBU? FIL is refusing to go NC with his CF brother. Now DH says I’m OTT. LTB?”
Mumsnet is older than Facebook, Twitter, Reddit and Wikipedia. It is a relic from a simpler era of the internet: when blogs and forums were the closest thing that existed to social media.
Its users, nearly all of whom are mums (plus a handful of dads), are a tight-knit and often vulgar community (and surprisingly right-wing to boot). The WI it ain’t.
Over 25 years, its users have developed their own language, distinct from anywhere else on the internet. Your DD is your daughter; your FIL and MIL are your in-laws. Posts ask simply ‘AIBU?’, shorthand for ‘Am I Being Unreasonable?’.
Reddit users have their own version of Am I Being Unreasonable: AITA: Am I The Asshole? To which responses can be: YTA (you are) or NTA (not).
But why do these norms evolve on communities like Mumsnet and Reddit? Despite what the users might claim, this isn’t about using language efficiently: it’s about using language as an exclusionary device. Acronyms are the fastest way to filter out the newbies. The use of this sort of code is a form of tribalism: signalling to one another that they are part of the in-crowd and scaring off the non-members.
We do the exact same thing when we speak, using our accents. If you had a thick Essex accent and spoke to a Glaswegian you liked, you might both (subconsciously) shift your accents to a more neutral position. But if you didn’t like each other, you would heighten your Essex accent in an attempt to make yourself indecipherable, signalling your group membership.
Mumsnet users (and members of every internet community) are doing the exact same thing with slang, acronyms and in-group language. It’s a signal to passers-by and tourists: back off.
Codewords can also be used as a secret signal to other members of your cause. Academically, these have been called “multivocal appeals” but more of us are familiar with the phrase “dog-whistles”.
A user might post, for example, about the ‘Boriswave’ (the influx of migrants who came to Britain under Boris Johnson) or call their political opponents ‘NPCs’ (non-player characters: another word for normies, regular people who follow the crowd).
These phrases broadly make sense to all of us, but they are also a signifier that the speaker is part of the extremely online alt-right. In that sense he (and it usually is a he) makes himself known to other members of the in-group, while continuing to make sense and appeal to the out-group.
It’s the inverse of the technique used by the mums on Mumsnet and the nerds on Reddit, deployed to different effect: in-group language to signal, rather than exclude.
Algo-speak
In 1996, AOL blocked people from Scunthorpe from creating an email account.
Why? AOL had implemented a profanity filter to stop misuse, and it was blocking a specific string that appears in the word Scunthorpe (if you’re still missing it, check between the S and the horpe).
This is a difficult problem to solve. At one of my first jobs in tech, I was tasked with creating a filter that would block abusive language without restricting anyone’s natural speech (i.e. allow “fuck that”, but block “fuck you”). It ended up relying on a very long document of very rude phrases - and yet creative users still found ways around it.
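The Scunthorpe problem is easy to sketch. Below is a minimal, hypothetical Python version of the two approaches, with a toy blocklist standing in for that very long document: naive substring matching reproduces AOL’s false positive, while word-boundary matching avoids it (though it in turn misses anything written without spaces, which is part of why the creative users kept winning).

```python
import re

# Toy blocklist - a stand-in, not the real thing.
BLOCKED = {"cunt", "fuck you"}

def naive_filter(text: str) -> bool:
    """Substring matching: flags any text containing a blocked string anywhere."""
    lower = text.lower()
    return any(bad in lower for bad in BLOCKED)

def boundary_filter(text: str) -> bool:
    """Word-boundary matching: only flags blocked strings as standalone words."""
    lower = text.lower()
    return any(re.search(r"\b" + re.escape(bad) + r"\b", lower) for bad in BLOCKED)

# The Scunthorpe problem: substring matching produces a false positive.
print(naive_filter("I live in Scunthorpe"))     # True - blocked
print(boundary_filter("I live in Scunthorpe"))  # False - allowed
print(boundary_filter("fuck you"))              # True - still blocked
```

The trade-off is the whole game: every loosening that lets Scunthorpe through also opens a gap for someone determined to be abusive.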
In the age of AI, filtering and censorship are better - models can look at context, not just words. But users are still smart, and they still find ways around them.
TikTok content creators long suspected that their more controversial videos were being buried by the algorithm: hidden from users, restricting their reach.
And so these TikTokkers found creative ways around the filter. Rather than killing someone, you “unalive” them. You might come out as a “le$bian”, and talk about your “seggs” life. You might comment “🍉”, to signal your support for Palestine.
These terms have become known as algo-speak, a way of talking and writing that is supposedly favourable to the algorithm.
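Part of why algo-speak keeps working is visible in how a filter might fight back. Here’s a sketch assuming a simple character-substitution normaliser (the mapping is my invention, not anything TikTok actually runs): it can undo “le$bian”, but “seggs” is a respelling rather than a substitution, so it sails straight through.

```python
# Toy normaliser: undoes common single-character substitutions before filtering.
# The mapping is illustrative - real moderation pipelines are far more elaborate.
SUBSTITUTIONS = str.maketrans({"$": "s", "3": "e", "1": "i", "0": "o", "@": "a"})

def normalise(text: str) -> str:
    return text.lower().translate(SUBSTITUTIONS)

print(normalise("le$bian"))  # "lesbian" - caught once normalised
print(normalise("seggs"))    # "seggs" - a respelling, not a substitution, so it survives
```

Every fix of this kind handles yesterday’s evasions; the users simply invent tomorrow’s.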
Of course, people who have built their careers on algorithms end up treating those algorithms like mysterious deities, to whom they must throw certain ritualistic sacrifices and avoid heresy or blasphemy (for a while, TikTokkers would refuse to say ‘YouTube’ in their videos, for fear that the algorithm could hear).
Unfortunately for them, if the algorithm is this smart (which it very well could be), it is not being fooled by “seggs”.
But something strange has happened over time. These algo-speak words, which started as practicalities created to avoid Big Brother, are becoming euphemisms that Gen Z prefers. Just as it is more comfortable for us to talk about someone “passing away” rather than the blunt “died”, young internet users are finding that someone being “unalived” is just a nicer way of saying something pretty unpleasant.
(By the way, I could do a whole post on the evolution of euphemisms. “Water closet” started as a euphemism, and was then replaced by “toilet”, from the French word for a cloth. When toilet started to sound rude, we created others like “loo” and “restroom”. Steven Pinker dubbed this the euphemism treadmill.)
Dead Internet and the AI effect
For just about all of human history, kids have tried to be different from their parents. That’s true of fashion, music, humour - and it’s true of language.
That’s the reason slang evolves over time: from the trendy 1920s kids who called things the ‘bee’s knees’ to the 2020s kids who might call the same thing the GOAT. In recent years, the skull emoji (💀) has replaced the laughing emoji (😂) as the signifier that something is funny.
It’s another example of language as an in-group marker - and, in this case, an identity former.
But the advent of AI will create a new paradigm. All of us will try, deliberately, to differentiate ourselves from Large Language Models (LLMs).
ChatGPT, Gemini, Claude and the rest all have subtle and not-so-subtle ‘tells’ that give them a distinct sound and style.
“The author is not a human being, but a ghost — a whisper woven from the algorithm, a construct of code. A.I.-generated writing, once the distant echo of science-fiction daydreams, is now all around us — neatly packaged, fleetingly appreciated and endlessly recycled. It’s not just a flood — it’s a groundswell.”
(Example stolen from the New York Times)
Structures like “it’s not a… but a…” are dead giveaways that a piece of writing was created by an LLM. The words quietly or whisper or silently are common tells. And of course, the famous em-dash.
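For illustration only, here is how crude a pattern-based ‘tell’ counter can be - a toy Python sketch using the three tells just mentioned (the patterns are my own, and real AI detection is far less reliable than this implies):

```python
import re

# Toy heuristics for the tells named above - illustrative only, not a real detector.
TELLS = [
    r"it'?s not (?:just )?a \w+ — it'?s a \w+",  # "it's not just a X — it's a Y"
    r"\b(?:quietly|whisper|silently)\b",          # the whispery vocabulary
    r"—",                                         # the famous em-dash
]

def count_tells(text: str) -> int:
    """Count how many of the toy 'tell' patterns appear in the text."""
    return sum(len(re.findall(p, text, re.IGNORECASE)) for p in TELLS)

print(count_tells("It's not just a flood — it's a groundswell."))  # 2
print(count_tells("Plain writing."))                               # 0
```

The obvious weakness is the point: plenty of careful human writers use em-dashes and “it’s not X, it’s Y” constructions, which is exactly why they’re now abandoning them.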
And just like teenagers wanting to sound less like their parents, human writers and speakers want to sound less like our robot overlords. Many writers now omit em-dashes from their writing for fear that they’ll be accused of using AI (fortunately for me, I’ve always incorrectly used the shorter en-dash as my parenthetical marker - it’s always been grammatically wrong, but now you at least know what’s written by me).
The Dead Internet Theory is a conspiracy theory claiming that, since around 2016, the vast majority of internet activity has been bots and automatically generated content. The most inane Facebook posts seem to get tens of thousands of likes. Who is liking this crap? The answer, according to the Dead Internet Theory: nobody. It’s created by bots, amplified by bots, seen by bots. And you are one of a tiny number of humans actually engaging with it.
So language will almost certainly evolve to set us apart from the AI zombies. This is part of the reason trends have shorter shelf lives: as soon as LLMs (and brand accounts) start picking up on words, they are basically useless to users.
The Turing Test is a famous thought experiment that asks whether a machine can convince a human that it, too, is human. When it was first imagined, we assumed that a semblance of coherence implied consciousness. Now we’re preparing for the reverse Turing Test: in a world of bots that sound conscious, can a human convince another human that they’re really human?
With or without the internet - we’ve always used language to help form our identity, mark ourselves as members of a group, avoid cliches and lean on euphemisms. For as long as there is language, that will never change.




