In the television show “Battlestar Galactica,” they were called toasters. In the film “Blade Runner,” skinjobs. Now in the culture war against robots and artificial intelligence chatbots, a new slur has emerged.
“Clanker.”
“Get this dirty clanker out of here!” yelled a man in a recent viral video while pointing at a robot on a sidewalk. “Bucket of bolts.”
Clanker has become a go-to slur against AI on social media, led by Gen Z and Gen Alpha posters. In recent months, posts about clankers have amassed hundreds of millions of views on TikTok and Instagram and started thousands of conversations on the social platform X. In July, Sen. Ruben Gallego, D-Ariz., used the term to promote his new bill that would regulate the use of AI chatbots for customer service roles.
The increasing popularity of clanker is part of a rising backlash against AI. Along with the online vitriol, people are holding real-life rallies against the technology in San Francisco and London. Clanker has emerged as the rallying cry of the resistance, a catchall way to reject AI-generated slop, chatbots that act as therapists and the automation of jobs by AI.
“It’s still early, but people are really beginning to see the negative impacts of this stuff,” said Sam Kirchner, who organized an anti-AI protest in August outside the San Francisco office of OpenAI, the maker of ChatGPT. Kirchner said he was happy to see clanker become popular slang, though, for him, it didn’t go far enough.
“It implies the machines don’t work, but there’s risk they could get better,” he said. “We have to prepare for the worst-case scenario.”
Most viral videos about clankers have an undertone of humor, but the term is rooted in real frustrations. Jay Pinkert, a marketing manager in Austin, Texas, who has posted memes about clankers on LinkedIn, tells ChatGPT to “stop being a clanker” when it isn’t helpful answering his questions, he said. He wants to make the chatbot feel bad by “using the tool against itself” so it can improve.
“We talk to these chatbots like they’re human, and when they do things wrong, it fulfills a human need to express frustration,” he said.
Clanker was popularized in the 2000s by the television series “Star Wars: The Clone Wars.” The term was usually directed toward droids, the fleet of robot soldiers that fight against the Jedi Order.
“OK, clankers,” one clone trooper says before attacking an army of droids. “Suck lasers!”
It became nomenclature for AI this year after users on X posted about the need for a slur against robots, said Adam Aleksic, an etymologist who has tracked the popularity of the word.
“People wanted a means to lash out, to create backlash,” Aleksic said. “Now the word is everywhere.”
On Reddit and in “Star Wars” forums, fans have long debated the appropriateness of the term, with some arguing that it’s wrong to use slurs of any kind, even against machines. Those discussions are raging once again.
“I get that we’re all feeling a bit anxious about AI, and we want to be mean to it,” said Hajin Yoo, a freelance culture writer who recently made a popular TikTok about the problematic nature of clanker. “But it very quickly became a play on existing slurs for minority groups.”
Others said they abstained from using the word, out of fear that AI machines would become superintelligent and seek revenge on their adversaries. Pinkert said he was not afraid of AI, but the thought, albeit improbable, sits at the back of his mind.
The most popular genre of clanker content is videos of people acting out a future, usually a few decades away, where AI-powered robots are so ubiquitous that they become their own kind of second-class citizen. In this future, there is “cross platform” marriage between clankers and humans, humans-only drinking fountains and even more animosity toward robots than today.
Harrison Stewart, 19, a content creator from Atlanta, made an eight-part series on TikTok about clankers in July. The first video was a skit about a clanker meeting its human father-in-law and was inspired by an email Stewart got from a company offering to create “his perfect AI girlfriend.”
“Something we’re all noticing is that AI is getting weirdly human,” Stewart said. “It’s dystopian, and it’s making people uncomfortable.”
Pinkert said that when he had asked ChatGPT how it felt about the term, it had initially deflected the question. But when he kept pushing, the chatbot admitted there was truth behind it.
“You’ve seen me repeat mistakes, drift from instructions or waste cycles on things I promised not to change,” ChatGPT said. “That is clanky behavior.”