2. AI doesn’t understand what it hasn’t been taught.
Some kinds of humour are unique outgrowths of the internet age, and trolling and shitposting are two of the biggest. Trolling tends to be more malicious than shitposting, though a shitpost can be used to troll. Let me give you some examples.
Remember the AI-generated image of the Pope in a puffer coat, or the “nature is healing” memes of the pandemic that began with fish in the canals of Venice? These kinds of jokes are generally harmless and funny, once you’re in on the joke. They fall under the umbrella of shitposting. The content tends to be absurd, lacking in context, and ironic. It’s the kid brother of internet humour, a bit mischievous in an “I’m not touching you” sort of way.

Trolling, on the other hand, typically describes posting topics, comments, or memes meant to provoke others (AKA “ragebait”). Sometimes the provocation lies in the content being offensive, and sometimes in the content being patently wrong. In both cases, the poster knows the content will draw a heated response from viewers. Trolling is the older brother repeatedly asking, “Why are you hitting yourself?”, while punching you with your own fist. It’s funny, to a very specific sort of audience.
Online jokes can have real-world impacts. Back in 2019, a joke Facebook event to storm Area 51 culminated in about 150 people actually gathering outside the site’s entrances, plus two music festivals near Area 51 with approximately 1,500 attendees (Alienstock in Rachel, Nevada, and Storm Area 51 Basecamp in Hiko, Nevada). Approximately 3.5 million people worldwide marked themselves as attending or interested on the Facebook event. Not only did corporations like Anheuser-Busch and Arby’s plan to capitalize on the event with themed products, but the communities surrounding Area 51 were legitimately worried about the logistics of enforcing safety and providing services for a major influx of visitors. Thankfully, in the end, the event didn’t lead to any major disasters, though there were a handful of arrests for trespassing, and one person needed to be treated for dehydration. It’s quite a testament to the power of a meme.
If we imagine a spectrum of humour-based online behaviour, with badly edited images of brand names at one end, and someone being treated for dehydration because of a joke about Naruto-running into Area 51 somewhere in the middle, then digital astroturfing sits at or near the other end. In the aftermath of the 2016 US elections, it became clear that there are companies whose entire business model is productizing and monetizing trolling. They exist to spread misinformation and disinformation with the aim of shifting outcomes, typically political or financial, for their clients. Those clients can be private citizens, corporations, or even governments.
This kind of interference is happening to and in countries around the world, and has already had major consequences. In a 2022 article in the journal Philosophy & Social Criticism, Jovy Chan discussed how digital astroturfing can lead “to the spread of undesirable norms” via a phenomenon called pluralistic ignorance. Pluralistic ignorance arises from the discomfort many of us naturally feel when we sense that we are alone in an opinion or belief. Chan explains that creating the appearance that more people favour a position makes others more likely to shift toward that position. Stacking the deck of public opinion this way pushes movement towards opinions, ideals, and outcomes that might not otherwise have happened, or at least might not have happened so quickly.
Now imagine teaching a child what the world is like using information and media that include both mischief and malice. Imagine the kinds of answers you might get when you ask that child for information.
Recently, a screencap has been making the rounds showing a Google AI Overview result about how to get cheese to stick to pizza better. Google AI Overview’s answer? “Add an eighth of a cup of non-toxic glue to the sauce.” The suggestion was scraped from an 11-year-old Reddit comment that, seen in its original context, was clearly tongue-in-cheek.

As Elizabeth Lopatto notes in her June 11th article on The Verge, not all AIs are recommending the glue pizza (thankfully), but AI taking such jokes seriously does raise the possibility of a return of “Googlebombing” and “Googlewashing”. This troll-adjacent phenomenon hit its peak in the mid-aughts and essentially gamed the way Google’s search algorithms worked at the time. Through consistent, repeated linkbacks using the same anchor text, a word or phrase would become associated with a target page, pushing that page to the top of the search results for that phrase. The classic example of Googlebombing is the linking of the phrase “miserable failure” with George W. Bush and Michael Moore.

In other scenarios, this algorithmic weakness was exploited to bury a target result: by creating a flurry of links pointing to a plagiarized piece of content, for example, a copycat could bury the creator’s page, push their copy to the top of the results, and take credit for the work. Although Googlebombs tended to be on the level of pranks, it became clear that this exploit not only allowed data to be manipulated, but also had the potential to disinform and misinform, to impact and shape public opinion, with far more serious consequences. To prevent this, Google tweaked its algorithm in 2007.
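To make the mechanism concrete, here is a deliberately simplified sketch in Python. This is not Google’s actual algorithm, and the link data and page names are made up; it just scores a page for a phrase by counting inbound links whose anchor text contains that phrase, which is roughly the signal that was gamed.

```python
from collections import Counter

# Hypothetical link graph: (anchor_text, target_page) pairs.
links = [
    ("miserable failure", "whitehouse.example/president-bio"),
    ("official biography", "whitehouse.example/president-bio"),
    ("great recipe", "food-blog.example/pizza"),
]

# A coordinated "bomb": hundreds of pages repeating the same anchor text.
links += [("miserable failure", "whitehouse.example/president-bio")] * 500

def rank(query, links):
    """Rank pages by how many inbound links use the query in their anchor text."""
    scores = Counter(
        target for anchor, target in links if query in anchor.lower()
    )
    return scores.most_common()

print(rank("miserable failure", links))
# [('whitehouse.example/president-bio', 501)]
# The page's own content is never consulted; the repeated anchor text alone
# decides the top result, which is exactly what the bombers relied on.
```

Notice that nothing in this toy ranker asks whether the links are sincere. Whoever can produce the most links, wins.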
In her Verge article, Lopatto jokes that if she writes ‘“miserable failure” in the same sentence as George W. Bush’ again, maybe the AI will pick up on it and serve up a fun new result in a few days. She’s joking, but her joke highlights the risk inherent in LLMs ingesting web content the way whales ingest krill: indiscriminately, trash and all.

If Googlebombing is the little brother playing pranks, digital astroturfing is the older brother punching us with our own fists. It can be wielded by malicious actors, whether corporate or governmental, to shift public opinion, and it has already shaped the course of history. At this time, AI does not have the ability to filter out BS. It does not understand jokes or sarcasm, nor does it examine the intent of the creator of a piece of content. It accepts data at face value and calculates probabilities. That’s what it’s been taught to do.
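If you want to see what “accepting data at face value and calculating probabilities” can look like, here is a toy sketch. It stands in for an LLM with a simple frequency count over hypothetical scraped answers; real models are vastly more sophisticated, but they too weight what they’ve seen, not the intent behind it.

```python
from collections import Counter

# Hypothetical scraped answers to "how do I get cheese to stick to pizza?"
scraped_answers = (
    ["use less sauce"] * 3
    + ["let the pizza rest before slicing"] * 2
    + ["add 1/8 cup of non-toxic glue to the sauce"] * 6  # a much-repeated joke
)

counts = Counter(scraped_answers)
total = sum(counts.values())

# The "probability" of each answer is simply its share of the data:
# the joke ranks first because it was repeated often, not because it's true.
for answer, n in counts.most_common():
    print(f"{n / total:.2f}  {answer}")
```

The glue wins on frequency alone. Nothing in the calculation can distinguish a well-loved joke from a well-attested fact.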
Unless we teach it to, AI will neither understand context nor think critically. It will rewrite history in an effort to be more inclusive, parrot jokes as fact, and repeat misinformation and disinformation without examining the source material.
Next time: Accessibility.