Who’s Really Online? A Look at the Dead Internet Theory
A robotic hand touching a virtual network. Photo by Tara Winstead via Pexels.
The internet is a widely accessed resource, with over 5.35 billion users around the world searching for, uploading and downloading content. But a large human user base doesn’t mean that everyone we encounter online is a person: a growing share of online activity comes from automated accounts, or bots. This rise in bot activity is seen as one of the many signs of the “Dead Internet Theory.”
As The Atlantic defines it, the theory holds that “the internet has changed from information being more freeform into information being shown to you being controlled by fake accounts and artificial intelligence.”
The article further states that although platforms like X were once perceived as a single shared space, modern AI now controls what each user sees. This has transformed these platforms into a collection of isolated, disconnected rooms that users can’t escape from and may not even realize they are in.
Several social media platforms, such as Instagram, YouTube and Facebook, have seen steady growth in automated activity, with bots rising from 43% in 2013 to 49.6% in 2023. An estimated 32% of these bots operate fake accounts used to scam people out of data and money.
Algorithms play a large part in deciding the content we see. Algorithmic review of content has been in practice for years, and was demonstrated at scale in 2019, when Facebook worked to remove every post showing the terrorist shootings at two mosques in Christchurch, New Zealand.
In a report from the Meta Newsroom, Chris Sonderby, VP and Deputy General Counsel for Meta, stated that Meta removed approximately 1.5 million videos of the attack globally. “More than 1.2 million of those videos were blocked at upload, and were therefore prevented from being seen on our services,” he added.
These videos were preemptively removed by algorithmic commercial content moderation, which is defined by a Sage research study as “systems that classify user-generated content based on either matching or prediction, leading to a decision and governance outcome (e.g. removal, geoblocking, account takedown).”
This technology is used primarily to flag content that breaks a given site’s rules about what can be shared. But there is no external limit on what each website can consider a violation, and no rule requiring that content moderation be applied equally to all users in terms of what they can see.
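To make the “matching” half of that Sage definition concrete, below is a minimal Python sketch of hash-based matching, one common way uploads are blocked before anyone sees them. The blocklist value, function names and plain SHA-256 fingerprint are illustrative assumptions; real platforms rely on perceptual hashes that survive re-encoding and cropping, which this sketch does not model.

```python
import hashlib

# Illustrative blocklist of fingerprints of known violating files.
# (Hypothetical value; real systems share hash lists through
# industry databases rather than hard-coding them.)
KNOWN_VIOLATING_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest used as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def moderate_upload(data: bytes) -> str:
    """Classify an upload by matching, then return a governance outcome."""
    if fingerprint(data) in KNOWN_VIOLATING_HASHES:
        return "blocked_at_upload"  # never becomes visible on the service
    return "published"              # may still be removed later by review

print(moderate_upload(b"some user-uploaded video bytes"))  # -> published
```

The “prediction” half of the definition would replace the set lookup with a trained classifier’s score, but the decision step, mapping a classification to an outcome such as removal or geoblocking, works the same way.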
AI-generated content has also grown rapidly following the rise of generative AI tools. When these images are liked or viewed, recommendation algorithms can boost them to significantly larger audiences. Even though the images are fake, their impact is real: many viewers believe them to be genuine, which shapes how the public sees celebrities and political figures.
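As a rough illustration of why engagement alone can amplify fake images, consider the toy ranking sketch below. The scoring weights and field names are assumptions invented for this example, not any platform’s real formula; the point is that authenticity is simply not an input, so an AI-generated image that draws more likes and views outranks a genuine one.

```python
from dataclasses import dataclass

@dataclass
class Post:
    caption: str
    likes: int
    views: int
    ai_generated: bool  # known to us here, but invisible to the ranker

def engagement_score(post: Post) -> float:
    # Toy formula with made-up weights: nothing here checks whether
    # the content is real, only how much engagement it has drawn.
    return 2.0 * post.likes + 0.1 * post.views

feed = [
    Post("real vacation photo", likes=120, views=4_000, ai_generated=False),
    Post("fake celebrity image", likes=900, views=50_000, ai_generated=True),
]

# The fake image ranks first purely because it drew more engagement.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>8.1f}  {post.caption}")
```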
One Marist University sophomore, who wishes to remain anonymous, has felt the impact firsthand. “As someone who’s very active in online spaces, I’ve definitely noticed the increase of bot accounts,” they said. “On X they’re incredibly prevalent in replies. There’s no point in checking the replies of a popular post because it’s like half bots. I imagine bot activity will only increase with time, so yeah, it makes social media feel incredibly braindead.”
With an established 5.35 billion human users, hesitation to believe that the internet is run solely by bots is understandable. “It seems like it has some validity, but we use the internet every day so I’m not sure how it’s only bots,” said Melissa Chodziutko, lecturer of computer science at Marist.
Whatever the case, with the rise of bots and AI-generated content, it’s always a good idea to verify that content is human-made and real before interacting with it. While individual users can’t control how algorithms work or halt the rise of online bots, they can take responsibility for staying informed about how they interact with the digital world.