The Medium Is the Message: What the Dead Internet Theory Reveals About Our AI Future

Dead Internet Theory Enters the Mainstream Debate

On September 3, 2025, OpenAI CEO Sam Altman posted a tweet that referenced the Dead Internet Theory:

“i never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now.”

The remark drew attention not because it endorsed the theory, but because it acknowledged the increasingly visible presence of automated accounts on social media. Altman has previously discussed technical solutions such as cryptographic verification of content, so his brief observation was read as another sign that questions about authenticity online are no longer confined to fringe discussions.

Reactions were mixed. Some users highlighted the irony of one of AI’s most prominent figures pointing to a problem his own industry has accelerated, while others treated the tweet as a joke. Media coverage picked up on this tension, emphasizing the gap between Altman’s role in advancing AI and his recognition of its side effects.

What stands out is less Altman’s individual position than the broader shift: the Dead Internet Theory, once a marginal idea, has become a touchpoint in debates about automation, authenticity, and the future of digital spaces.

The Dead Internet Theory is best understood not as a literal truth but as a symptom of a much larger shift. It reflects growing concern about large language models: a new medium that, as Marshall McLuhan's work suggests, is fundamentally reshaping our relationship with information and trust.

1. Acknowledging the Present-Day Problem

Altman’s tweet is notable because it points to the issue as visible now rather than as a hypothetical concern. The spread of LLM-run accounts on X reflects the central idea of the Dead Internet Theory: that online spaces risk being saturated with non-human content.

2. The Irony and the Criticism

The reaction to Altman’s post was swift. Many users noted the irony, with some commenting along the lines of: “My man, you built the foundation for the Dead Internet Theory.” Media outlets like PC Gamer echoed this criticism, comparing it to an arsonist marveling at a fire. The response highlights a central tension: the same companies advancing AI are also grappling with its unintended consequences.

3. From “Solvable Problem” to “Plausible Concern”

In earlier public statements, Altman emphasized long-term technical fixes, such as signing mechanisms to verify human-authored content. By conceding that “there are really a lot of LLM-run twitter accounts now,” he acknowledged the immediacy of the issue. While he did not claim that AI-generated content has definitively drowned out human voices, the comment reflects a shift from abstract optimism to recognition of a present challenge.
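The signing mechanisms Altman has alluded to are essentially applied cryptography: a publisher attaches a signature to content, and anyone downstream can check that the content is unmodified and came from the claimed source. The sketch below is a deliberately simplified, hypothetical illustration using a shared-secret HMAC from Python's standard library; real content-provenance proposals (such as C2PA-style credentials) use public-key signatures so that verification does not require sharing a secret.

```python
import hmac
import hashlib

# Hypothetical shared secret for illustration only; real provenance
# schemes use asymmetric key pairs, not a shared secret.
SECRET_KEY = b"publisher-signing-key"

def sign_content(text: str) -> str:
    """Produce a hex signature binding the key to the exact text."""
    return hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_content(text: str, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_content(text), signature)

post = "This post was written by a human."
sig = sign_content(post)

assert verify_content(post, sig)                    # untouched content verifies
assert not verify_content(post + " (edited)", sig)  # any tampering breaks it
```

The point of the sketch is the verification asymmetry: generating convincing text is cheap, but forging a valid signature without the key is not, which is why provenance schemes are a recurring proposal for re-anchoring trust online.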

What This Means for the Broader Discussion

Altman’s comment does not endorse the conspiratorial aspects of the Dead Internet Theory, such as claims of government orchestration. But it does highlight its central symptom: the internet is increasingly populated by non-human content. When one of the most visible figures in AI concedes the point, it signals that concerns about authenticity online are no longer marginal; they are now part of the mainstream debate about how digital spaces function.

What is Dead Internet Theory?

The Dead Internet Theory is often described as a conspiracy theory, but it resonates because its core concern reflects something real: bots, algorithms, and AI-generated content make up a growing share of the internet we encounter.

In many discussions of the theory, a turning point is placed around 2016–2017. At that time, automated content, fake user profiles, and targeted influence campaigns became more visible, raising concerns about the decline of authentic online voices. Documented cases such as the Cambridge Analytica scandal and evidence of organized bot campaigns during elections are often cited as evidence that corporations and governments have, at times, deliberately manipulated online spaces.

Main Ideas:

  • The Proliferation of Bots
    Independent research shows that bots generate a substantial share of internet traffic. Reports in recent years estimate bots account for nearly half of all online activity, ranging from harmless search engine crawlers to malicious bots designed for spam, scraping, or disinformation.

  • The Rise of Generative AI
    The spread of AI tools, particularly large language models (LLMs) and image generators, has intensified fears about authenticity. Because these systems can create human-like text, images, and videos, some worry that online content may increasingly originate from machines rather than people.

  • Repetitive and Unoriginal Content
    Adherents to the theory often point to the recycling of memes, phrases, and discussions across platforms. They interpret this repetition as evidence of automated or scripted content, though it can also be explained by human online culture and algorithm-driven amplification.

  • Algorithmic Curation and Echo Chambers
    Social media and search engines rely on recommendation systems designed to maximize engagement. These systems are known to create echo chambers. Proponents of the theory argue that algorithms also shape public narratives in ways that can suppress dissenting views, pointing to scandals like Cambridge Analytica as evidence of targeted manipulation.
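The engagement-maximization described in the last bullet can be made concrete with a toy ranking function. Everything here is hypothetical (the signals, the weights, the posts); production feed rankers are vastly more complex, but the core loop is the same: score each item by predicted engagement, then sort.

```python
# Toy feed ranker: score posts by a weighted sum of engagement signals.
# All weights and data are illustrative, not any platform's real formula.
posts = [
    {"id": 1, "clicks": 120, "shares": 4,  "dwell_seconds": 35},
    {"id": 2, "clicks": 40,  "shares": 30, "dwell_seconds": 90},
    {"id": 3, "clicks": 300, "shares": 1,  "dwell_seconds": 5},
]

def engagement_score(post: dict) -> float:
    """Weighted sum of engagement signals (weights chosen arbitrarily)."""
    return (1.0 * post["clicks"]
            + 5.0 * post["shares"]
            + 0.5 * post["dwell_seconds"])

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # → [3, 2, 1]
```

Note what the toy example surfaces: post 3, which harvests clicks but holds attention for only seconds, outranks post 2, which people actually read and share. Optimizing for engagement signals rather than quality is precisely the dynamic that critics link to echo chambers and low-value automated content.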

Origins and Evolution

The Dead Internet Theory is believed to have originated in online forums and imageboards in the late 2010s and gained more mainstream attention in the early 2020s. It has evolved from a niche conspiracy theory to a more widely discussed topic, particularly with the explosion of generative AI. While initially focused on the idea of a "hollow" internet devoid of real people, the theory has expanded to encompass concerns about the nature of reality and the erosion of trust in the digital age.

Counterarguments and Criticisms

Despite its growing popularity, the Dead Internet Theory is not without its critics. Many view it as an exaggeration or a "paranoid fantasy." While acknowledging the real issues of bot activity and online manipulation, they argue that the claim that the majority of the internet is "dead" is not supported by concrete evidence.

Skeptics point out that the internet is a vast and diverse space, and while some areas may be heavily automated, there are still countless communities and platforms where genuine human interaction thrives. They also argue that the theory often conflates different types of bot activity, lumping in harmless automated processes with more sinister forms of digital manipulation.

Ultimately, while the more extreme claims of the dead internet theory remain unproven, it taps into a growing sense of unease about the direction of the digital world and the increasing difficulty of distinguishing between authentic human expression and artificial or misleading content.

Historical Context and Parallels

Throughout history, every leap in communication technology has brought both opportunity and anxiety. Each era has followed a familiar cycle: a surge of low-cost content, new forms of manipulation, public unease about authenticity, and eventually the development of countermeasures in the form of technical fixes, legal frameworks, or cultural norms.

Socrates worried that writing would erode memory and provide only the “semblance of wisdom.” Centuries later, the printing press broke the Church’s monopoly on knowledge while also unleashing pamphlet wars, hoaxes, and propaganda. Publishers eventually introduced imprints and mastheads as markers of credibility. The nineteenth century saw the penny press and yellow journalism prioritize sensational stories over accuracy until libel standards and professional ethics helped restore trust. Telegraphy accelerated both facts and rumors, pushing wire services toward neutral style guides.

In the twentieth century, radio and television concentrated persuasive power in the hands of states and corporations, which sparked regulation and media-literacy campaigns. The early web was quickly overwhelmed by spam and low-quality content farms until spam filters, CAPTCHAs, and search-quality updates shifted the balance. Social platforms later introduced algorithmic feeds, bot campaigns, and scandals such as Cambridge Analytica, once again sharpening anxieties about manipulation.

Seen in this longer arc, the Dead Internet Theory is less an unprecedented phenomenon than a modern reformulation of age-old fears about authenticity and control. What makes the present moment distinctive is the scale, speed, and sophistication of AI-driven automation. In the past, communication technologies spread one message to many, such as a pamphlet, a radio broadcast, or a television program. Today, AI systems do not simply broadcast. They interact with each individual, learning from clicks and preferences in order to deliver a uniquely persuasive stream of content designed to capture attention and shape behavior.

Earlier eras were constrained by human bottlenecks. Content creation required authors, editors, printers, and producers, all of which placed a physical ceiling on output. Now, generative AI can produce plausible text, images, audio, and code at superhuman scale and at negligible cost. Distribution has also changed. Where pamphlets and books once spread only as fast as physical transport allowed, today an AI-generated post or video can reach millions of people worldwide within minutes.

Finally, propaganda and forgeries have always existed, from exaggerated stories in newspapers to doctored photographs. But convincing fabrications once required skill and resources. The advent of deepfakes collapses the old boundary between reality and illusion. Ultra-realistic video and audio can now be created by anyone, making it increasingly difficult to trust even our most basic senses. A fabricated speech by a world leader or a fake plea from a family member can be made to look and sound indistinguishable from the real thing.

In this sense, the Dead Internet Theory expresses familiar concerns through new technologies. The underlying human fears about truth, authenticity, and manipulation are timeless. What has changed is the unprecedented scale, speed, and intimacy with which those fears now play out.

When Technology Shapes How We Think

Marshall McLuhan’s phrase “the medium is the message” highlights that technologies matter less for the content they deliver than for the ways they reshape perception and social organization. The printing press didn’t just distribute books, it standardized language, fostered individual reading practices, and helped create the conditions for nationalism and scientific culture. Television didn’t just show programs, it made speed, simultaneity, and image-based persuasion dominant cultural forms.

Large language models can be seen in this light. Rather than neutral assistants that simply provide information, they function as a medium: changing what authorship means, blurring who (or what) is speaking, and setting new expectations for communication speed and style.

In McLuhan’s terms, the message of LLMs isn’t the specific text they produce but the broader cultural shift: we are getting used to treating language that doesn’t come from lived experience as normal communication.

McLuhan also warned that every new medium reshapes our sensory and cognitive balance. For LLMs, the shift is from evaluating authority through context (who is speaking, what their stake is) to evaluating it through form (does this text sound coherent, does it resemble trusted genres). The risk is that coherence itself becomes the new marker of truth, regardless of source.

Conclusion: LLMs as a Medium and the Challenge of Adaptation

McLuhan’s idea reminds us that technologies reshape perception and social norms as much as they convey content. The printing press altered how societies thought, television redefined authority through image, and social media elevated virality over reliability. Large language models belong in that lineage. Their message is not any single answer they generate, but the cultural shift they bring.

This is why literacy matters. Media literacy equips us to see the biases and manipulations of every medium; futures literacy asks us to anticipate how those dynamics might evolve rather than reacting after the fact. If we continue to treat LLMs as neutral assistants rather than as a powerful new medium, we risk letting them set the conditions of discourse before we even realize the shift has occurred.

Imperva’s 2025 Bad Bot Report shows bots now account for 51% of all web traffic, a milestone that makes the Dead Internet Theory feel less far-fetched. But declaring the internet dead misses the point. What’s really changing is how hard it is to recognize human voices within automated fluency. The challenge ahead isn’t survival; it’s literacy.
