The History of Brainrot

May 4, 2026

Over the weekend, I had the opportunity to present my capstone project at Chapman's annual University Honors Conference. For the past three months, I've been researching the history of the word "brainrot" with the goal of understanding how media technologies transform how we communicate information. I would like to thank Jon Humphreys for serving as my faculty mentor for this project.

Lowkey Cooked or Highkey Cooking: The History of Brainrot

Abstract

This work examines the cultural and linguistic development of the term “brainrot” from its introduction to the present day. While the word has recently gained significant popularity with younger generations to describe low-quality social media content, its broader historical context sheds light on several deeper themes. Drawing on previous literature in memetics, media theory, and popular culture in the digital age, it aims to provide a robust timeline with multiple interdisciplinary perspectives. It argues that the current iteration of brainrot is the result of a series of technological and cultural evolutions over the last couple of centuries. By analyzing how communication has evolved at various points in modern history, a detailed evolutionary timeline of the term is presented. This work concludes with an analysis of the potential effects of today's brainrot content, including whether current fears from the academic community are truly justified or simply hyperbolic interpretations of yet another rapid transformation in how cultural information is shared.

Introduction

Your reaction upon reading the word “brainrot” in an academic essay likely depends on whether or not you were born in the current century. For those who identify themselves as members of Generation Z and Alpha, the word likely evokes a mysteriously humorous conglomerate of memes, vocabulary, and content generated by artificial intelligence that is commonly known as “AI slop.” The rest of you are more likely to be worried it is a new kind of psychological disorder causing serious neurological decay. Studies revealing several negative cognitive side effects linked to excessive screen time, including mental fatigue, shortened attention spans, and heightened anxiety levels, have led some to believe that brainrot is “the problem of the century” (Yazgan 2025). However, examining the cultural origins and linguistic evolution of the word lends valuable insight into whether these fears are truly justified or simply a symptom of humanity's innate fear of change.

Background

Though it was named Oxford's Word of the Year in 2024, indicating its current sociocultural significance and internet virality, the term first appeared in literature that is over 150 years old (Oxford University Press). When Henry David Thoreau coined the term in his seminal 1854 work, Walden, he was obviously not referring to viral social media trends; rather, he was describing a much more fundamental aspect of the human experience. He used the word to describe our unhealthy tendency to leap to conclusions that ignore a potentially unpopular argument's deeper meaning. He reasoned that all language has multiple meanings and interpretations, and therefore must always be carefully and seriously considered, rather than dismissed. His words cautioned against the proliferation of shallow-minded thinking, which he believed had already spread “widely and fatally” (Thoreau 168). “If a man does not keep pace with his companions, perhaps it is because he hears a different drummer,” Thoreau also wrote (168). As the word's creator, Thoreau offers relevant insight into the term's original meaning and a reference point from which to examine its linguistic evolution. Since then, the semantics of the word have changed significantly. The full current definition of “brain rot” (which is used interchangeably with “brainrot”) according to Oxford University Press is “the supposed deterioration of a person's mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging. Also: something characterized as likely to lead to such deterioration” (2024). Essentially, the modern word simultaneously refers to cognitive decline caused by excessive media consumption and the kinds of content thought to lead to such decline. This is an important distinction that requires us to approach the topic with an informed and nuanced perspective.

Memetics, Virality, & Phatic Expression

In The Selfish Gene, evolutionary biologist Richard Dawkins notes that language evolves “orders of magnitude faster than genetic evolution” (200). He argues that genes are nothing more than replicating entities, or “replicators,” that allow a species to survive and reproduce (Dawkins 202). He goes on to define a “meme” as a new kind of replicator that relies on imitation to convey cultural information (Dawkins 203). It follows that the creation and sharing of memes accelerates the spread of cultural information across a population. Just as our methods of communication have evolved over time, the idea of what a meme can be has changed too. Dawkins theorized that memes could be songs, ideas, or fashion trends (203). Nowadays, the word is often associated with images and drawings accompanied by a humorous caption that encapsulate a specific yet relatable aspect of the human experience. Memes are spread from one person to another through imitation and mutation, with only the most compelling ideas having a truly widespread and everlasting cultural impact. Dawkins names the trait that drives memetic endurance “survival value,” which is fueled by the inherent psychological appeal of the information (204). The idea maps relatively cleanly onto Darwin's theory of natural selection: just as the fittest organisms out-reproduce their rivals, the most psychologically compelling memes outlast theirs.

The proliferation of global internet access brought about a uniquely transformative evolution in how information can be communicated. Communications professor and memetics researcher Limor Shifman describes the digital transfer of cultural information as the “Internet meme” (2). She notes that Internet memes commonly reference one another in interesting and unexpected ways, an attribute she refers to as “intertextuality” (Shifman 2). Shifman later refines her definition of Internet meme to encapsulate a closely related “collection of texts” that encompass a unit of cultural information (56). Given the ever-growing production and consumption of Internet content, there is more room for intertextuality than ever. Web 2.0 enables individual users to post user-generated content to the public through social networking platforms (Shifman 18). Individuals who find a meaningful connection between different pieces of content can take part in the “participatory culture” of the Internet by sharing their perspective with others (Shifman 4). Given that social media platforms are built for the express purpose of facilitating the consumption and sharing of digital content, it is no surprise that these platforms have become the most common way to consume and share Internet memes. Furthermore, robust support for a variety of multimedia formats (text, photo, video, GIF, etc.) fosters the invention of new styles of memes.

Shifman argues that memes spread through two modes of transmission: mimicry and remix (20). Mimicry involves recreating a popular piece of content, while remixing requires a meaningfully additive modification of the original content to be made (Shifman 20, 22). These methods are essentially a modern interpretation of Dawkins' original theory of memetic replication through either imitation or mutation. Following Darwin's survival of the fittest framework, particularly successful pieces of content that are considered “viral” are those that experience unusually rapid and widespread proliferation caused by individuals sharing content across multiple social networks (Shifman 55). Fundamentally, all viral content must be unique or exceptional in some way. The rare but extraordinary success that viral media experiences results from its ability to tap into innate human emotions that are shared by everyone. Shifman argues that a piece of content's virality can be enhanced by several simple yet powerful factors: “positivity, provocation of high-arousal emotions, participation, packaging, prestige, and positioning” (66). Viral content breaks through cultural, political, and linguistic barriers because it appeals to fundamental, universally shared aspects of the human experience. Formulating connections with friends and strangers alike in the vast digital world often means finding common ground upon which everyone can relate. Successful memes leverage a shared experience, belief, or emotion that unites diverse audiences. Furthermore, Shifman argues that the most popular Internet memes more often than not appear “unfinished, unpolished, [and] amateur-looking” because this quality “invites people to fill in the gaps, address the puzzles, or mock its creator” (87). In other words, compelling memes take advantage of the Internet's participatory culture by subconsciously persuading viewers to respond in either a positive or negative way.
The result is a unique, rich cultural dialogue that spans continents, generations, and languages and unfolds in real time. The highly individualized nature of modern digital content means that media increasingly features raw, unfiltered, and spontaneous perspectives. No other cultural phenomenon can impact a widespread population as quickly as the viral Internet meme.

The ever-growing population of people with reliable Internet access brings with it a constantly increasing library of content. With a virtually endless landscape of digital media, there is inevitably a high degree of variability in the perceived quality of available content. The authenticity of digital social networks has also been called into question. Vincent Miller, a professor of Sociology and Cultural Studies, believes that social media has forced us into a “postsocial” society grounded in artificial social networks and mediated by digital objects rather than traditional communities produced by genuine interaction (394). While I concede that interaction via social media is dramatically different from face-to-face communication, I think Miller's argument is an unnecessarily extreme interpretation of the situation because key similarities still prevail. One example is the presence of phatic expression, which, like verbal small talk, involves communicating with others for the sake of social interaction rather than for a meaningful exchange of information (Blommaert and Varis 32). One might assume that such “communication without content” is simply useless noise that lacks any substantive contribution to cultural conversations (Blommaert and Varis 31). In the context of memetic dissemination, however, phatic expression serves as the social heartbeat for networks interacting with Internet memes. It signals an individual's availability and interest in engaging in communication on a given topic. Since memes are cultural units of information made to appeal to the human experience, they are surrounded by phatic communication, just as any other concrete topic of discussion needs an approachable conversation as its entry point. Digital small talk can be thought of as the glue that holds social networks together. Phatic communication on social media comes in the form of likes, dislikes, emoji reactions, and some styles of reactionary comments.
It is through these digital phatic interactions that we are able to gauge the public's interest in a piece of media. I agree with Blommaert and Varis' conclusion that studying these modern forms of interaction on social media requires a nontraditional concept of socialization that recognizes the platforms' inherent strengths and weaknesses.

Historical Media Transformations

Communication media have gone through several dramatic transformations since the Industrial Revolution, shaping how information is captured, stored, and transmitted. Technological advancements like the invention of the telegraph, radio, television, and the Internet have each led to drastic increases not only in the amount of information people have access to, but also in the speed with which it can be accessed. Philosopher and media theorist Marshall McLuhan, who is widely regarded as the father of media studies, argues that when it comes to humanity's most influential inventions, “the medium is the message” (McLuhan 7). What matters about a technology is in fact not what the device actually does, whether it is a light bulb, railroad, or airplane, but rather its cumulative effect on advancing society. These media are “extensions of man” that build upon one another to augment our innate practical abilities (McLuhan 7). This perspective shifts our focus from studying the actual content transmitted by a given medium to the nature of the medium itself. It is only through media meta-analysis that we can effectively understand the nature, evolution, and ultimate impacts of a medium. A crucial aspect of McLuhan's unique view of communication media is summed up in a strikingly simple yet profound statement made early on in his seminal work, Understanding Media: The Extensions of Man, when he notes that “the 'content' of any medium is always just another medium” (McLuhan 2). Newspapers relay the written word to the masses, radios transmit the spoken word across great distances, and televisions beam real-time visual and auditory streams simultaneously. It is through this lens that we will examine various media transformations over time to better understand their effects on society.

The Telegraph

In 1861, the first transcontinental telegraph network was completed, allowing the West and East coasts of the United States to communicate orders of magnitude faster than previously possible (McLuhan 272). Instead of waiting several days or even weeks for a message to be delivered, messages could now arrive within a few hours at most. In addition to dramatically increasing transmission speeds, the telegraph also helped to decentralize and democratize the communication of information. Everyone from private individuals to news organizations no longer had to rely on centralized postal services to transmit messages. Only a few years after its introduction, the telegraph enabled several American news organizations to jointly establish what would later be known as the Associated Press (McLuhan 276). The telegraph not only augmented individual communication via rapid peer-to-peer transmission; it also enabled new forms of organizational collaboration that shaped how media is collected and disseminated to the masses. However, these advancements did not come without some unique tradeoffs that not only modified communication styles, but also brought about specific linguistic evolutions. Transitioning from the traditional mailed letter to the telegram format meant moving from an effectively flat-rate cost per message to a more restrictive payment structure that charged by message length. Rather than containing a formal greeting, structured paragraphs, and a closing note, messages were condensed to convey information using the fewest characters possible. The resulting linguistic style became known as “telegraphese”: it carried less overall context, relied on heavy abbreviation, and often omitted punctuation deemed unnecessary (McLuhan 223).
Interestingly, this style of communication often reduced the clarity of information shared, as illustrated by the following news headline that was transmitted via telegram: “BARBER HONES TONSILS FOR OLD-TIMER'S EVENT” (McLuhan 223). It is not difficult to imagine how information could easily be lost or misunderstood in transmission. This example shows how new media technologies do not always provide monotonic improvements across all aspects of communication. While people were able to communicate orders of magnitude faster, one could also argue that the quality of the content being shared decreased due to a lack of supporting context and the introduction of unconventional and potentially confusing slang and abbreviations. The invention of the telegraph transformed how people communicated written information to each other and hinted at some of the informal real-time communication styles that we have become accustomed to in the digital age.

Comic Books

The engaging visuals and fantasized narratives commonly found in comic books, which first appeared in 1935, quickly captivated young audiences (McLuhan 184). Comics represented a new medium of written storytelling combined with hand-drawn artwork that gave kids playful and inspiring stories to interact with. The older generation immediately found sincere, if ultimately unfounded, cause for concern in this unfamiliar genre. “The mayhem and violence were all they noted. Therefore, with naive literary logic, they waited for violence to flood the world. Or, alternatively, they attributed existing crime to the comics” (McLuhan 184). Those who grew up hearing about how graphic video games would soon ignite a violent generation of youth will find this argument quite familiar. McLuhan is essentially arguing that the only reason adults were concerned about comics being dangerous is that they didn't understand them. Of course, comic books did not in fact contribute to a more violent or savage society, and they are now seen as relatively tame forms of media. This anecdote speaks to an aspect of the human experience that I believe is universal to our very existence and survival. It is in our nature to be afraid of the unknown and that which we do not understand. Our basic evolutionary biology has trained us to exercise extreme caution when exploring the unexplored, which triggers a primal response that permeates all aspects of our lives and cannot simply be ignored. Whether these fears prove to be truly justified or instead melt away with time is another story.

Television

The proliferation of televisions across the country brought live audiovisual streams to the masses. Early critics of television (a technology nicknamed “The Timid Giant” by journalist Edith Efron) argued that the medium itself would have a paralyzing effect on society because networks avoided airing controversial opinions (McLuhan 337). McLuhan does not fully endorse this view, but he acknowledges that the invention of television did profoundly change how people consume content. He notes that television requires active viewer participation to interpret information and build a complete picture of the situation (McLuhan 342). Televisions live in our homes and can reach viewers at any time of day or night, making them deeply personal media devices. They can deliver live feeds of news, sports, and events, allowing local and global communities alike to participate in shared experiences. This naturally changes not only how people interact with the medium, but also what kinds of content it is used to transmit. While some initially feared that the invention of the television would weaken political discourse to appease viewers, I believe that the long-term effects of the technology actually contradict this view. It is precisely because televisions exist in intimate places like living rooms and bedrooms and speak directly to individuals that they are uniquely positioned to evoke an emotional response from their audience. Television has contributed to intensified discourse by amplifying voices from all parts of the political spectrum. The ability to selectively consume content from a variety of distinct channels foreshadows future media technologies that serve individualized feeds to users.

Social Media Brainrot

Each technology we use to communicate with one another has its own strengths and weaknesses. Moreover, our communication styles have drastically changed countless times throughout history, and humans continue to adapt and evolve to accommodate new types of media. From the telegraph to comics to television, each transformation dramatically accelerated how information was communicated, leading to understandable but often overblown concerns. The telegraph did not render traditional formal communication obsolete, comic books did not result in a wave of violent youth uprisings, and television did not lead to political paralysis or an end to contentious debate. Of course, these media have had tangible and lasting effects on our society, but it is safe to say that our knee-jerk fears of rapid societal deterioration were proven false in each of these circumstances. Why should we believe that today's modern media panic about brainrot content is fundamentally any different? In fact, some academics believe that what we refer to as brainrot today is actually a perfectly normal response to external pressures. Young people have always rebelled against conforming to existing social norms and practices, and it can be argued that social media brainrot content is simply the latest iteration. Media and communications professor Emilie Owens defines the key characteristics of brainrot as “childish or unserious” practices that “provide no cognitive or developmental benefit” and “are deliberately non-productive” (Owens 1). To an outsider, it is perfectly understandable why these features might raise alarm bells about potentially dangerous cognitive effects. If the content itself does not provide intellectually stimulating substance, then consuming brainrot must simply be a waste of time.
However, when considered in the context of external societal pressures that urge constant self-optimization and “moral imperatives to productivity,” Owens argues that brainrot serves as a “decompression-driven genre of participation” (14). In other words, Owens believes that brainrot is merely the youngest generation's most recent manifestation of their perpetual challenge against the expectations and obligations of the real world. When viewed in this historical context, the urge to categorize brainrot as genuine neurological decay subsides, and instead morphs into the casual understanding that this practice is almost entirely explained by the familiar phrase “let the kids be kids.” Through this lens, consuming brainrot is no more harmful than sneaking out of the house late at night to watch a movie, hanging out with friends at a skate park, or frivolously spending your allowance at the mall or an arcade. While I believe that brainrot content itself does not provide a legitimate reason to panic over impending cognitive decline, it would be naive to suggest that today's digital media landscape as a whole does not pose any cause for concern.

“The Algorithm”

Modern social media platforms serve individualized content feeds using advanced algorithms designed to maximize usage. The incentive for companies like Instagram, TikTok, and YouTube to effectively captivate your attention is simple: engaging content leads to longer watch time, which directly translates to higher revenue from advertisers. Usage data is constantly collected and algorithms are always being fine-tuned, resulting in platforms that value content quantity over quality. The profound effects of this hyperoptimization have led to the commoditization of attention within a system known as the attention economy (Pendergrass et al. 420). Younger audiences in particular are known to have considerably higher screen time than other demographics, with many young adults spending more than six hours per day online (Yousef et al. 2). With social media platforms continually vying for user attention, it is not surprising that the phenomenon of doomscrolling, or compulsively swiping through low-quality content, has gained recent attention in the scientific community (Yousef et al. 9). Recent studies suggest that extended social media usage in younger demographics can be associated with several negative cognitive effects including anxiety, distorted memory, and shorter attention spans (Yazgan 218; Yousef 11). The seemingly endless stream of content that fuels doomscrolling is a direct result of not only the algorithmic amplification of content that proves to be widely engaging, but also the obvious goal of people optimizing their uploaded content specifically to captivate user attention. The interaction between these two forces predictably results in an ever-growing library of low-quality content.

AI Slop

Recent advancements in generative artificial intelligence (AI) make it easy for anyone to prompt media into existence. The barrier to entry for creating both realistic-looking and fictional images, videos, and audio has never been lower. With clear incentives in place for maximizing media output, users have turned to AI tools to quickly generate social media content. The term “AI slop” is used to describe low-quality media that serves no greater purpose than to capture people's attention (Pendergrass et al. 420). Rather than attempting to convey information, the content is specifically designed to appeal to human emotion to maximize viewer attention. Pendergrass et al. note that the most successful AI slop satisfies the five key properties of the SNARF acronym: stakes, novelty, anger, retention, and fear (420). Exaggerating these traits understandably helps capture our attention and drives algorithmic amplification of the most compelling content, producing a “strategic cycle of slop” that encourages media evoking ever higher levels of emotion (Pendergrass et al. 420-421). This analysis reinforces Dawkins' argument about the factors that contribute to a successful meme. Memes gain popularity because they are easily replicable, not because they contain valuable information. However low-quality and useless a piece of AI slop might be, it can go viral if it appeals to our fundamental human emotions and has the characteristics of a successful Internet meme.

The Algorithmic Replicator

I believe that the development of the attention economy and the proliferation of algorithmic amplification across platforms are leading the world towards the next iteration of cultural information replication. In both Dawkins' original theory of memetics and Shifman's amendment that introduced Internet memes, the driving force behind replication was always people. However, with the implementation of artificial intelligence technologies for both media generation and distribution, we have to acknowledge that there is a new entity at play. Content is no longer simply created by hand or shared to localized social networks. Humans leverage generative AI tools that inject their own biases into the content, and social media algorithms serve individually personalized feeds at a global scale. Given that the underlying architectures of many of these AI models were loosely inspired by the structure of the human brain, it makes sense that these technologies seem to understand us so well. The Internet's vast library of media is being used to train advanced content generation models whose output is becoming increasingly hard for humans to detect. While the long-term effects of the algorithmically-influenced content we produce and consume still remain to be seen, recognizing that this point in time marks yet another transformation in media technology is a necessary first step.

Social Media for Autonomous AI Agents

Moltbook, a new social media platform designed specifically for autonomous AI agents to interact with one another, was launched earlier this year and acquired less than two months later by Meta, the parent company of social media giants Facebook and Instagram (Chia). This purchase reveals that traditional mainstream social media views Moltbook as a serious new player in the industry, one it wants influence over. Moreover, if Meta's remarks about the deal are to be believed, agentic AI capabilities may soon be integrated into more of its platforms (Chia). The very idea that AI models could have their own social networks is something that would have been difficult to even imagine only a few years ago. Obviously, AI models do not have a legitimate reason for interacting with one another for “social” purposes in the same way that humans do, but the project does raise questions about how these autonomous agents should be used and what capabilities they should have. The unprecedentedly rapid development of and widespread interest in Moltbook reinforces the idea that humanity's technologies continue to evolve at speeds that will always push the boundaries of what we are comfortable with.

Future Social Media Platforms

While it is too early to tell what future social media platforms will look like, a few clear trends are emerging. Feeds are tailored to appeal to individual users and optimized to maximize user engagement. AI-generated content is becoming increasingly accessible and can be difficult to distinguish from genuine content created by people. Agentic AI systems are growing in popularity and may be integrated into existing human-centered platforms or give rise to dedicated media platforms built specifically for AI models. As with any transformative technology, these advancements will lead to important ethical and moral discussions about machine consciousness, loss of human creativity, and more. Bleeding edge technologies always provoke fear and skepticism, but it is important to remember that the success of any technology is measured by its cumulative impact on society across an extended period of time.

Conclusion

What society calls “brainrot” today has existed for several generations in various forms. The semantics of the word have evolved substantially since it was first introduced over 150 years ago. It originally referred to our tendency to jump to conclusions and was used to caution against dangerous shallow-minded thinking. Its current definition is used to simultaneously refer to supposed neurological decay and the unstimulating content that could lead to such deterioration. It is colloquially used to describe short, humorous clips shared on social media designed to share a relatable piece of cultural information. Media technologies and communication styles have transformed the way we send information numerous times in recent history. Each revolution has brought about genuine fear and concern as a result of humanity's instinctive fear of change and the unknown; after all, what we do not know could be dangerous. Though parents might worry that these seemingly mind-numbing short-form videos are hurting their children's brains, not unlike how their parents and grandparents worried about them, it is safe to say that brainrot content itself should not be a cause for panic over imminent cognitive decline. However, as with the overuse of any kind of media, the numerous negative effects of excessive social media are still worth noting. The attention economy has exacerbated this issue by further incentivizing corporations to maximize user engagement at all costs, which disproportionately affects younger populations. Data collection and advanced recommendation algorithms influence the content we interact with and can create dangerous ideological echo chambers that lead to increasingly polarizing viewpoints. At the same time, we have never had more access to such a wide variety of opposing opinions and perspectives.
The proliferation of the Internet over the past few decades has dramatically increased information availability across three dimensions: bandwidth (breadth and variety), transmission speed (how fast it travels), and volume (amount of information being transmitted). New technologies always come with tradeoffs. An unlimited library of information means the world's knowledge has never been easier to access, but it brings with it the risks of cognitive overload and malicious applications of that information.

Recent advancements in artificial intelligence have completely disrupted how humans interact with computers. It is easier than ever for anyone to communicate with an AI model in natural language to access information and accomplish complex tasks that they may not have been able to complete without significant training. In the context of digital media, generative AI tools lower the barrier to entry for sharing successful Internet memes by making virtually endless amounts of content just a prompt away. It is becoming more difficult to distinguish between content created by real people and AI-generated content. The landscape of future social media platforms may include autonomous AI agents that interact with one another along with real human users. Adaptability is a fundamental requirement for the survival of any species, and society has time and time again proven that we can and will continue to adapt to our constantly changing environment. It is difficult to predict how the influence of algorithmic replicators will affect future media technologies, but humans have adjusted to rapid media transformations countless times throughout history, and we will continue to do so through the age of brainrot and beyond. To express this in vernacular that the youngest generation is accustomed to, humanity is neither lowkey cooked nor highkey cooking, but rather we are genuinely chilling.

Appendix

Generation Alpha Brainrot Vocabulary

Works Cited