
Otto Sapiens, the Last Model Before Extinction

May 14, 2026 | 62 min | anthropology

A diagnosis of the Stone Age brain under the screen, the voluntary surrender of thought, and the single moment where Otto Sapiens never wants to be

There is an old wall clock in my study whose pendulum makes every single second audible, and those of you who follow this blog and read my texts with the attention I bring to the writing of them will perhaps imagine that I often pause for a longer beat before the next sentence and that during this pause I do nothing other than listen to the clock. Tick. Pause. Tock. Pause. And in this one pause that lies between tick and tock, life lasts longer than in 2 hours of Instagram, because in this pause something actually takes place, namely the moment in which a human being is still alive, instead of existing as a sequence of stimuli that washes lifetime out of their hands like sand, without them noticing.

I begin with this wall clock because it is the exact opposite of what I will be writing about on the following pages, and because it is the acoustic proof that time exists, even when modern man has accustomed himself to ignoring it. I will be writing in this piece about a variant of Homo sapiens that I have been calling Otto sapiens for years, and I will not use this designation as a satirical ornament but as an anthropological category that I consider useful, because it describes an empirically describable end-form of modern man that was not newly created by smartphone, Instagram, TikTok and generative artificial intelligence, but rather poured by these tools into a shape that previously existed only as a tendency and has now become the dominant species in the consumer societies of the West. I will claim that this end-form replaces Homo sapiens, that it will not survive him, because it has lost the most important tool with which our species has so far survived, namely independent thought, and that it has not lost this tool through external violence but voluntarily given it up with its own hand, with enthusiasm even, because thinking made it afraid and fear is what Otto sapiens hates more than anything else in the world.

That is the thesis I will be unfolding on the following pages, and I will not assert it polemically, I will ground it scientifically, with studies from the past 2 years, with findings from paleoanthropology, with neuroscientific data on metacognition, and with historical patterns from 100 years of recurring societal crises. By the end I will have pinned down Otto sapiens precisely, like an insect specimen, and I will at the same time have shown that this piece I am writing now will not be read by precisely those people who are described in it, because it is longer than the seven-second span their attention has been trained on. That is the polemical punch line I anticipate, because it is the only one against which Otto sapiens has no defense. He cannot refute this piece by reading it, because he will not read it, and that fact alone confirms every claim it contains.

The mozzarella that does not fall from laughter

I am sitting at my regular table in a small pizzeria I have been visiting for many years, at a table from which I can see the other guests without becoming conspicuous in their field of vision, and the proprietor, a taciturn man with the face of someone who has seen too much, brings me as always a glass of water and my unchanging pizza, for which he no longer takes an order because he knows what I will eat before I have taken off my coat. We nod to one another, he leaves, and I begin the only activity I pursue in this place besides eating. I observe.

At the neighboring table a couple is sitting, perhaps in their early forties, both well dressed, both with the posture of people who burn enough energy in the hamster wheel between eight in the morning and six in the evening to be incapable, afterwards in a restaurant, of holding a conversation that gets by without a screen. She has just said something that I could not make out, and he answers with a question that is evidently of a scientific nature because it contains the word why, followed by a biological phenomenon whose precise formulation escapes me. She looks at him inquiringly. He reaches into the inside pocket of his jacket, pulls out a smartphone, types something into it, waits 3 seconds, then reads her an answer that obviously comes from ChatGPT because it has that peculiar tone that suggests completeness without actually having explained anything. He reads with emphasis, as though he had just thought up this answer himself. She nods. He nods. Both have now become more intelligent, or so they believe, which, for the process I am observing, amounts to exactly the same thing.

I take a bite of my pizza, and the mozzarella, for which my mouth had actually prepared a joyful reception, falls back out onto the plate, because in this very moment my chewing muscles refuse the command necessary for closing the mouth. This happens to me sometimes in this pizzeria, and not because something funny has befallen me, but because in such moments I experience a very sober sensation that translates best with the word shock. I am shocked when I see that the threshold below which a human being still thinks has just sunk further before my eyes. The two at the neighboring table have just documented that the necessity of an independent thought no longer exists for them. They asked a question, they received an answer, and they accepted that answer as true without testing it, without comparing it with their own experience, without even attempting to find out whether the answer is correct at all. That is not laziness, that is a neurological capitulation, and I will show on the following pages why it is not an exception but the rule that will bring our species to extinction in the next 2 decades.

The proprietor passes by, sees the orphaned clump of mozzarella on my plate, raises one eyebrow, says nothing and walks on. Over the years we have developed a form of communication that gets by entirely without words, which would probably send Otto sapiens into a panic because silence is unbearable for him. Silence forces him, you see, to dwell in that single moment he is determined at all costs to avoid, and to that moment I will return in detail.

A childhood without stress

I was born in the year 1970, and that has the consequence that I belong to an ever-shrinking group of people who have still experienced a world before the internet, with everything that went with it, and what went with it in my case was also a childhood in a poor family, in which the self-evident comforts of the bourgeois middle class were absent. We did not have a grey rotary-dial telephone in the hallway, as the wealthier families on my street did. Whoever wanted to make a call walked one kilometer to the nearest telephone booth, and in this booth one inserted two ten-pfennig coins to establish the connection, and then spoke as quickly as possible whatever had to be said, because money in telephone booths disappeared faster than a six-year-old could count. I did not do this often myself, because my mother handled the calls and I stood with her in the booth, which always smelled faintly of stale tobacco and old coins. But I remember this booth as precisely as though it were still standing there, and I remember something that hardly exists anymore today, namely the feeling that a telephone call was a small logistical operation, not a reflex.

Bad news was rare in this world. When it did come, it arrived with the postman, who handed it over politely, often in the same breath as a postcard from the aunt in Hamburg, because bad news in those days did not claim the daylight for itself alone. It came, it was read, it was processed, and the next day life continued, because the next news only arrived the day after tomorrow. The world was not global. The United States of America did not interest me as a six-year-old because they were far away, and far away in those days meant actually far away, not a push notification on a screen. Homo sapiens of that time had nothing that constantly weighed on him, because he did not live in a world that informed him every second that somewhere, just now, somebody is dying. It was, and I say this in retrospect with the full clarity of a man who has since experienced much, the most beautiful and quiet time of my life.

The internet did not enter my life as a smartphone glow, that came much later, but as a grey, clunky device into whose hollows one pressed the receiver of a telephone, a so-called acoustic coupler, which fed the tones the modem emitted into the telephone network. I connected to a Bulletin Board System, a BBS, and communicated through characters that appeared line by line on a green monitor with people I had never seen. That was roughly in the second half of the eighties, I was a teenager, and it felt like an extraordinary expansion of the world. I could exchange messages with someone in Hamburg without sending a letter. I could exchange software. I could conduct discussions. It was fascinating, and I did not notice at the time that the fascinating thing marked the beginning of the end.

For what began then, in that grey acoustic coupler that still looked like an industrial accessory, was the step-by-step outsourcing of human communication from physical space into a data stream. First it was text on a green screen, then it was images, then it was videos, then it was a permanent data stream that never broke off and that filled every corner of the day in which silence had previously been. I lived through this path, from the first minute to the present one, and I can describe with a precision that younger people often lack where the point was at which the tool became the burden. It was roughly where the device began to be carried in the trouser pocket instead of standing on the desk. As soon as the screen is in the pocket, the human being is no longer offline, and as soon as the human being is no longer offline, he is no longer in the here and now either, and as soon as the human being is no longer in the here and now, he begins to die without noticing.

What I have learned from this biography is not nostalgia, I gladly leave that to people who have nothing of their own to point to apart from the memory of a supposedly better time. What I have learned is a quite practical consequence, and it goes like this. I do not have to be reachable. Whoever calls me today on my mobile number does not reach me but Tyra, an artificial intelligence I programmed myself, who very politely explains that the addressee of the call cannot be spoken to directly and who recommends to every caller that they send an email or a Telegram message. That is enough. Nobody needs direct telephone access to another human being apart from closest family members in emergencies, and for that there is a different channel. I do not watch television. I do not read Bild-Zeitung, I occasionally analyze it, which is a qualitatively rather different procedure, and I will return to that procedure later. I have reclaimed my time, and the consequence of this reclamation is that I can sit on the sofa today with my Malinois Bandit, a bowl of popcorn on the side table, and observe the daily theater of Otto sapiens society from the front row, with a mixture of amusement and sober concern, because I do not know where this madness ends.

300,000 years of hardware without an update

Before I explain why Otto sapiens questions nothing, why he is afraid of the here and now, why he lets a machine tell him what truth is, I must briefly lay the anthropological groundwork without which everything else would sound like cultural-critical assertion rather than biological diagnosis. And the anthropological groundwork is surprisingly simple, it fits into a single sentence so weighty that one should read it twice. The human brain with which we move today through a world of smartphones, algorithms and generative artificial intelligence is exactly the same brain with which our ancestors hunted mammoths in the Ice Age 35,000 years ago.

This is not a literary exaggeration. The most precise paleoanthropological data on the subject come from a study by Neubauer, Hublin and Gunz, published in 2018 in Science Advances under the title The evolution of modern human brain shape, in which the authors showed on the basis of cranial fossils that the brain size of early Homo sapiens already lay within the range of present-day humans 300,000 years ago, but that the globular form of the brain, that is, the round, compact architecture we associate with modern humans, developed gradually between 100,000 and 35,000 years ago (Neubauer, S., Hublin, J. J., & Gunz, P., 2018, The evolution of modern human brain shape, Science Advances, 4(1), eaao5961). Since then this architecture has not changed. We move with the same neurological hardware through the world with which our great-grandparents from the Old Stone Age left their cave paintings on the walls of Lascaux.

This hardware was optimized for the detection of concrete threats, for the reading of social groups of approximately 50 to 150 members, as the anthropologist Robin Dunbar showed in his classical work on social cognition (Dunbar, R. I. M., 1992, Neocortex size as a constraint on group size in primates, Journal of Human Evolution, 22(6), 469-493), and for the management of comprehensible environments in which stimulus density was limited to what the immediate surroundings offered. This hardware is excellent at what it was built for. It can identify a snake in tall grass within 200 milliseconds, it can recognize a face out of a crowd of a hundred people, it can grasp a social configuration intuitively, and it can coordinate complex motor actions like the drawing of a bow or the throwing of a spear with impressive precision. But it is not built for what we have been asking of it for some 15 years, namely the parallel processing of thousands of stimuli per hour, the constant switching between contexts, the emotional response to events taking place ten thousand kilometers away, and the cognitive processing of information whose truth content it cannot verify through direct sensory experience.

The neurologist Richard Cytowic, of whom Oliver Sacks once said that he had changed the way we think about the human brain, captured this phenomenon in his 2025 book Your Stone Age Brain in the Screen Age (MIT Press) in a formulation that hits the mark elegantly. Our brains, Cytowic writes, are programmed for the needs of a prehistoric world, and they are therefore so poorly equipped to resist the incursions of Big Tech corporations into our attention system because they were shaped evolutionarily in a completely different stimulus environment (Cytowic, R. E., 2025, Your Stone Age Brain in the Screen Age, MIT Press). The American evolutionary psychologist Glenn Geher of the State University of New York at New Paltz has described the same phenomenon as evolutionary mismatch, a misalignment between the hardware with which we are born and the environmental conditions in which we live. Our brains are wired for certain conditions, says Geher, but our surroundings no longer correspond to those conditions.

When one makes oneself aware of the scope of this statement, much of what we are currently experiencing as social crisis becomes suddenly explicable. The increase in anxiety disorders is not a psychiatric failure of a generation, it is the natural reaction of a Stone Age brain to a stimulus density it cannot process. The decline in attention span is not a moral weakness, it is the inevitable consequence of the dopaminergic reward system being retrained to a stimulus frequency that does not occur in any environment in which this system evolved. The growing polarization of social discourse is not a moral aberration, it is what happens when a tribal brain built for groups of 150 people is placed into a virtual world of one hundred million voices and attempts to establish an identity through differentiation in this world.

None of this is fate, all of this is diagnosis, and the diagnosis is that we live with outdated hardware in an environment for which this hardware was not developed, and that the consequences of this discrepancy are becoming visible not at the level of individual symptoms but at the level of the entire species. What we are currently calling a social crisis is in truth a biological crisis. Homo sapiens is currently overwhelming himself.

The Neanderthal in us

But there is a second factor that additionally complicates this hardware question, and that is the contribution of another human species, one that died out 40,000 years ago and has left detectable traces in our genome to this day. The non-African population of the earth carries on average between 1 and 2 percent Neanderthal DNA, in some European and Asian populations as much as 5 percent (MedlinePlus Genetics, National Library of Medicine, 2024, What does it mean to have Neanderthal or Denisovan DNA, Bethesda, Maryland). This fact has been established since the publication of the Neanderthal genome by Svante Pääbo and his team in 2010 (Green, R. E. et al., 2010, A draft sequence of the Neandertal genome, Science, 328(5979), 710-722), and it has produced over the past 15 years a series of findings that are directly relevant to our present discussion.

A study from 2016 led by Corinne Simonti at Vanderbilt University in Nashville was the first to systematically investigate the clinical consequences of Neanderthal DNA in modern Europeans by linking the genomes of 28,000 adult patients of the Vanderbilt clinic with their electronic medical records (Simonti, C. N. et al., 2016, The phenotypic legacy of admixture between modern humans and Neandertals, Science, 351(6274), 737-741). The results were remarkable. Certain Neanderthal gene variants increase the risk of skin cancer, others influence the risk of nicotine addiction, still others are connected with depression and neurological irregularities, some positively, some negatively. A surprisingly high number of Neanderthal DNA fragments are associated with psychiatric and neurological effects. The doctoral student Corinne Simonti formulated it soberly at the time. The brain is incredibly complex, she said, and it is therefore not surprising that changes from a different evolutionary path could have negative consequences.

A second study, published in 2017, showed on the basis of magnetic resonance imaging investigations of 221 healthy adults of European descent that the proportion of Neanderthal DNA has measurable effects on skull and brain shape (Gregory, M. D. et al., 2017, Neanderthal-Derived Genetic Variation Shapes Modern Human Cranium and Brain, Scientific Reports, 7, 6308). The average NeanderScore in this study population lay at 5.4 percent, with a range between 3.9 and 6.5 percent. In other words, the non-African population carries, in its own skull shape and brain structure, a measurable, anatomically detectable contribution from a relative who died out 40,000 years ago.

What does this mean for our discussion of Otto sapiens? It means that we do not simply have a Stone Age brain in a modern world, we have a Stone Age brain with additional archaic admixture in a modern world, and this admixture influences addictive behavior, psychiatric susceptibility, brain morphology. The non-African population, that is, including the Central European one in which I live, carries a genetic mortgage that makes it more susceptible to certain modern clinical pictures, and it does so with a brain that was anyway not built for the world in which it is now supposed to function. That is the anthropological double burden whose consequences I will show on the next pages, namely how Otto sapiens responds to it. He does not respond with adaptation, because evolution is too slow and his hardware too fixed. He does not respond with cultural self-reflection either, because for that he would have to have trained metacognition. He responds with flight, and his flight is called smartphone, Instagram, TikTok and ChatGPT.

The hardware does not match the software

If one sums up the diagnosis of the past two sections, one has a statement that can be made tangible in a single image. Imagine a computer from the year 1990, a heavy, beige case with a processor that ticks at 33 megahertz, with 4 megabytes of working memory, with a hard drive that can store approximately 80 megabytes of data. Imagine now trying to run a modern application for generative artificial intelligence on this device, a model with 70 billion parameters that pushes several gigabytes of data through with every inference. What would happen? The device would freeze, crash, and in the worst case the processor would burn through because it cannot bear the load.
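For readers who prefer the mismatch in numbers rather than in images, here is a small back-of-the-envelope calculation, a minimal sketch in Python under deliberately simple assumptions (16-bit weights, no overhead for activations or the runtime); the figures are illustrative, not a benchmark.

```python
# Rough comparison: a 1990 desktop PC versus a modern 70-billion-parameter
# language model. All figures are rounded assumptions for illustration only.

PC_RAM_BYTES = 4 * 1024**2           # 4 megabytes of working memory
PC_DISK_BYTES = 80 * 1024**2         # 80 megabytes of disk space

MODEL_PARAMETERS = 70_000_000_000    # 70 billion parameters
BYTES_PER_PARAMETER = 2              # assuming 16-bit weights

model_bytes = MODEL_PARAMETERS * BYTES_PER_PARAMETER  # weights alone

print(f"Model weights:           {model_bytes / 1024**3:,.0f} GB")
print(f"1990 PC working memory:  {PC_RAM_BYTES / 1024**2:,.0f} MB")
print(f"RAM shortfall factor:    {model_bytes / PC_RAM_BYTES:,.0f}x")
print(f"Disk shortfall factor:   {model_bytes / PC_DISK_BYTES:,.0f}x")
```

Roughly 130 gigabytes of weights against 4 megabytes of memory, a shortfall of more than thirty-thousand-fold, and that is before a single activation has been computed.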

This is exactly the situation in which the human brain has found itself for about 15 years now. The hardware has been unchanged for 35,000 years, the software this hardware is expected to run has been accelerating exponentially, and the brain reacts to this discrepancy with symptoms registered daily in every clinic in the Western world. Burnout, anxiety disorders, depression, attention deficit and hyperactivity in adults, sleep disorders, chronic fatigue. These symptoms are often treated in clinical practice as psychological illnesses, with medication, with psychotherapy, with the recommendation to do more sport, drink less coffee, practice mindfulness. All of that helps in individual cases, but it treats the symptom, not the cause. The cause is a biological one, and it is that this hardware was not built for this software.

There are people who argue that the brain is plastic enough to adapt, and they point to neuroscientific findings showing that the human brain can produce impressive adaptive feats even in adulthood. That is true for certain functions. Whoever learns to play the violin develops a larger representation of the left hand in the somatosensory cortex. Whoever becomes a London taxi driver and passes The Knowledge has a measurably enlarged posterior hippocampus, as Eleanor Maguire has shown in her classical studies (Maguire, E. A. et al., 2000, Navigation-related structural change in the hippocampi of taxi drivers, Proceedings of the National Academy of Sciences, 97(8), 4398-4403). But this plasticity works only within the biological core architecture, it cannot overcome this core architecture. A brain that was built for a stimulus density of perhaps a hundred relevant inputs per day cannot be rebuilt by plasticity into a brain that processes a stimulus density of ten thousand stimuli per day. Plasticity is a fine adjustment, not a redesign.

What actually happens when one exposes the brain to this overload is a kind of cognitive wear, an attrition process that is not reversible, at any rate not without a radical change in environmental conditions. I will demonstrate that empirically in the sections on smartphone research. Before that, however, comes the central philosophical question that underlies this entire diagnosis, and that nobody asks because it is uncomfortable. The question is where the human being lives in his own time, and it is the question that decides all others.

Yesterday is history, tomorrow is a mystery

There is a play on words in English so old that I can no longer attribute it unambiguously to a source. Yesterday is history, tomorrow is a mystery, but today is a gift, that is why it is called the present. The play with the two meanings of present, namely the moment and the gift, does not translate directly into German, but the idea behind it is universal and old. It appears in Stoic philosophy, it is found in Buddhism, it is ascribed to Eleanor Roosevelt, it became broadly known through the Kung Fu Panda films, and it is despite its banality the truest statement about the human spirit that one can formulate in a single sentence. Yesterday is history, tomorrow is a mystery, today is a gift, that is why it is called the present.

When I listen to the wall clock in my study, I do exactly what this sentence describes. I am in today, I am in now, I am in the moment in which a tick and a tock are the only temporal structure reaching me. This moment is everything I actually have. Everything else is memory or speculation, and neither memory nor speculation is a real state; both are mental constructions we build up because the human brain has specialized in constructing time as a cognitive dimension instead of merely perceiving it as immediate sensory experience. That is a magnificent evolutionary achievement that distinguishes us from other animals. But it has a price, and the price is that we have the ability to project ourselves into the past and the future, and that we can use this ability so intensely that we lose the only place where we actually exist.

Otto sapiens almost never lives in the here and now. He lives in the future, because the future is the place where fear lives, and the further he thinks into the future, the more space fear has to spread. He worries about the pension he will draw in 30 years, about the climate catastrophe he will only really feel in 40 years, about the artificial intelligence that will only replace him in 10 years, about the next pandemic of which nobody knows whether it will come, and about the next political election whose outcome will compel him to leave the country or save democracy or both at once. These worries are not entirely unfounded, but they share one quality. They all play out in a time that does not yet exist. They are pure constructions he invents in the present in order to install for himself a threat situation he can occupy himself with. For a threat situation in the future has an advantage over the present, it is not real, and as long as it is not real, he does not have to do anything. He merely has to be afraid, and being afraid is exhausting, but it is less exhausting than acting.

When he can no longer bear the fear of the future, he switches into the other mode, the mode of the past, and in the past he finds almost exclusively pain. He remembers the humiliation in school 30 years ago, the disappointed love 20 years ago, the quarrel with his father 10 years ago, the pandemic 5 years ago, the rude shop assistant a week ago. He gathers these memories like exhibits in a court case in which he presents himself as the victim, and the more exhibits he gathers, the clearer it becomes to him that he has a right to feel bad today because he was treated badly yesterday. This too has a psychological advantage over the here and now. He does not have to be present in the current moment, and he does not have to take responsibility in the current moment, because he is still occupied with what was done to him. Pain is a wonderful pretext for not living.

What Otto sapiens does not do, almost never, is be in the here and now. For the here and now has a property that no other temporal form has. It is real. It is verifiable. It cannot be altered through construction. When I sit in this moment in the pizzeria with a mozzarella in my mouth, there is no way to interpret this fact, to soften it, to mourn it or to fear it. It simply is. And precisely this simplicity is what Otto sapiens cannot endure. In the here and now he would have to meet himself, without the protective layer of past and future that he has built around himself. He would have to feel without interpreting. He would have to be without becoming or having been. And that is the one thing he will not do at any price.

He has therefore developed something that is without precedent in human history. He himself, with his own thoughts, sees to it that he feels bad. He no longer needs an external threat to feel wretched, he produces the wretchedness in his own workshop, with his own cognitive resources, with impressive industrial efficiency. His smartphone helps him do that, his Instagram feed helps him do that, ChatGPT helps him do that, but these tools are only amplifiers. The drive comes from his own interior, and he has outsourced the most important task of his brain, namely functioning in the here and now, to a machinery that keeps him permanently out of the here and now.

It would actually be so simple for him to change this. He would only have to learn again to live in the current moment. He would have to hear the wall clock instead of checking the smartphone. He would have to taste the mozzarella instead of composing the next photograph for Instagram. He would have to endure the silence between two sentences instead of filling it with the next reel. It would be so simple. And it is at the same time the hardest thing he could do, because it would confront him with what he has been trying to avoid the whole time. Himself.

Metacognition is what you do not have

There is a cognitive faculty that has increasingly moved to the center of the discussion of human intelligence in the research of the last 30 years, and that explains almost everything I have described in the preceding sections. This faculty is called metacognition, and it means, put simply, thinking about one’s own thinking. Metacognition is the ability to step back from one’s thinking, to observe one’s own mental operation, to judge whether it is goal-directed, whether the premises are sound, whether there is perhaps a blind spot distorting the result. It is what makes us reflective subjects instead of mere reactive stimulus processors. It is the precondition for our being able to think not only intelligently but also to know whether our thinking is intelligent in a given case at all.

Stephen Fleming, a neuroscientist at University College London who counts among the leading researchers on this subject, has shown in his work on metacognition that this faculty is not identical with IQ, as many people would intuitively suspect. It is an independent cognitive variable. One can have a high IQ and poor metacognition, which means that one can solve complex problems quickly but not notice at all when one is wrong on a concrete problem. Conversely, one can have a moderate IQ and excellent metacognition, which means that one is slower on problems but feels very precisely when one is reaching one’s own limit and needs help. A review sums it up succinctly by describing metacognition as the bridge between cognitive abilities and actually intelligent behavior (Norman, E. et al., 2019, Metacognition in psychology, Review of General Psychology, 23(4), 403-424).

Even clearer about the importance of metacognition is an investigation that Heather Butler and colleagues published in 2017 in Thinking Skills and Creativity. The researchers investigated in a sample of 244 adults whether critical thinking or intelligence is the better predictor of real-life events (Butler, H. A., Pentoney, C., & Bong, M. P., 2017, Predicting real-world outcomes, Thinking Skills and Creativity, 25, 38-46). The participants completed a test of critical thinking ability, an intelligence test, and an inventory of real-life events, from insolvencies through unwanted pregnancies to professional failures. The result was unambiguous. People with higher scores on critical thinking had experienced significantly fewer negative life events than people with lower scores. Critical thinking ability, which counts as a form of metacognition, predicted negative life events better than IQ did and explained additional variance beyond what IQ could account for. In other words, a high IQ alone does not protect one from bad decisions. Metacognition does.

And now to the decisive point I will formulate here without further ado, because it carries the entire argument of this piece. Metacognition is not innate. One is not born with it, at any rate not in developed form. It is a skill that must be trained, and it is trained through exactly those activities Otto sapiens systematically avoids. It is trained through quiet reading, through the writing of one’s own texts, through the patient solving of problems that are not solvable in a single sitting, through the enduring of uncertainty, through the confrontation with one’s own incompetence, through quiet self-observation in the here and now. It is not trained through scrolling on TikTok, through the quick consumption of reels, through the querying of ChatGPT, through reading Bild headlines or through the nervous checking of push notifications.

From this follows a closed trap in which Otto sapiens sits without being able to free himself, because he does not recognize the keys to liberation either, since they are metacognitive keys which he does not have. He is overwhelmed by a world that his hardware cannot process. He does not have the tool that would make this overwhelming visible to him in the first place, namely metacognition. He cannot develop the tool because his way of life systematically destroys the conditions under which it would arise. And he cannot change his way of life because he does not know that it is the problem, since for that he would again need metacognition. It is a perfect circle, and Otto sapiens sits right in the middle of it.

Whoever once falls into this spiral does not get out of it under his own power, at any rate not without a massive jolt that compels him to halt. Such jolts happen sometimes, in the form of severe illnesses, in the form of bereavements, in the form of professional catastrophes, and sometimes people actually use these jolts to break the circle. But that is rare. More often Otto sapiens flees, after such a jolt, all the more into his tools because they offer him solace, which however is only simulated solace, a band-aid on a wound that in truth would need surgery. His smartphone is his band-aid. Instagram is his band-aid. ChatGPT is his band-aid. And Otto sapiens applies these band-aids to himself in the belief that they will heal him, while in truth they only obscure the view of the actual wound so that he does not have to treat himself.

The telephone that also thinks in the pocket

When I wrote in the autobiographical section earlier that the internet began for me with the grey acoustic coupler, I did not mean that romantically but technically. Back then the device was in a fixed place, on the desk, tethered to the wall by a cable. When one did not want to use it, it was not there. This physical separation between human and device was the last protective barrier that preserved the human brain from stimulus overload, and it has disappeared step by step over the past 20 years. The smartphone eliminated it definitively, because the smartphone is always present, in the pocket, in bed, at the dining table, in the toilet. There is no moment in the day of an Otto sapiens in which the device is not within his immediate reach.

What this does to the human brain has been the subject of empirical research for years, and the findings are unambiguous enough that no doubt should actually be possible anymore. A study by Adrian Ward and colleagues from 2017, published in the Journal of the Association for Consumer Research, showed that the mere presence of a smartphone within sight measurably reduces cognitive capacity, even when the device is switched off and nobody touches it (Ward, A. F. et al., 2017, Brain drain, Journal of the Association for Consumer Research, 2(2), 140-154). The participants who had left their phone in the next room performed significantly better in an attention test than those who had the phone facing down on the desk. The researchers explain the effect by saying that the brain has to expend cognitive resources continuously on not attending to the phone, and that this very process of restraint consumes capacity that is then missing for the actual task. It is an elegant point that makes one wonder why not every human being consistently puts the smartphone out of view. The answer is that most people do not know what they are losing in the moment, because they cannot measure it, because they have not trained metacognition, because their capacity is being reduced by the phone.

Even weightier is a study from February 2025, published in PNAS Nexus, on which I will dwell in some detail because it draws probably the clearest empirical picture of what smartphone use does to the human brain. Noah Castelo, Kostadin Kushlev, Adrian Ward, Michael Esterman and Peter Reiner investigated in a large randomized experiment what happens when one blocks the mobile internet on participants’ smartphones for 2 weeks (Castelo, N. et al., 2025, Blocking mobile internet on smartphones improves sustained attention, mental health, and subjective well-being, PNAS Nexus, 4(2), pgaf017). The participants could continue to use their phone, but all data-based applications were blocked, so they could make calls and write text messages but not open Instagram, not TikTok, not WhatsApp with image transmission, not a mail app. The researchers then tested the participants after the 2 weeks in a series of measures and compared them with a control group that had continued to use the phone normally.

The results are so clear that one should read them twice. The participants from whom mobile internet had been withheld for 2 weeks showed a measurable improvement in mental health that was greater than the improvement typically achieved with antidepressants. They showed an improvement in subjective life satisfaction. And they showed an improvement in sustained attention whose magnitude corresponded to being 10 years younger. That is not a marketing claim, that is the result of a randomized controlled trial published in a sister publication of PNAS, one of the leading scientific journals in the world. Two weeks without mobile internet make a 40-year-old human being cognitively into a 30-year-old. The antidepressive effect exceeds that of chemical antidepressants.

If this study had come not from a research laboratory in the United States but, say, from a pharmaceutical trial for a new medication, the press would have been full of it for weeks, and the substance would be on its way to approval. But it is not a medication, it is a behavioral change, and behavioral changes do not bring a corporation any money, which is why less is reported about them. Otto sapiens will therefore not read this study, because it does not appear on any of his usual channels, and if he does read it, he will dismiss it as alarmist exaggeration because it would require him to change something, and changing something means effort, and effort means fear, and fear is what Otto sapiens hates.

I drew these conclusions from my own life long before the study appeared, because I learned from my forensic work to distinguish stimuli from substance. I am not reachable in the permanent sense. Whoever calls my mobile number reaches Tyra, a small artificial intelligence I wrote myself and that has functioned for years as a digital antechamber. Tyra is friendly but firm. She takes down the caller’s name, asks about the matter, explains that the addressee of the call cannot be spoken to directly, and recommends sending an email or a Telegram message. That is enough for everything that is actually important. Whoever really needs me writes to me, and whoever just wants to talk because he happens to be bored is elegantly filtered out by Tyra without my even noticing. This arrangement has improved my quality of life to a degree I could hardly believe in the first months, because suddenly I had hours in which I was actually present, rather than oscillating between stimuli. I could write without being interrupted. I could sit with Loui at the dining table without a screen lighting up. I could go for a walk with Bandit without a vibration filling my trouser pocket. It was, in the narrow sense of the word, like the return of my own time.
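I will not describe Tyra’s implementation here, and the sketch that follows is not it; it is a purely hypothetical illustration, in Python, of how little machinery a digital antechamber of this kind actually needs: a fixed sequence of steps, no real telephony, no speech recognition, no language model, and invented prompts and names throughout.

```python
# Hypothetical sketch of a call-screening "digital antechamber" in the
# spirit of Tyra. No real telephony or AI backend is involved; `ask` is
# any function that delivers a prompt to the caller and returns a reply.

from dataclasses import dataclass

@dataclass
class ScreenedCall:
    caller_name: str
    matter: str

def screen_call(ask) -> ScreenedCall:
    """Run the fixed antechamber dialogue and return a log entry."""
    ask("Hello, you have reached a digital assistant. "
        "The person you are calling cannot be reached directly.")
    name = ask("May I take your name?")
    matter = ask("What is your call about?")
    ask("Thank you. Please send an email or a Telegram message; "
        "it will be read and answered.")
    return ScreenedCall(caller_name=name, matter=matter)

if __name__ == "__main__":
    # Simulated caller for demonstration: answers come from a fixed list.
    answers = iter(["", "Max Mustermann", "An interview request", ""])

    def console_ask(prompt: str) -> str:
        print(prompt)
        return next(answers)

    record = screen_call(console_ask)
    print(f"Logged: {record.caller_name}, regarding: {record.matter}")
```

The point of the sketch is not the code but the asymmetry it creates: the caller spends thirty seconds, and I spend none, until I decide to.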

Reels, reels, reels

If the smartphone is the tumor, then Instagram, TikTok and similar platforms are the metastases it casts off, because they are based on a business model that maximizes only one single variable, namely the time the user spends on the platform. Everything that happens on these platforms is optimized for this one variable. The algorithms that decide which video appears next are not neutral recommendation systems, they are trained intensification machines that learn in every millisecond what keeps the user on the platform longer and then offer him exactly that. The result is a perfect behavior modification, comparable to the classic operant-conditioning experiments of Skinner, with the difference that Skinner at least knew his pigeons were being conditioned, while Otto sapiens scrolling through his reels feed does not even know that much.
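To make the logic of such an intensification machine concrete, here is a deliberately crude sketch, a minimal epsilon-greedy loop in Python that optimizes nothing but expected watch time; the categories, the simulated user and every number in it are invented, and no platform discloses its real system, but the objective function, a single variable called time watched, is the same.

```python
# Crude illustration of an engagement-maximizing feed, NOT any platform's
# real recommender: an epsilon-greedy bandit that learns which content
# category keeps one simulated user watching longest, then serves mostly that.

import random

CATEGORIES = ["pets", "outrage", "dance", "pranks", "news"]
EPSILON = 0.1                               # small share of exploration

avg_watch = {c: 0.0 for c in CATEGORIES}    # running mean watch time
plays = {c: 0 for c in CATEGORIES}

def pick_next_clip() -> str:
    if random.random() < EPSILON or sum(plays.values()) == 0:
        return random.choice(CATEGORIES)                   # explore
    return max(CATEGORIES, key=lambda c: avg_watch[c])     # exploit

def record_watch(category: str, seconds: float) -> None:
    plays[category] += 1
    avg_watch[category] += (seconds - avg_watch[category]) / plays[category]

def simulated_user(category: str) -> float:
    # This invented user happens to linger longest on "outrage" clips.
    base = {"pets": 6, "outrage": 14, "dance": 8, "pranks": 7, "news": 3}
    return max(0.5, random.gauss(base[category], 2.0))

for _ in range(500):
    clip = pick_next_clip()
    record_watch(clip, simulated_user(clip))

print("Share of the feed per category after 500 clips:")
for c in CATEGORIES:
    print(f"  {c:8s} {plays[c] / 500:6.1%}")
```

Run long enough, the loop converges on whatever category this one user lingers on, which is the whole point: the machine does not need to understand the content, it only needs to measure the lingering.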

The research situation on the cognitive consequences is now so extensive that I can mention here only the most salient findings. A meta-analysis referenced in 2025 in Human Behavior and Emerging Technologies, involving nearly 100,000 participants, showed that intensive users of short-form video platforms exhibit lower scores in attention, inhibitory control and working memory, that is, in precisely those cognitive abilities necessary for demanding thought, sustained concentration and self-regulation. That is not a small sample, that is a sample of an order of magnitude that permits statistically robust statements. Pediatricians have described TikTok as a dopamine machine, because every new video appearing on the screen triggers a small dopaminergic surge, and the expectation of the next surge trains the reward system to perceive ever shorter intervals as satisfying. The result is a step-by-step shortening of the attention span until a point is reached at which everything lasting longer than 15 seconds is already perceived as tiring.

The phenomenon has even found its way into general language. The Oxford Word of the Year 2024 was brain rot, and the choice was not made by bloggers but by Oxford’s language experts, who determined that use of the term had risen so sharply over the preceding 12 months that it deserved the title. A language that develops a new word for cognitive damage through digital consumption is a language that has registered a problem. Otto sapiens does not take cognizance of this registration because he is currently scrolling.

I once observed Loui, a few months ago, in a bookshop where we were both actually looking for specialist literature. She had disappeared into a quiet corner and was reading a book standing, without looking up, for a quarter of an hour. It was a scholarly work on veterinary pathology, with a density of information that even for specialists requires a focused reading posture. While she was reading, a young man passed by the shelf next to her, perhaps in his early thirties, who was holding a book that someone had apparently just shown him and who was attempting to read this book in a manner that struck me. He opened it, read half a page, then pulled out his smartphone, scrolled for 8 seconds through something, then read the next half page, then scrolled again. He performed this switch several times before finally laying the book aside and remaining only on the phone. He was not bored, he simply could not do it anymore. His brain had become accustomed to the seven-second span of the reels, and a connected text demanding concentration over several minutes was no longer processable for this brain. Loui eventually looked up, saw the young man, saw me, and we exchanged the glance we sometimes exchange when something happens that we are both thinking at the same moment without saying it. It was the same glance I exchange with the silent proprietor in the pizzeria. A glance that says, something has just taken place that diagnoses an entire generation without anyone naming it.

Bild, the queen of hooks

At this point I would like to briefly change the stage and speak about a German media phenomenon I observe out of purely scientific interest, because it represents perhaps the most elegant example of applied behavioral economics available in the German language. I do not read Bild-Zeitung, as I have already mentioned, but I analyze it occasionally, because it constitutes a remarkable object of study for the psychological manipulation of attention. I open its website, scroll through the headlines, read not a single article but observe the methodology. It is fascinating.

Bild-Zeitung works, like all successful clickbait media, with a psychological method described by the American behavioral economist George Loewenstein at Carnegie Mellon University in 1994 in a classical paper that remains one of the most cited texts in behavioral economics. Loewenstein called it the Information-Gap Theory of Curiosity (Loewenstein, G., 1994, The psychology of curiosity, Psychological Bulletin, 116(1), 75-98). The idea is astonishingly simple. When a human being perceives that there is a gap between what he knows and what he could know, he experiences a psychological discomfort that drives him to close this gap. This gap is the basis of every successful headline, every cliffhanger, every advertising trick of the past 100 years. They tell you just enough that you know something important is missing, but not so much that your curiosity would be satisfied.

Bild-Zeitung masters this method to perfection. The headlines are constructed in such a way as to place the reader in a state of information gap from which he can hardly escape. Sentences like What is behind this fan photo or It was an absolute state of emergency are not information, they are door handles on doors the reader must open in order to see the information behind them. Behind the door, however, what awaits him is not the information but the invitation to subscribe to Bild-Plus, because the article can only be read with a subscription. Ingeniously simple, as the German watchblog Bildblog formulated years ago in an analysis. The methodology has worked for years, and it continues to work, although many readers have meanwhile understood the game. They click anyway, because the Stone Age brain cannot endure the information gap.

Whoever has taken out the subscription receives not only the texts, he is fed in the following weeks with further headlines luring him to further content, which in turn contains headlines luring him to yet further content, and at regular intervals he is offered products that forward him to Amazon. There he orders something, because the Stone Age brain, unable to question because it is overloaded, because it has not trained metacognition, because it is afraid of the here and now, processes the order as a small dopaminergic reward event. Three days later the parcel arrives. It is unpacked, the product is used, sometimes for 5 minutes, sometimes longer, and then it lies in the cupboard, where it remains until the next donation to a charity. That is how it works. I have observed this chain over the past years in many acquaintances I know well and who are not stupid, but who have no metacognition and therefore cannot see through the game. It is always the same sequence. Headline, click, subscription, headline, click, Amazon link, order, parcel, forgetting.

And I can explain at any time, when someone asks, how this works precisely, because I can look at it without emotional involvement. I am not indignant about Bild, that would be a waste of lifetime. I observe it the way a pathologist observes an interesting specimen. It is an outstanding object of study for the question of how to make money from the evolutionary weaknesses of the human brain, and it does this so professionally that one can almost have respect for the consistency with which the business model is carried through. That millions of people lose a portion of their cognitive resources, their attention and their money every day in the process is not Bild’s problem. That is Otto sapiens’s problem, who has not noticed that he is being milked.

Experts who are not experts

There is a further phenomenon that has caught my attention more and more often in recent years and that is closely related to the preceding section, because it likewise rests on an evolutionary weakness of the Stone Age brain, namely on its built-in deference to authority. The human brain is programmed to believe people of high position in the social hierarchy, because in a tribal society with manageable groups this was evolutionarily advantageous. When the elder of the tribe said that the berries on the red bush are poisonous, then it favored survival to believe him rather than try them oneself. In the media-saturated present the same neurological circuitry is still at work, but it is directed at a completely different world, in which the authorities are no longer selected for experience and competence but for television suitability, for speaking pace, for self-presentation and for availability for camera appointments.

The result is experts who are not experts. They sit in the news studios and explain to viewers what they should think, and they do this with a self-evidence that takes any skeptic’s breath away. A man who has never served in the military, never been a single day in a war zone, never written a single military-scientific work, explains in the evening news the strategic situation in the Ukraine war, with the full-throated conviction of a man who has seen through the matter. A woman who has never published an epidemiological study, never accompanied a pandemic professionally, never treated a patient with a respiratory infection, explains on the morning show the optimal protective strategy against the next virus currently being described in the media. An economic journalist who has never read a balance sheet from the first to the last line explains in a business magazine why the next recession is unavoidable and which stocks one should now buy.

These experts are not all incompetent in their original field. Some of them are quite educated, some have university degrees, some have experience in some field. The problem is not their education, the problem is that media logic has unhinged the concept of competence itself. In the media, the expert is not the person who has known a field for decades from the inside but the person who can answer quickly, looks suitable for television and is available for the next appointment. That is an entirely different selection logic than the scientific one, and it produces a cast of personnel that no longer has anything to do with the classical concept of the expert. It produces the talking heads we all know, and it produces them in industrial quantities because the business model of news television rests on the constant availability of voices that have something to say, regardless of whether they actually know anything.

Otto sapiens cannot distinguish these experts from real experts, because the diagnostic tool he would need is missing, and we have already named it elsewhere, metacognition. He would have to be able to ask himself how this man actually knows what he is claiming, whether the logic of his argument is internally coherent, and whether there are dissenting voices with similar qualifications who arrive at different conclusions. All of these would be metacognitive operations that Otto sapiens does not carry out because he does not know how. He thus takes the expert’s statement as truth because it was spoken in a news studio, because the person looks serious, because the tone conveys self-assurance, and because the statement can be lodged in his head in 90 seconds without him having to stop and think.

This is the media dictatorship in which we live, and it is not the dictatorship of a political party or an ideological movement but that of speed. Whoever speaks quickly is right. Whoever thinks slowly has lost. Whoever holds a differentiated position is boring. Whoever holds a sharpened position is invited back. The media logic selects for simplicity, for sharpness, for rhetorical force, and it systematically eliminates those voices that would actually have something to say, because these voices typically speak more slowly, qualify more, build in more reservations and are thereby unsuited to television. The result is a public discussion culture that has almost nothing more to do with real expertise and that permanently supplies Otto sapiens with pseudo-information that he files away as knowledge without noticing that he has just attended a theatrical performance.

After Corona comes Hanta, and Iran is suddenly gone

Once one has understood the mechanisms of the preceding section, one notices a pattern that one cannot afterwards shake off, because it recurs in nearly every news week, with a regularity that cannot be coincidental. It is the pattern of topic rotation, and it functions roughly like this. A certain crisis is handled by the media with high intensity, often over weeks or months, with daily updates, with experts in the studios, with emotional images on the front pages. Then, at a moment not explicable from outside, a new crisis arises, and the previous one disappears. It disappears not because it has been solved but because media attention moves on. The old crisis still exists, it continues to cause suffering, it continues to cost money, but it has become media-invisible, and with that it has ceased to exist for Otto sapiens.

A present example that any observer can confirm is the media treatment of the Ukraine war. In February 2022 the topic Ukraine appeared 15 times on the covers of international magazines like Time, Der Spiegel, The Economist and others. In the year 2025 it appeared 5 times, that is, one third of the original frequency (Brand Ukraine, 2025, How Ukraine Has Disappeared from International Magazine Covers). An independent Ukrainian research organization has documented that the number of publications in international media on the topic of Ukraine had already fallen by a factor of 2.5 from 2023 to 2024. In the summer of 2025 the Washington Times reported in a remarkably openly formulated article that the media had grown bored of the war, that the international magazines were instead occupying themselves with the Bezos wedding in Venice and with the important information that the bridal gown had required 900 working hours, while people continue to die daily in Ukraine (Washington Times, 30 June 2025, What happened to the media’s coverage of the Ukraine war).

This is not a political evaluation, this is an empirical observation. A crisis that does not end disappears from media consciousness as soon as it no longer serves the attention economy, that is, as soon as the peak of escalation is past and the laborious, drawn-out, undramatic course sets in. That is the point at which the media drop the topic because it no longer produces new, click-worthy headlines, and they turn to the next escalation. The next escalation can be a new conflict, a new virus, a new natural catastrophe, a new scandal. What exactly it is plays no role in the logic of the attention economy. What is important is only that it is new and that it produces images.

That is how it works. After the Coronavirus pandemic came, for a while, the monkeypox hype, which disappeared again within weeks, then came the bird flu topic, then the renewed Mpox alarm, and now comes the Hantavirus, which entered the international headlines through a cluster on a cruise ship and is being treated for the time being as though it were the next great pandemic, although Hantaviruses have been known for decades and the epidemiological situation justifies no special excitement. While the Hanta topic was in the headlines, the Iran war disappeared almost entirely from coverage. It is not that nothing is happening in Iran anymore, on the contrary, the situation is unstable, there are regular escalations, but the topic no longer produces the headlines marketable in the attention market, and so it is switched off. It returns when the next dramatic incident occurs, and it disappears again as soon as the next crisis is more dramatic.

The American communication scientists Maxwell McCombs and Donald Shaw described this mechanism as early as 1972 in a classical study that entered the textbooks as agenda-setting theory (McCombs, M. E. & Shaw, D. L., 1972, The agenda-setting function of mass media, Public Opinion Quarterly, 36(2), 176-187). The idea is that the media do not primarily decide what people think but what people think about. What is not on the media agenda does not exist for Otto sapiens. What is on the agenda is the world for him. From this it follows that the media agenda, that is, the selection of topics prominently represented in the headlines, has a direct steering effect on public consciousness, and this steering is exercised not by a secret conspiracy but by a simple economic logic. What brings clicks comes to the front. What brings no clicks disappears.

Otto sapiens does not register this steering because he has no memory across multiple crises. His Stone Age brain can process one threat at a time, and as soon as the next one appears, the previous one is gone, not because it has been solved but because Otto sapiens’s cognitive capacity does not suffice to hold both at once. This has an evolutionary background. In a tribal society one had to concentrate on the current threat, the saber-toothed tiger in the bushes was more important than the theoretical question of whether there would be enough provisions in the winter. Whoever expended energy on both threats simultaneously did not survive the saber-toothed tiger. This circuitry is sensible for the Stone Age, it is catastrophic for the present, in which we live with twelve parallel crises and cannot simply ignore any one of them, because they all continue running, regardless of whether they are currently in the headlines.

ChatGPT always has an answer, so you no longer have any questions

We come now to the section that is, to me, the most important in this entire piece, because it describes the newest and fastest-growing phenomenon accelerating the cognitive self-surrender of Otto sapiens, and that is generative artificial intelligence, in particular ChatGPT and its relatives. I write this as someone who uses artificial intelligence himself, who has programmed Tyra, who works with various large language models, who knows the technology from the inside and is therefore not arguing as a hysterical outsider but as someone who knows what he is talking about. Precisely for this reason I can say with precision where the problem lies, and it lies not in the technology but in how Otto sapiens uses it.

An MIT study from the year 2025 led by Nataliya Kosmyna conducted a neurobehavioral experiment in which the participants used ChatGPT over multiple sessions to work on cognitive tasks. The researchers were able to show that the participants’ neuronal activity progressively decreased over the sessions, suggesting that the repeated outsourcing of cognitive work to AI leads to a measurable reduction of one’s own mental effort. What the researchers described as concerning was not the individual effect but the progressive nature of the decline over only a few sessions, which suggests that cognitive outsourcing to AI can create a feedback loop in which users become increasingly dependent on external processing power at the cost of developing their own analytical abilities (Kosmyna, N. et al., 2025, MIT Media Lab Cognitive Engagement Study).

A further investigation by the Swiss researcher Michael Gerlich from 2025 systematically illuminated the phenomenon of cognitive offloading (Gerlich, M., 2025, AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking, Societies, 15(1), 6). Gerlich was able to show that the takeover of cognitive tasks by AI tools is accompanied by a measurable reduction in critical thinking, and that this effect is the more pronounced the more intensive the AI use. A Stanford investigation from 2025, published in a sister publication of Computers in Human Behavior, showed in a randomized controlled design that students who had unrestricted access to ChatGPT while learning performed significantly worse on a retention test after 45 days than students who had learned without AI. The researchers’ explanation is that the AI reduces encoding depth during learning, that the hippocampus therefore forms weaker traces, and that forgetting accordingly sets in more quickly. Whoever learns with ChatGPT learns worse.

These findings are by now so numerous in the academic literature that the phenomenon has received its own term. Lazy thinking is the diagnosis, and it is confirmed by a growing number of studies showing that ChatGPT users, compared with non-users, exhibit less depth of argumentation, produce fewer justifications of their own, check fewer sources and weigh fewer counter-arguments. That is not surprising, because the AI removes the effort, and effort is the precondition for the brain to form neuronal traces. Without effort no trace. Without trace no memory. Without memory no experience. Without experience no metacognition. And without metacognition Otto sapiens cannot recognize that he is currently in the process of cognitively expropriating himself.

Here now comes the central pointe toward which this entire piece has been working. Otto sapiens does not question what ChatGPT answers him, because questioning is an operation that requires effort, and effort activates the same neurological circuits as the confrontation with a real threat. I take a sentence, I compare it with my own experience, I identify inconsistencies, I open the possibility that my previous assumption was wrong, and I take the risk of having to reorient myself. Every one of these operations demands of the brain a cognitive effort that produces the same feeling of exhaustion as physical exertion. It is energetically expensive. And it activates the threat system, because questioning calls one’s own identity into question, and for the Stone Age brain one’s own identity is a survival good that must be defended.

Questioning therefore hurts. It is not a pleasant intellectual exercise but a small existential jolt the system feels and from which it shrinks. Whoever has trained metacognition has learned to endure this pain, and more than that, to recognize it as a learning opportunity. Whoever has no metacognition experiences the questioning as a threat and avoids it. Otto sapiens consistently avoids it, and ChatGPT permits him to perfect the avoidance. He asks the machine, the machine answers, the answer feels complete, so he takes it. He does not test it. He does not ask where it comes from. He does not ask whether it is correct. He takes it because it is there, and because taking takes less effort than testing.

This is the structural trap. ChatGPT is not the problem, ChatGPT is only the tool that closes the trap. The problem is the Stone Age brain that has not developed metacognition, that experiences effort as pain and avoids pain. In the moment in which a tool exists that removes the effort, this tool is used, and as soon as it is used, the ability to work without it atrophies further. It is exactly the same logic by which a muscle atrophies when it is not used. Whoever wears a cast on his leg sees after 6 weeks a noticeably thinner calf muscle, because tissue that is not stressed is broken down. With the brain it is similar, with the difference that with the brain there is no cast that comes off after 6 weeks. The cast stays on because it is comfortable.

And while Otto sapiens wears his cast, he believes he has gained something. He thinks ChatGPT makes him smarter, because he now gets answers more quickly without having to look anything up. He thinks he is becoming more productive, because he can write texts in 5 minutes for which he previously needed an hour. He does not see that this speed signifies a cognitive impoverishment, because the texts are indeed finished more quickly but no longer leave anything behind in his brain. He has not thought the texts, he has only forwarded them, and at the end of the day he does not know what he did today, because nothing about it has left a trace in his consciousness. He has become more efficient, but he has also become emptier, and he notices the emptiness only when it has grown to a critical density and can no longer be covered over by the next reel.

Spanish flu, then Charleston, and history repeats itself

I come to the penultimate section of this piece, in which I want to draw the historical arc, and I do this not to deliver an academic enumeration of the past 100 years but because this arc shows what the diagnosis is heading toward. Otto sapiens does not learn from his mistakes, and for a structural reason that has to do with the here and now I introduced at the beginning of this piece. Learning requires memory, memory requires reflection in the present moment, and reflection is precisely what Otto sapiens avoids. Therefore he does not learn. Therefore he repeats the mistakes. Therefore he reacts after every crisis with the same euphoric exaggeration that prepares the next crisis.

The classic historical case is the Spanish flu and the Roaring Twenties that followed it. The pandemic of 1918 to 1920 killed between 50 and 100 million people worldwide, about 675,000 in the United States alone, more than ten times as many Americans as fell in combat in the First World War. There followed a severe economic crisis from 1920 to 1921, which caused American gross domestic product to collapse. And then, as though out of nowhere, came a decade that has become known in collective memory as the Roaring Twenties, with economic growth of 43 percent between 1921 and 1929, with a Charleston-dancing youth, with an explosion of consumer goods and a cultural mood of new beginnings that knew nothing more of the pandemic. The Lost Generation that shaped this world was a generation that had experienced something it could not process and that therefore fled into a phase of maximum avoidance, in which nobody spoke any longer about what only a few years earlier had been the sole subject.

The American historian John M. Barry, author of the still-authoritative history of the Spanish flu, has pointed out that this connection between pandemic and subsequent boom is popular but historically more complex than often presented, because between 1918 and 1923 there first lay a phase of great political unrest, with strikes, race riots and the economic depression. Only afterwards came the mood of new beginnings. This point is important because it makes visible a pattern that is psychologically significant. A phase of threat is followed first by a phase of processing, which is often chaotic and painful. When this processing does not take place, because society cannot or will not perform it, there follows a phase of repression that shows itself in an exuberant appetite for consumption and risk. This repression is not coincidental; it is the psychological answer to a traumatic experience that has not been integrated.

The Yale sociologist and physician Nicholas Christakis, in his 2020 book Apollo’s Arrow, in which he placed the Coronavirus pandemic into a historical pattern, predicted that the acute phase of the pandemic would be followed by a phase of heightened religiosity and caution, and that this phase would invert into its opposite around 2024, with a wave of risk appetite, sociability, consumption and sexual liberality (Christakis, N. A., 2020, Apollo’s Arrow, Little, Brown Spark). That is exactly what occurred. Travel activity in 2024 exceeded the pre-pandemic level, consumption rose, risk appetite on financial markets climbed to historic highs. Nobody speaks of the pandemic anymore. Nobody reflects on the lessons that might have been drawn. It is as though it had never been.

Why is this so? For two reasons that interlock. First, the dopaminergic reward system responds to deprivation with a sensitivity shift that produces an exuberant reaction to stimuli once the deprivation phase ends. One knows this from addiction research, where after a phase of abstinence the next encounter with the substance often unfolds a stronger effect than before. On the societal level this mechanism functions in the same way. A phase of restriction is followed by a phase of exaggeration, because the reward system reacts oversensitively to stimuli. Second, and this is the weightier point, the human brain does not learn of its own accord from negative experiences but only when it works through those experiences reflectively. When the reflection does not take place, the experiences do leave traces in autobiographical memory, but they are not converted into a framework that could guide future behavior. They remain episodes, not lessons.

This is precisely the case with Otto sapiens. The pandemic was for him an episode, not a lesson. As soon as it was over, it was stowed back into the drawer of unpleasant memories, and life went on. He has not processed the fact that the same mechanisms that triggered the pandemic, namely global mobility, the intrusion into wildlife habitats, the inadequate preparation of health systems, continue to exist, and that the next pandemic is therefore only a question of time, because processing it would have required a reflective operation he does not perform. He reacts to the next crisis with the same surprise with which he reacted to the previous one. He overdoes it between the crises, and he is startled during them. He lives in a permanent back-and-forth between euphoria and fear, and he calls that living.

This is not only a historical observation, it is a diagnosis of the species’ prospects. If Otto sapiens does not learn, that is, if he proves structurally incapable of reflection, then he is also incapable of correction, and a species that cannot correct its mistakes is evolutionarily condemned. It will not perish through external catastrophe but through the cumulative effect of its own uncorrected mistakes, which pile up until the system collapses under their weight. That is the diagnosis. It is not dramatically formulated, it is sober. Homo sapiens will not die out from pandemic, war or climate catastrophe; he will die out from the voluntary surrender of thought. What will come after him is not the superman but Otto sapiens, a being who looks like Homo sapiens but is inwardly nothing more than an interface between his smartphone and his pizza.

A warning to all who have read this text to this point

Before I come to the conclusion, I insert a section that stands at this point in every one of my longer pieces and that I write today with particular satisfaction, because it contains the polemical pointe against which Otto sapiens has no defense. It is at the same time a test, a small self-check for every reader who has come this far, and it functions as follows.

Whoever has read this text to this section without checking the smartphone in between, without opening another tab in the browser, without resolving to have ChatGPT check later whether everything claimed here is correct, has passed the test. He is not the Otto sapiens this piece is about. He is a human being still capable of taking in a longer text without repeatedly dropping out along the way, and that is today a faculty that has become as rare as the quiet reading of a book on a still evening.

Whoever has not passed the test, that is, whoever interrupted himself multiple times, whoever consumed parallel stimuli, whoever made notes along the way so that ChatGPT could check the claims later, belongs to the target group of this piece. He need not feel offended, and in any case he will not, since he lacks the metacognition that would be necessary to feel offended by a diagnosis he does not understand. He will probably not read this piece to the end, because it is longer than his attention span, and should he read it to the end, he will either not understand it at all or misunderstand it as a personal attack, because he lacks the tool with which he could distinguish the diagnosis from the polemic.

It is a small, closed pointe, and it is very old, going back to a thought experiment Socrates already knew. Whoever has the illness cannot recognize it, because the recognizing is part of what the illness destroys. The diagnosis is therefore always also a kind of trap in which the patient catches himself when he reacts. Whoever reacts with the words this does not apply to me, I use ChatGPT responsibly, I do not scroll through Instagram all day, has not understood the pointe, because he would not have to pose the question of responsibility if he had metacognition that would show him whether he actually uses it responsibly or whether he only believes it. Whoever has metacognition knows he has it. Whoever has none does not know it, because he lacks the tool with which he would recognize his own deficiency.

Thus ends the polemical warning. The last word goes to the wall clock.

Closing word, with Bandit and popcorn in the front row

I am sitting, as I finish this piece, again in my study, the old wall clock is ticking, Bandit lies at my feet and sleeps, his breath raising his flank in a calm, predictable rhythm that does not run in sync with the tick of the clock but produces its own complementary time structure, in which the world stands still for a moment. On the side table stands a small bowl of popcorn that Loui made for me half an hour ago, because she knew I was writing and that the popcorn is a small ritual that makes the writing sessions nicer. It is 22:14 on a Wednesday evening in May 2026, and the world out there is exactly the world I have described in this piece, with all the Otto-sapiens symptoms, with the pizzeria scenes, with the Bild headlines, with the reels, with ChatGPT, with the experts who are no experts, with the next virus, the next war, the next scandal. I know this world. I observe it daily. I have learned to look at it from the front row without being drawn into it.

Bandit turns in his sleep, sighs softly, lays his head down again. He is a Malinois, a Belgian Shepherd, a breed known for its alertness, and he did not learn his alertness from us but in an earlier life that ended 2 years ago, when the protection-dog unit of the German Bundeswehr in which he served was dissolved for cost reasons. We took him in then, a dog with professional training, with everything that goes with it, and he has been my best friend ever since, which describes a relationship that, for anyone who knows a Malinois, requires no further explanation. He has, in the 2 years he has been with us, accustomed himself to a sleep mode that is interrupted only by real threats, not by every random vibration. He has something Otto sapiens has lost, namely the ability to distinguish between relevant and irrelevant stimuli and to let himself be interrupted only by the relevant ones. I envy him sometimes, and I resolve at the same time to learn from him, which is a strange constellation, because a human being should not actually have to learn from a dog how to live in his own time. But things have come to the point where this constellation is no longer strange, but instructive.

I ask myself, while the wall clock ticks on, where this madness ends, and I admit honestly that I have no answer. I do not see how Otto sapiens will free himself from his situation under his own power, because the conditions under which liberation would be possible are currently being systematically destroyed by him. Neither do I see that a political movement could compel this liberation, because political movements need majorities, and Otto sapiens is the majority. I see individual people awakening, laying their smartphones aside, using ChatGPT deliberately rather than blindly, relearning the here and now. I see them because they answer me in the comments of my blog posts, because they write me emails, because they recommend books to me that I do not yet know. They exist. But they are rare, and they are becoming rarer.

The book I have announced, Das Hamsterrad, on which I have been working for some months and which will appear in the coming months, will devote itself to exactly this question with a depth not possible in a blog post. It will dissect the architecture of the hamster wheel in which Otto sapiens lives, and it will show that this hamster wheel is not a coincidental by-product of consumer society but its business model. Otto sapiens is not a regrettable accident, he is a desired result. A society that educated its citizens to think would have a problem selling Bild-Plus subscriptions, Instagram advertising, ChatGPT subscriptions, Amazon orders. A society that equipped its citizens with metacognition would have a problem with every form of political manipulation, with every form of media steering, with every form of economic exploitation. Society therefore does the exact opposite. It cultivates Otto sapiens, because Otto sapiens is the most economically and politically functional subject a consumer society can produce.

Whoever has understood this knows why the hamster wheel will not be stopped by political reform, because the political class itself is part of the business model. He knows also why the hamster wheel will not be stopped by technical reform, because the technology is being developed further in precisely the direction that makes the hamster wheel more efficient. And he knows that the only possibility of liberation lies in individual exit, in the conscious decision not to put the smartphone in one’s pocket, not to click on the Bild-Zeitung, not to use ChatGPT as a truth oracle, not to adopt the media agenda as one’s own. Exit is possible. It is not easy. It requires discipline, and discipline requires metacognition, and metacognition must be trained, and training requires effort, and effort produces fear, and the circle closes. Whoever achieves the exit has reached something rare. Whoever does not achieve it remains in the hamster wheel. That is the sober diagnosis with which I close this piece.

I take another piece of popcorn from the bowl, listen to the wall clock now showing 22:27, look at Bandit who sleeps on peacefully, and think of the mozzarella in the pizzeria, which on Saturday will lie on my plate again. I will enjoy it, because I will taste it, and one can taste only in the here and now. Yesterday is history, tomorrow is a mystery, today is a gift, that is why it is called the present. That is the only wisdom Otto sapiens would need to save himself. And it is the only one he will not hear.

Tick. Pause. Tock. Pause.

Enough for today.

References

  • Butler, H. A., Pentoney, C., & Bong, M. P. (2017). Predicting real-world outcomes: Critical thinking ability is a better predictor of life decisions than intelligence. Thinking Skills and Creativity, 25, 38-46.
  • Castelo, N., Kushlev, K., Ward, A. F., Esterman, M., & Reiner, P. B. (2025). Blocking mobile internet on smartphones improves sustained attention, mental health, and subjective well-being. PNAS Nexus, 4(2), pgaf017.
  • Christakis, N. A. (2020). Apollo’s Arrow: The Profound and Enduring Impact of Coronavirus on the Way We Live. Little, Brown Spark.
  • Cytowic, R. E. (2025). Your Stone Age Brain in the Screen Age. MIT Press.
  • Dunbar, R. I. M. (1992). Neocortex size as a constraint on group size in primates. Journal of Human Evolution, 22(6), 469-493.
  • Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6.
  • Gregory, M. D., Kippenhan, J. S., Eisenberg, D. P., Kohn, P. D., Dickinson, D., Mattay, V. S., Chen, Q., Weinberger, D. R., Saad, Z. S., & Berman, K. F. (2017). Neanderthal-Derived Genetic Variation Shapes Modern Human Cranium and Brain. Scientific Reports, 7, 6308.
  • Green, R. E., Krause, J., Briggs, A. W., et al. (2010). A draft sequence of the Neandertal genome. Science, 328(5979), 710-722.
  • Loewenstein, G. (1994). The psychology of curiosity: A review and reinterpretation. Psychological Bulletin, 116(1), 75-98.
  • Maguire, E. A., Gadian, D. G., Johnsrude, I. S., Good, C. D., Ashburner, J., Frackowiak, R. S., & Frith, C. D. (2000). Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences, 97(8), 4398-4403.
  • McCombs, M. E., & Shaw, D. L. (1972). The agenda-setting function of mass media. Public Opinion Quarterly, 36(2), 176-187.
  • Neubauer, S., Hublin, J. J., & Gunz, P. (2018). The evolution of modern human brain shape. Science Advances, 4(1), eaao5961.
  • Simonti, C. N., Vernot, B., Bastarache, L., Bottinger, E., Carrell, D. S., Chisholm, R. L., Crosslin, D. R., Hebbring, S. J., Jarvik, G. P., Kullo, I. J., Li, R., Pathak, J., Ritchie, M. D., Roden, D. M., Verma, S. S., Tromp, G., Prato, J. D., Bush, W. S., Akey, J. M., Denny, J. C., & Capra, J. A. (2016). The phenotypic legacy of admixture between modern humans and Neandertals. Science, 351(6274), 737-741.
  • Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity. Journal of the Association for Consumer Research, 2(2), 140-154.
  • Washington Times. (2025, June 30). What happened to the media’s coverage of the Ukraine war.