Why the Singularity Still Isn’t Near
In recent years a curious constellation of ideas has emerged from the tech-elite orbit: a world in which humanity is no longer seen as a given, but as a species to be upgraded, transcended or even abandoned in favour of a “post-human” future. At its core sit the ambitious hopes of the “upload your mind”, “colonise Mars”, “create god-like artificial intelligence” crowd. Some critics have captured this cluster with the acronym TESCREAL, which stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism.
What makes TESCREAL more than just science-fiction fantasy is the fact that influential figures, from tech founders to VC backers, are either openly promoting or at least aligned with its themes: human enhancement, radical life extension, the primacy of “future generations” over the present, and the proposition that surrendering to super-intelligence is not only inevitable but desirable. This backdrop helps explain why the vision laid out by Ray Kurzweil in The Singularity Is Near (and its 2024 sequel, The Singularity Is Nearer) wasn’t born in a vacuum. It is part of a broader movement, at once utopian, technocratic and sometimes unsettlingly detached from the messy biological realities of human experience.
Kurzweil’s central claim is ambitious. He argues that thanks to accelerating returns in computing, genetics, nanotechnology and robotics, we will soon cross a threshold: artificial intelligences will not only equal human intelligence but merge with it, yielding a new epoch in which human consciousness is no longer strictly biological. He predicts a date, often cited as 2045, for the event he calls the Singularity: a moment when humans and machines fuse, mortality becomes optional and intelligence expands perhaps a million-fold. At the same time, this narrative carries many of the familiar features of myth: an epoch-ending transformation, an exodus from the flawed human world to a “better” one, a redemption or transcendence of our biological state. Critics argue that the singularity story echoes apocalyptic religious narratives rather than purely scientific forecasting.
To evaluate this seriously, though, we must bring it down to earth. What does it really mean to upload a mind, transcend biology or merge with machines? And does the evidence from neuroscience, psychology and human embodiment support the assumptions beneath the headline? From my vantage as a psychology teacher who has just covered memory with a Year 12 cohort, the short answer is: the more you know about the brain, the more the promise starts to unravel.
One major conceptual leap in Kurzweil’s argument is the idea that consciousness, identity and memory are essentially informational: data that can be scanned, transferred, stored, duplicated and re-instantiated in another substrate (be that silicon, cloud or nanobot swarm). But real-world neuroscience tells a different story. Neurons are not digital switches but complex analog devices, deeply integrated with biochemistry, hormones, glial cells, their micro-environment and an ever-changing web of connections that map not only what we know, but how we feel, how we respond, what our bodies have borne. Memory is not simply a “file” waiting to be copied. It is embedded in networks of synapses shaped by experience, emotion, stress hormones, sleep quality, nutrition, microbiome signals and body–brain feedback loops. To treat memory as an abstraction you can detach from the body is to ignore everything we’ve learned in biology and psychology. It also ignores the profound role of embodiment: the fact that our mind is not floating free, but woven through our living tissue, our endocrine system, our metabolic state and our sensory situation.
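To get a rough feel for the scale of the capture problem, here is a back-of-the-envelope sketch in Python. The neuron and synapse counts are commonly cited ballpark estimates; the number of state variables per synapse is an illustrative assumption, not a measured quantity.

```python
# Back-of-the-envelope: how much state would a "full capture" of a brain involve?
NEURONS = 8.6e10             # ~86 billion neurons (commonly cited estimate)
SYNAPSES = 1.0e15            # upper-end estimate of total synapse count
STATE_VARS_PER_SYNAPSE = 10  # assumed: weight, receptor densities, vesicle pools...
BYTES_PER_VAR = 4            # one 32-bit float per variable

total_vars = SYNAPSES * STATE_VARS_PER_SYNAPSE
total_bytes = total_vars * BYTES_PER_VAR

print(f"synapses per neuron: {SYNAPSES / NEURONS:.0f}")         # ~11,600
print(f"state variables: {total_vars:.1e}")                     # 1.0e+16
print(f"storage at 4 bytes each: {total_bytes / 1e15:.0f} PB")  # 40 PB

# Even this static snapshot omits glia, neuromodulator gradients, gene
# expression, hormonal state and every body-brain feedback loop: the
# dynamics, not just the wiring.
```

The number itself is almost beside the point; what the sketch shows is that every extra biochemical variable multiplies the problem, and nobody knows how many variables are enough.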
The idea of an “upload” therefore faces multiple problems: first, how do you fully capture all the parameters—neural connectivity and biochemical context? Second, once captured, how do you instantiate that structure in a substrate that reproduces all the messy embodied feedback loops? Third, even if you could do both, would the resulting entity be you, or just a clone that wakes up believing it is you? Philosophers call this the identity problem. In teaching memory I emphasise that even the simplest models (neurotransmitter release, synaptic plasticity) are still only partial. The leap from that to “copy consciousness wholesale into the cloud” requires assuming away decades of unresolved hard problems: what consciousness is, how embodied minds emerge, how they integrate with emotion and meaning. Kurzweil’s theory ignores or downplays almost all of these.
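The identity problem has a loose software analogy that makes the distinction vivid. In this purely illustrative Python sketch, a perfect copy of an object’s state is equal to the original but is not the original, and the two diverge the moment either one changes:

```python
import copy

# A toy "mind" as a bundle of state. Purely an analogy, nothing more.
original = {"memories": ["first day of school", "Year 12 memory unit"],
            "beliefs": {"i_am_the_original": True}}

upload = copy.deepcopy(original)

print(upload == original)   # True  -> identical contents
print(upload is original)   # False -> two distinct entities

# Mutating the copy leaves the original untouched: from here on the two
# diverge, each "believing" it is the one that was scanned.
upload["memories"].append("waking up in the cloud")
print(original["memories"])  # unchanged
```

Equality of contents is cheap; identity is not. That gap is exactly what the upload narrative glosses over.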
There are other key critiques of Kurzweil’s approach that deserve mention. One is the so-called exponential-growth fallacy: the idea that because computing power, genomics or robotics have grown rapidly, that growth will continue unabated and hit a vertical “knee” in the curve. But as critics note, exponential growth rarely lasts. Resource constraints, complexity limits and diminishing returns all militate against this narrative; most real technology trajectories are logistic S-curves that look exponential early on and then flatten as limits bite, as the sketch below illustrates.
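Here is that minimal sketch, with invented parameters (a growth rate r and a carrying capacity K): the exponential and logistic curves are nearly indistinguishable early on, then diverge wildly.

```python
import math

# Toy comparison of exponential vs. logistic growth. All parameters are
# made up for illustration; only the qualitative shapes matter.
r = 0.5      # growth rate
K = 1000.0   # carrying capacity (the limit the logistic curve respects)
x0 = 1.0     # starting value

for t in range(0, 31, 5):
    exponential = x0 * math.exp(r * t)
    logistic = K / (1 + ((K - x0) / x0) * math.exp(-r * t))
    print(f"t={t:2d}  exponential={exponential:12.1f}  logistic={logistic:8.1f}")

# Early on the two are almost identical; by t=30 the exponential curve has
# passed 3 million while the logistic curve has flattened just below 1000.
```

Extrapolating from the early, shared portion of the curve tells you nothing about which regime you are in, which is precisely the trap the “knee of the curve” rhetoric invites.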
Another critique: Kurzweil has a track record of predictions that did not pan out in the timeframe he projected. His methods rely on cherry-picked events, log-log plots and hindsight-fitted “laws” rather than rigorous scientific models.
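To see why log-log plots flatter this style of argument, consider a small sketch with entirely invented “milestones”. A handful of hand-picked points will almost always yield an impressive-looking straight-line fit on log-log axes, whatever the underlying history:

```python
import numpy as np

# Invented data: years before present vs. an arbitrary "capability" score.
# The point is methodological, not historical.
years_before_present = np.array([10000, 2000, 500, 100, 30, 5])
capability = np.array([1, 8, 40, 300, 2000, 20000])  # made-up numbers

log_x = np.log10(years_before_present)
log_y = np.log10(capability)

slope, intercept = np.polyfit(log_x, log_y, 1)   # straight line on log-log axes
r = np.corrcoef(log_x, log_y)[0, 1]
print(f"fitted power-law exponent ~ {slope:.2f}, correlation r = {r:.3f}")

# A near-perfect fit here says more about how the points were selected
# than about any law of accelerating returns.
```

With so few degrees of freedom, the person choosing the milestones is effectively drawing the line themselves.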
Additionally, the singularity vision bears an uncanny resemblance to religious myth: it presents a “rapture” of intelligence, human transcendence and immortality, a narrative that feels more faith-based than empirically grounded.
When we map Kurzweil’s claims onto the larger TESCREAL bundle we see how he fits within a broader ideological ecosystem. TESCREAL captures the overlapping beliefs of transhumanism (we must enhance the human), extropianism (perpetual progress), singularitarianism (an intelligence explosion), cosmism (space colonisation and post-human futures), rationalism (cognitive optimisation), effective altruism (doing the most quantifiable good) and longtermism (the claim that vast numbers of future lives can outweigh present ones). Tech elites who invest in brain–computer interfaces, life-extension ventures or colony rockets often cite these ideas implicitly if not explicitly. They frame the future as a project of exit: from Earth, from body, from democracy, from mortality. This matters because it is not just about what we can do, but what we should do, and whose values get centred in that decision.
From my vantage as a teacher of psychology, what stands out is that while the grand narratives of TESCREAL may hold appeal for the ultra-wealthy, they sideline the human dimension: vulnerability, ageing, suffering, embodied memory, relational identity. When you reduce humans to computation, you lose the texture of what it means to live, to fail, to learn, to be finite. This is not just a philosophical complaint; it has material implications. If priority and capital shift towards “upload futures”, will the lives of millions of ageing, embodied, fragile humans receive the same attention? What happens to the forgotten parts of our world while the singularity-train departs?
Ultimately the question is not just “Is the singularity possible?” but “Why do we treat it as inevitable?” Because if we buy the inevitability narrative, we surrender democratic oversight, ethical deliberation and pluralism in favour of a future shaped by a handful of tech believers. The messy, complex human condition gets sidelined while the next phase of history is engineered by a few.
So what’s the takeaway? The promise of transcendence is seductive; after all, who wouldn’t want to live forever or think faster? But seductive doesn’t mean sustainable. From memory research to neuroscience, the body–brain connection remains stubborn. Any narrative that treats consciousness as software and the body as optional hardware deserves rigorous scrutiny. The fact that we’re still marvelling at the Singularity means we’re still ignoring what we know about being human.
Further Reading
- Becker, Adam. More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity. 2025.
- Torres, Émile P. The End of Humanity: AI, Transhumanism, and the New Religion of Silicon Valley. 2024.
- Bhaskar, Michael. Human Frontiers: The Future of Big Ideas in an Age of Small Thinking. 2021.
- Smith, Gary. The AI Delusion. 2018.