The Singularity Is a Mirror
Computer scientist Ray Kurzweil’s 2005 book The Singularity Is Near was a landmark in technological thought. In that book, Kurzweil summed up the progress of computing and other technologies in order to formulate a vision for the future of human beings in an increasingly tech-heavy world. Shortly before the middle of the 21st century, Kurzweil famously predicted, humans will merge with machines. By that time of convergence (the Singularity), machine intelligence will have so far outstripped human intelligence that humans will gradually abandon their biological frames and upload their consciousnesses into deathless silicon-based and other non-biological networks.
When I first read The Singularity Is Near nearly two decades ago, much of it sounded like science fiction. To give just one example, Kurzweil argued in that 2005 volume that nanobots—tiny machines far too small to see with the naked eye—would one day flood human bloodstreams, eliminating diseases with pinpoint accuracy and thereby extending the lifespan of the human body to Methuselah-like realms.
As outlandish as these and other predictions seemed at the time, though, I could not argue with one of the two fundamental premises of The Singularity Is Near. As Kurzweil says, technology, especially computing power, has been progressing and accelerating with such formidable and relentless momentum that a day is surely coming when computers will be able to pass the Turing Test, the standard developed by mathematician Alan Turing (1912-1954) for determining when, under certain conditions, it has become impossible to tell a machine’s conversational responses apart from a human being’s.1 (In his 1999 book The Age of Spiritual Machines, Kurzweil predicted that 2029 would be the year that computers pass the Turing Test.) This seems plausible because computer processing power has been growing by leaps and bounds. Moore’s Law, which is more an observation of past results than a rule about future progress, holds that the number of transistors packed onto a computer chip doubles about once every two years.2
There are physical limits to Moore’s Law, of course. For example, once transistors approach the size of atoms, it will become impossible to pack any more of them onto a single chip. And some people, such as Massachusetts Institute of Technology professor Charles Leiserson, think Moore’s Law lost its predictive power around 2016, meaning that further gains must now come largely from elsewhere—better software, better algorithms, and more specialized hardware—rather than from ever-denser chips.3 But even given the sunset built into Moore’s Law, advances in quantum computing and other technologies strengthen Kurzweil’s original thesis that machines are getting faster and smarter all the time. Anyone who doubts whether computers have made enormous advances these past few decades should find an old MS-DOS machine from the 1980s, boot it up (if it still works), play around with the floppy disk drive for a while, and then have a conversation on a 2025 machine with ChatGPT. The level that computers have reached in just the past few years is no longer merely awe-inspiring; it is downright spooky, even terrifying.4
But while Kurzweil’s first main thesis from his 2005 book is plainly true, it is the second thesis that tripped me up then—and still does. Kurzweil’s Singularity (and not just his—many others have argued for the same or similar things before and after The Singularity Is Near first came out) rests not just on the notion that computers are getting faster, which they are, but on a second premise, namely that computers and people can somehow come together in the future, becoming one thing: a “singularity” of man and machine. There is no evidence that this is possible. There is much evidence, to the contrary, that it is not. And yet, Kurzweil seems to have let his faith in technological progress overcome attention to more basic philosophical questions.
In his 2024 follow-up to the 2005 volume, The Singularity Is Nearer, Kurzweil takes stock of how his predictions have fared after nearly twenty years. The results are impressive when it comes to Kurzweil’s first thesis, that computers are improving. Kurzweil zooms out in The Singularity Is Nearer to take in human progress in a myriad of other ways as well. Over eight chapters, Kurzweil outlines how our lot as human beings has generally been improving. Kurzweil references the work of cognitive psychologist Steven Pinker and other Enlightenment-positive optimists in arguing that from crime to poverty to education, the world is, on the whole, becoming a better place.5 The Singularity Is Nearer sets up a reinforcement loop between an improving global society and improving computing power to posit the Singularity as a matter of time, an event already approaching. “Human biology is becoming better understood,” Kurzweil argues in the Introduction, while “computer power is becoming cheaper” and
. . . engineering is becoming possible at far smaller scales. As artificial intelligence grows in ability and information becomes more accessible, we are integrating these capabilities ever more closely with our natural biological intelligence. Eventually nanotechnology will enable these trends to culminate in directly expanding our brains with layers of virtual neurons in the cloud. In this way we will merge with AI and augment ourselves with millions of times the computational power that our biology gave us. This will expand our intelligence and consciousness so profoundly that it’s difficult to comprehend. This event is what I mean by the Singularity.
Kurzweil hinges the Singularity on the “law of accelerating returns,” of which Moore’s Law could be said to be but one part. Taken more broadly, this law of accelerating returns guides our expanding consciousness along what Kurzweil calls “six stages,” a key idea from The Singularity Is Near to which Kurzweil returns in Chapter One of his new book (“Where Are We in the Six Stages?”) for a reassessment. It is here that Kurzweil seeks to set the stage for the Singularity, but it is also here that we can see Kurzweil’s second main premise—that humans and machines can “merge”—begin to come apart. I turn to this in more detail below, but suffice it to note here that Kurzweil’s conception of consciousness as basically “information” presents a serious problem for the Singularity. It all starts with Kurzweil’s first stage, or “epoch,” which began with “the birth of the laws of physics and the chemistry they make possible,” something that happened beginning “a few hundred thousand years after the big bang.” Here, Kurzweil makes a strange—and, I think, for his thesis, fatal—remark. “‘Whoever,’” Kurzweil writes, using scare quotes, “designed the rules of the universe” also arranged the initial atomic forces, thus making subsequent physical “evolution through atoms” possible. This “Whoever” implies that the universe is the product, the creation, of a mind. But mind and information are two entirely different things. The former is necessarily prior to the latter. Kurzweil, then, puts the cart before the horse in arguing that information, worked correctly, can produce a superior mind.
The rest of the six stages follow from this wrongfooted start. In the “Second Epoch,” Kurzweil explains, we get life, arising out of complexifying molecules self-braiding into strands of DNA. As he imagines it, information is slowly becoming matter’s master. “In the Third Epoch,” Kurzweil continues, “animals described by DNA then formed brains, which themselves stored and processed information,” thereby providing “evolutionary advantages,” which in turn contributed to further brain development. Humans represent the Fourth Epoch, Kurzweil says, when “higher-level cognitive ability” and “thumbs” allowed animals “to translate thoughts into complex actions.” With Homo sapiens, information broke out of its biological confines and leapt into the wider world as humans
. . . create[d] technology that was able to store and manipulate information—from papyrus to hard drives. These technologies augmented our brains’ abilities to perceive, recall, and evaluate information patterns. This is another source of evolution that itself is far greater than the level of progress before it. With brains, we added roughly one cubic inch of brain matter every 100,000 years, whereas with digital information we are doubling price-performance about every sixteen months.
In these first four stages or epochs, it becomes clear that Kurzweil perceives a strange relationship between material and information, with information eventually turning back around to manipulate its material base in pursuit of ever faster and higher iterations of itself. Information mastered matter, and then used matter to supercharge information—but the relationship between matter and information is never properly explained. Kurzweil also makes an illicit shift between “brain” and “mind,” viewing both as subordinate to an almost magical force he calls “information.” This muddled thinking continues into, and makes possible, the Fifth Epoch: the Singularity. This is when “we will directly merge biological human cognition with the speed and power of our digital technology [achieving] brain-computer interfaces.” Finally, in the Sixth Epoch, “our intelligence spreads throughout the universe, turning ordinary matter into computronium, which is matter organized at the ultimate density of computation.” Born of information, then, our brains, and minds, immerse themselves and us (whoever we are) in information’s endless quest to realize itself more universally.
In Chapter Two, “Reinventing Intelligence,” Kurzweil explicates this transformation, from information rooted in biology to information digitized and roaming freely and deathlessly across the cosmos. Kurzweil explains that artificial intelligence (AI) represents a crucial development in the shift from biological to digital intelligence. However, in the history of the AI revolution that Kurzweil lays out, careful readers will be able to see that the AI we encounter in 2025 is not a fellow human mind, but merely a replication of the human brain. That Kurzweil also fails here to see the difference between brain and mind is a further indication that the Singularity he envisions is never going to come about.
To understand more fully why the Singularity is singularly impossible, we have to follow Kurzweil in his life’s work on this subject. Kurzweil traces the seeds of the current AI boom to work done by computer scientists Frank Rosenblatt (1928-1971) and Kurzweil’s MIT mentor, Marvin Minsky (1927-2016). “Minsky,” Kurzweil writes, “taught me that there are two techniques for creating automated solutions to problems: the symbolic approach and the connectionist approach. The symbolic approach describes in rule-based terms how a human expert would solve a problem,” such as by breaking mathematical solutions down into axioms and then using those axioms to solve other math problems from a generalized starting point. But this approach has a built-in limit, namely that complexity swamps problem-solving operations as machine thinking runs up against the intricate, tacit, non-explicit nature of the real world. As Kurzweil explains, a computer scientist named Douglas Lenat (1950-2023) and others created a computer system known as “Cyc,” from “encyclopedic,” which seeks to “encod[e] all of ‘commonsense knowledge,’” such as that dropped eggs break, into language that a computer will understand and be able to apply when formulating program-based models of real-world events.6
A moment’s thought will reveal that the symbols we humans use—everything from alphabets to analogical reasoning—are connected to an almost infinite Indra’s Net of rich meaning (which, like mind, is completely different from information). That we must teach computers that eggs break when we drop them is a telling commentary on the gap—an unbridgeable one, I think—between AI and the human mind. The brain works, not because it has tremendous computing power (although it doesn’t do half bad for a wet hunk of biological material), but because it is animated by soul and mind, immaterial things that no amount of silicon will ever be able to replicate. We know that eggs break because we live physically and mentally, spiritually and emotionally, in the world in which this sometimes happens. None of us has ever seen a dinosaur egg break, yet we know that fossilized dinosaur eggs must once have been breakable, too. No one ever wrote this down for us to remember. It’s something we learn as humans, because all things have meaning for us, and all symbols are rooted in this, our significant and signified world.
Our minds know things. Our human minds. Not so for computers, which process not meaning, but mere information. For the Singularity to happen, either humans must become computers, or computers must become humans; but if either of those things occurs, then the Singularity becomes meaningless, so the idea defeats itself. This is Kurzweil’s dilemma, one which he himself has experienced throughout his long and illustrious career in computer science and invention. The very need for a Turing Test arises because computers and people are different, and because computers can imitate people but not become them. Computers, ironically, administer to human users something called CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) precisely because humans, not computers, know how to extract meaning from the grainy photos and wavy alphanumeric characters that such tests use. Computers don’t understand those photos and symbols because computers don’t have minds. They are separated from us in that way forever.
The other approach to AI, connectionism, is what has allowed the development of the truly astounding level of computer intelligence we see today. Through “neural nets” stacked and intertwined at increasing levels of internal complexity, machines can use that complexity—slowly and cumbersomely at first, but with increasing and even startling speed—to find solutions to problems on the nets’ own terms. Connectionism, Kurzweil writes, fell out of favor after Marvin Minsky’s 1969 criticism of “the surge in interest in this area, even though he had done pioneering work on neural nets in 1953.” The problem, Minsky and MIT colleague Seymour Papert (1928-2016) determined, was that single-layer neural nets cannot solve certain kinds of problems, such as computing the XOR function, because a single layer can only draw linear boundaries between inputs.7 According to Kurzweil, hardware had to advance to a point where multi-layer neural networks became practical. After this point was reached around the mid-2010s, machine intelligence began to take off, reflecting the steadily accelerating nature of information dissemination and processing in the material universe, with “hundreds of millions to billions of years” required initially for matter “to create a new level of detail.” As with current AI systems, hardware needed to catch up with information, so that information could use hardware (computer networks, human brains) to raise itself up to greater and greater heights.
As information was born out of matter, and then, through the vehicle of biology (thumbs and brains), turned back on itself in a reflexive strengthening maneuver designed to intensify the evolutionary process, the power of the human brain, itself both a product and multiplier of this evolutionary power, became apparent. This is Kurzweil’s main jam. Eventually, “evolution,” Kurzweil says—making the process the agent, as evolutionists are wont to do—“needed to devise a way for the brain to develop new behaviors without waiting for genetic change to reconfigure the cerebellum. This was the neocortex.” This new design “was capable of a new type of thinking: it could invent new behaviors in days or even hours. This unlocked the power of learning.” Machine learning using multi-layer neural networks “recreat[es] the powers of the neocortex,” allowing AI to make the accelerating jumps in ability that it has in recent years. AI cannot understand symbolic thought, in other words, but it can use material power to foster a silicon-based intelligence that can take on, and outdo, some aspects of the human mind. AI can digitally fudge mind by harnessing the multi-layered, multi-connected power of electronic switches. It makes a self-contained Indra’s Net and then, ignoring the fundamental split, acts as though the AI world and the human world are one. AI blinds us with science. It uses sheer speed to pretend to be an animate, ensouled being. It is this non-human intelligence, shorn of the darker aspects of human nature, that Kurzweil projects into the future, adding the plot twist of the Singularity when, he says, we and our AI imitators will “merge” into one superior intellectual force.
Most of the rest of The Singularity Is Nearer (with the exception of Chapters Three, Seven, and Eight, to which I turn later) continues along this optimistic line of reasoning. Chapter Four, “Life Is Getting Exponentially Better,” is a further Pinkerite celebration of the good news of the world, such as that poverty levels are decreasing and life expectancy is increasing. This good news gets buried under the bad news on which, according to Kurzweil, we are evolutionarily predisposed to focus, as “pay[ing] attention to potential challenges” has long “been more important for our survival.” But it’s the good news that counts, as we are all moving toward a happier time when we and machines can finally combine.
Chapter Five, “The Future of Jobs: Good or Bad?” is similarly sunny, following the standard creative-destruction line (one of the chapter’s subsections is even titled “Destruction and Creation”) in arguing that “the convergent technologies of the next two decades will create enormous prosperity and material abundance around the world. But these same forces will also unsettle the global economy, forcing society to adapt at an unprecedented pace.” AI, Kurzweil predicts, will threaten a long list of occupations, from truck driving to factory work, with disruption or outright extinction. But just as most people were once farmers, while now very few are, Kurzweil predicts that the AI revolution will eventually work out for the better for the labor force as a whole. Not only that, but having AI do more work for us will free us up for artistic and other cultural pursuits, as well as making possible a universal basic income funded by the profits generated by automation.
Chapter Six, “The Next Thirty Years in Health and Well-Being,” is a more detailed look at how nanotechnology and other high-tech innovations will help humans develop new drugs and defeat disease, including mental health disorders. All in all, the future Kurzweil foresees is bright, and the Singularity will be a tremendous boon for mankind, as well as an acceleration of the positive feedback loop, a function of the benevolence of information, in which we have the good fortune to live.
Kurzweil is an optimist, but not a Pollyanna. In Chapter Seven, “Peril,” he addresses some possible nightmare scenarios for the future as AI takes more and more control of our lives. Nuclear war, for example, and the possibility of creating “supervirus[es]” as weapons, loom on the horizon. Nanotechnology, too, Kurzweil admits, could be weaponized. And then there is the possibility of “gray goo,” resulting when “self-replicating machines that consume carbon-based matter and turn it into more self-replicating machines [. . .] lead[s] to a runaway chain reaction, potentially converting the entire biomass of the earth to such machines.” But, Kurzweil counters (citing the work of Robert A. Freitas), “blue goo,” comprising “defensive nanobots,” could be “dispersed optimally around the world.” This would allow the good goo to overpower the bad goo, thereby saving the world (and potentially the universe) from being overrun by malicious nanobot swarms.8 AI in general, Kurzweil argues, can be trained to be moral and democratic to ensure “value alignment” between AI and human ethics.9 So, while the future has some blemishes, there is nothing in Kurzweil’s Singularity vision fundamentally stopping the ever-upward progression of man joining together with machine. Likewise, in Chapter Eight, “Dialogue with Cassandra,” Kurzweil answers some objections from a fictional skeptic about the timing of the merger between computers and the human neocortex, and also about the identity of such a hybrid. Here, too, we learn that there is nothing to fear. AI will allow humans to enhance and expand their human capacities while exponentially augmenting and accelerating original mental capabilities and physical longevity.
These seven chapters, which track Kurzweil’s decades-old prophecy of a blended tech and humanity farther into the future and in more detail, shape the contours of The Singularity Is Nearer. Kurzweil remains optimistic about tech and also about human beings. Or perhaps it is more accurate to say that, thanks to the power of Kurzweil’s “information,” he has never made much of a distinction between computers and people to begin with. At any rate, much has changed since Kurzweil’s 2005 The Singularity Is Near, with AI now verging on overtaking human ability in many areas, if not already having surpassed it. But Kurzweil’s basic view of technology, and of human nature, remains remarkably consistent: Things are getting better all the time, despite occasional hiccups. AI will be a big help in our quest for a fairer world. All will work out well in the end, “the end” being a limitless upsweep of improvement across the board, crowned by a moment in evolutionary history, a Singularity, when information’s two offspring, humans and computers, will come together in a marriage of supreme happiness.
This kind of relentless, even ruthless optimism is deemed “futurist” by many Kurzweil interpreters, but in many ways what Kurzweil offers is simply a recapitulation of the political moment, now largely passed, in which it was possible to speak of history as a benign process and of the ages to come as waiting patiently and gently for us to reach them. To put it more bluntly, in Kurzweil’s work one detects the smugness of the late-capitalist liberal, secure in his certainty that the way he thinks the world ought to be is an axiomatic must for everyone on the planet. Kurzweil thinks he has given the world a window into time to come, and in many ways he has. But he has also given us, if we choose to look for it, a mirror, which shows us to be as prideful and conceited as ever.
Human nature is much darker than Kurzweil seems willing to admit. The argument, advanced by Kurzweil, Pinker, and many other liberals, that our species is becoming steadily less violent and more humane, is belied by the billions of children worldwide whom the abortionist has kept from seeing the light of day. Kurzweil’s optimism is nice to read, but it is at best a partial glimpse of a species (us) that has a very twisted heart and has used technological advances for evil as well as for good. Kurzweil is betting on the good side of human nature to win out, but it seems to me that the good and bad sides are inseparable, and so whatever future mankind has will be both high-tech and fraught with danger.
But over-optimism is just one problem with the Singularity idea. In addition to many smaller flaws of logic and fact, the one major flaw in Kurzweil’s reasoning destroys the very reason for having embarked on the Singularity quest in the first place: He does not know what a human being is. Over-optimism is one thing, but a category error is another.
In Chapter Three of The Singularity Is Nearer, “Who Am I?,” Kurzweil takes up the crucial question of identity, including consciousness. Here Kurzweil balks, and balks badly. “Despite its unverifiability, consciousness cannot simply be ignored,” Kurzweil writes. “We view material objects, no matter how intricate or interesting or valuable, as important only to the extent that they affect the conscious experience of conscious beings.” Kurzweil is not a hidebound materialist, then, something that can also be gleaned from his notion that information arises from the material substrate—mind seeping out of atoms and molecules like ghosts from cemetery earth—but it is not identical with the material realm. However, it would have been much better for Kurzweil had he been a thoroughgoing materialist, for then he would have been dealing with the ancient roadblocks, ones encountered by other materialists from Democritus to Dawkins, in thinking about how the mind, a clearly non-material thing, works in a substance-only universe. As a materialist, Kurzweil might have sought to go around those roadblocks by ignoring them, as materialists often do, pretending that the mind is not spirit and that the soul, the seat of the self and the mover of the mind, is a fiction. This would have streamlined The Singularity Is Nearer considerably, giving it a philosophical consistency and allowing other Hegelians (for Kurzweil is a Hegelian to beat the band) to assent to Kurzweil’s prescriptions for a machine-man future. In other words, if people are just stuff, and computers are just stuff too, then people can become computers, as far as materialism goes, and Kurzweil would not have to explain anything beyond that.
But Kurzweil is too honest to walk down this primrose path. He is not a materialist, at least not in the traditional sense. He admits that consciousness is hard to pin down, even as he flirts with the materialist interpretation of mind. “Science tells us that complex brains give rise to functional consciousness,” Kurzweil says in the “Who Am I” chapter of The Singularity Is Nearer. “Gives rise to” is very much a weasel phrase that “science” loves to deploy, of course, although this is no fault of Kurzweil’s. Grand pianos give rise to music, but that brings us no closer to unraveling the mystery of what music is, and how it is different from noise, and why it has the power to make us cry. Kurzweil, to his credit, remains open to various other possibilities for consciousness. “What causes us to have subjective consciousness?” he asks.
Some say God. Others believe consciousness is a product of purely physical processes. But regardless of consciousness’s origin, both poles of the spiritual-secular divide agree that it is somehow sacred. How people (and at least some other animals) became conscious is just a causal argument, whether it was by a benign divinity or undirected nature. The ultimate result, however, is not open to debate—anyone who doesn’t acknowledge a child’s consciousness and capacity for suffering is considered gravely immoral.
I leave aside here the obvious contradiction between Kurzweil’s liberal politics and the pain that abortion causes for conscious children in the womb. Kurzweil continues:
Yet the cause behind subjective consciousness will soon be more than just a subject of philosophical speculation. As technology gives us the ability to expand our consciousness beyond our biological brains, we’ll need to decide what we believe generates the qualia [that is, as Kurzweil explains elsewhere in this chapter, “subjective experiences inside a mind”] at the core of our identity, and focus on preserving it. Since observable behaviors are our only available proxy for inferring subjective consciousness, our natural intuition closely matches the most scientifically plausible account: namely, that brains that can support more sophisticated behavior likewise give rise to more sophisticated subjective consciousness. Sophisticated behavior [. . .] arises from the complexity of information processing in a brain—and this in turn is largely determined by how flexibly it can represent information and how many hierarchical layers are in its network. [. . .] Whether a brain is made of carbon or silicon, the complexity that would enable it to give the outward signs of consciousness also endows it with subjective inner life.
There is much more in this chapter that is well worth reading, such as compelling ruminations on computer scientist Stephen Wolfram’s theory of computational irreducibility, on physicist Roger Penrose’s estimation of the likelihood of a universe having starting entropy low enough to enable complex life to emerge, and on the ethics of dealing with “replicants” (humanoid creations). But the passages quoted above should make it clear that Kurzweil’s anthropology is vague. He is a bad, inconsistent, wishy-washy half-materialist, yes. But that is the least of his worries. He does not know what a human being is. He also does not know what a machine is. He thinks that acting conscious translates to being conscious, and so his explanations of consciousness get badly muddled, too. Because of this, Kurzweil’s Singularity, which is a merger of humans and machines, is bound to be off. To put it the opposite way, if Kurzweil were able to give a good definition of a human and a machine, he would have to abandon his Singularity as forever out of reach. The Singularity Is Nearer is a good mirror of our current human conceits. It reminds us that for all our talk of human improvement and the centrality of consciousness, we still do not know how to treat everyone in our human family as human beings, because we don’t know what humans are. At the same time, the book’s author unwittingly proves the opposite of what he has spent much of his life predicting.
A human being, like every other thing, is not an accident, but an iteration of an organizing principle.10 We humans are a certain kind of being, and the limits of our humanity are not infinitely elastic. We do have some things in common with other living creatures, but are utterly unlike inanimate objects such as tables and rocks. “Tables” and “rocks” are also qualitatively different in a related way, in that the former are designed and made by humans, that is, are products of our minds, while the latter are mere lumps of matter, without any intervention by our minds after the first mind—Kurzweil’s “Whoever”—designed and created our shared world. A human can never merge with a thing in such a way that the human identity is lost within the thing. A prosthetic limb, a pair of eyeglasses, a well-fitting hat—these things enhance the human form and function because they are made, by humans, to work with our natures. But that we can become a prosthesis, a pair of spectacles, or an article of clothing is another question entirely, and entirely out of the question.
By the same token, a machine, even the most complex of machines, is technology, from the Greek technē, meaning art or craft—that is, a product of human invention. To say that the craft can remake the craftsman—to say that the machine can take in the machine-maker and make of him some new thing—is to get the concept of technology precisely backwards. Nor does it help that Kurzweil does not understand information. Kurzweil sees the universe as having a spirit abroad in it, “information,” which has the power to build brains that then use information to build their own improved replacements. Information, if this were what it really is, would then perhaps be able to overcome the maker-made division and allow the machine to remake the machinist. But with information, too, we run headlong into a logical brick wall. Information is the product of mind. It does not float among the mute, dumb atoms like a Hegelian Geist waiting to be manifested. Kurzweil writes that a “Whoever” designed the universe. Whoever it was, His was an awesome mind. And we are His technology. The consciousness that we have is not information waxing reflective as it gains in complexity. The consciousness that we have is the capacity to know and understand complexity, but complexity is not consciousness’s sufficient condition. When it comes to consciousness, all we know is that we know. This knowing is a bit of information, but information is not what does the knowing. Humans—minds—know. And we know as, and because we are, human persons. It is as simple as that. We know also because, logically, there was first a mind that knew us. Complexity can mimic mind, as AI and artificial neural networks now show in abundance. But first there must be mind to mimic. That is not just information. That is the human person.
Humans are not things, not machines. We are also not information, and are not information’s by-product. Neither are the machines we build. So, no matter how hard we try, we will never be able to “merge” with computers. We can go on training computers to ape our abilities, and soon, if not already, computers will surpass us in the subtle motions of mind. But that will eternally be a derived achievement. Doubly so. First there was us, then there were computers. And before there was either, there was some greater mind, from which the orderliness of information and the ability to know what information means—that is, the mystery of consciousness at play—first came. Computers are becoming more like us in information processing, but they will never be us, as the parrot who recites phrases is never the parrot-keeper. We are a certain kind of thing, made by a mind to have a nature that we call “human.” Humanity is an exclusive club, open only to those who have human nature, which itself was thought up, somehow, by a “Whoever” whose mind “gave rise to” humans, parrots, and all the material pieces of which we are made. Humanity is limiting in that way, not infinitely malleable. There are borders to the human race, and it is precisely those borders that make us who we are and ensure that we will never “merge” with anything, except with other humans in sexual reproduction as we participate in the creation of others of our own humankind.
Kurzweil sees humanity as limiting, too, but he sees those limits as barriers to future glory waiting to be overcome. "The promise of the Singularity is to free us from all those limitations," he writes later in the "Who Am I" chapter, referring to evolutionary features of the brain that restrict our ability to learn and cause us to hold on to "fears, traumas, and doubts," as well as the built-in destruction of the bodily frame that comes when every organism eventually dies. The Singularity, Kurzweil writes, will let us live lives "not [. . .] marred and cut short by the failings of our biology," and will also let "our self-modification powers [. . .] be fully realized." I think of other "self-modification powers" that humans have attempted, such as birth control pills, that have caused untold damage to the human body and spirit as human beings, modified, began to behave in ways entirely contrary to our nature. Transgenderism, the new birth control, has wrought misery in flesh and soul that may end up going beyond even what the pill has done to us. The Singularity, I believe, will be a similar series of disasters, a chasing after a freakish disfiguring of our human nature made unnaturally machine-like, a warping of the human person in pursuit of an impossible and ultimately anti-human dream. The Singularity is therefore a mirror for our fallen species, if we have the courage to look at ourselves for who we really are.
NOTES
1. On the Turing Test, see Luciano Floridi, The Fourth Revolution: How the Infosphere Is Reshaping Human Reality (Oxford, UK: Oxford University Press, 2014), 132-135.
2. The namesake of Moore's Law, Gordon Moore (1929–2023), passed away in March 2023.
3. Audrey Woods, "The Death of Moore's Law: What It Means and What Might Fill the Gap Going Forward," MIT CSAIL Alliances, n.d., https://cap.csail.mit.edu/death-moores-law-what-it-means-and-what-might-fill-gap-going-forward
4. An appendix, "Price Performance of Computation, 1939–2023," in The Singularity Is Nearer nicely corroborates Kurzweil's arguments about the steady, exponential increases in computing power over time.
5. See, e.g., Steven Pinker, The Better Angels of Our Nature: Why Violence Has Declined (New York, NY: Viking, 2011).
6. See Cade Metz, "Douglas Lenat, 72; His Life's Work Was Trying to Make A.I. More Human," New York Times, September 5, 2023, A19.
7. Marvin L. Minsky and Seymour A. Papert, Perceptrons (Cambridge, MA: MIT Press, 1969).
8. See Robert A. Freitas, "Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations," Foresight Institute, April 2000, https://legacy.foresight.org/nano/Ecophagy.html
9. See Bruce Sterling, "The Asilomar AI Principles," WIRED, June 1, 2018, https://www.wired.com/beyond-the-beyond/2018/06/asilomar-ai-principles/, and Cameron Jenkins, "AI Innovators Take Pledge against Autonomous Killer Weapons," NPR, July 18, 2018, https://www.npr.org/2018/07/18/630146884/ai-innovators-take-pledge-against-autonomous-killer-weapons
10. See Peter Redpath, The Moral Psychology of St. Thomas Aquinas: An Introduction to Ragamuffin Ethics (St. Louis, MO: Enroute), 111-125, and Peter Redpath, "Recovering Our Understanding of Philosophy and Science," Angelicum Academy, n.d., https://www.angelicum.net/classical-homeschooling-magazine/fourth-issue/recovering-our-understanding-of-philosophy-and-science/
____________________________________________
Original Bio:
Jason Morgan is associate professor at Reitaku University in Kashiwa, Japan.