

Technology III

Are we heading towards a Technological Singularity?


No one would dispute that technology is progressing at a rapid pace. Blockchain, virtual reality, 3D printing, autonomous transportation – it’s difficult to keep up with developments in these and other breakthrough technologies. Major areas of innovation are arriving faster and faster.

And no one would disagree that artificial intelligence offers greater capacity than human brains in many respects, powered by machine learning and quantum computing. Machines can process information faster than we ever could. Their capacity, too, is increasing exponentially.

But will we ever reach the point where human beings are no longer in charge – a time when we are no longer the dominant beings on Earth? And will these kinds of breakthroughs and developments inevitably happen on their own, powered and informed by previous breakthroughs?


What is Technological Singularity?

Those kinds of questions help frame some of the issues behind technological singularity – that point in time when rapidly accelerating technological growth becomes uncontrollable and irreversible. Singularity, in this case, is the tipping point where an intelligent machine can design and produce improved subsequent versions of itself and other machines without human intervention, bypassing the limiting boundaries of human intelligence.

Technological singularity isn’t just a matter of machine capability. It has a time element as well. The technological singularity theory holds that this kind of self-propelled innovation will happen at a rapidly accelerating pace. Technology breakthroughs and developments will occur over shorter and shorter periods of time until we reach the point, according to Kevin Kelly, founding executive editor of Wired magazine, that “all the change in the last million years will be superseded by the change in the next five minutes.” Only non-humans would be able to survive this chaos, which leads us to the next element related to singularity: transhumanism.



Transhumanism

Taken to an extreme, technological singularity can reach a point where the line between humans and machines is erased. Some say we’ll see human brains duplicated, or removed and placed within never-dying machines, so the “person” lives forever.

Another transhumanist scenario suggests that we’ll reach the point where embedding AI, biotechnology, and genetic manipulation within human bodies will allow a person to live forever. After all, prosthetics, implants, and artificial valves have already sent us down this path to a limited degree.

And then there’s the speculation that machines will create and program robots that will dominate the world. Their actions will be 100% focused on accomplishing goals without consideration for the externalities they create. They’ll have no hesitancy to destroy humans, the environment, and certainly our social norms if that’s what it takes.

We’ve seen plenty of Hollywood movies pitting mankind against machines. Thanks to human ingenuity, the humans usually win – so we can sleep at night, and so our favorite actor’s character isn’t destroyed – but there’s no reason to think that would be the case if computers and robots truly progressed to that level of development.


Can it Happen?

The first person to use the word “singularity” in a technological context was John von Neumann, in the mid-20th century. The ideas were further fleshed out by science fiction writer Vernor Vinge in the 1980s and in Ray Kurzweil’s 2005 book, The Singularity is Near: When Humans Transcend Biology. According to Kurzweil, machine intelligence will surpass human intelligence in 2029, and the technological singularity will occur around 2045.

No doubt, we’ll certainly accomplish a lot before 2029 and many aspects of our lives will probably be unrecognizable in 24 years. We’ve already witnessed unprecedented increases in technology, along with advances in genetic engineering, synthetic biology, nootropic drugs, and direct brain-computer interfaces.

We’re accomplishing things we never even considered two decades ago, let alone thought were possible. This kind of progress won’t stop, but it doesn’t necessarily mean the advances will build to a crescendo and culminate in a moment of singularity.


Color Me Skeptical

I’m rather skeptical of many of the predictions surrounding the singularity, particularly the transhumanism part of the equation. I find I’m in good company as a skeptic because Gordon Moore also has his doubts, and Moore’s Law is often used to reinforce the prediction of singularity. Other skeptics include people like Jaron Lanier, Bruce Sterling, and Paul Allen.

I simply don’t see the emergence of a tipping point where we lose ownership of progress. In my opinion, several factors will keep this from happening:

  • As machines get smarter, the remaining challenges that must be solved before they can become fully autonomous will keep getting more complex. The gap between machine capability and that moving target will never close.
  • Societies will form institutions to ensure that AI advancements are constrained by human values. The “mad scientist,” Dr. Frankenstein scenario isn’t realistic. The line between the technology that supports humanity and the technology that threatens it will be policed by smart people who will advise policymakers accordingly.
  • As futurist Martin Ford suggests, well before we get to the point of technological singularity, the technological advancements leading up to it will displace so many workers in so many skilled professions that society will lose all desire to continue down that path.

We Don’t Need Singularity

The concepts around technological singularity are the extrapolation of our legitimate progress taken to the point of absurdity. Yes, we’re moving up an exponential growth curve, and yes, we’re heading into uncharted territory. But five minutes of innovation supplanting all known technology? Really? An evolved form of humanity instantly taking over the earth? Really? Those aren’t small steps just over the horizon; they’re chasms that won’t be crossed.

So far no one has been able to answer the fundamental question of “Why?”

So take a deep breath … and focus on the incredible progress we’ve been making, progress that will give us an unbelievable new world in which to live as we move into the future. And yes, we can enjoy AI-themed science fiction without assuming the story is inevitable. It’s possible to believe that technology will continue to rapidly transform our lives without buying into the apocalyptic visions of technological singularity and transhumanism.

Since a singularity is the point where the rules of science stop working – like the gravitational singularity of a black hole – there are no good models for predicting what comes out the other side.

It would be a new beginning, not a future.


By Futurist Thomas Frey

Author of “Epiphany Z – 8 Radical Visions for Transforming Your Future”


Foresight For Development – Funding for this uniquely African foresight site was generously provided by the Rockefeller Foundation.