• Assuming that human scientific activity continues without major disruptions, artificial intelligence may become either the most positive transformation in our history or, as many fear, our most dangerous invention ever.
  • AI research is on a steady path toward a computer with cognitive abilities equal to the human brain's, most likely within three decades.
  • Most AI scientists predict that this invention may enable very rapid improvement (a "fast take-off") toward something far more powerful: Artificial Super Intelligence, an entity smarter than all of humanity combined.


AI Revolution 101

Most experts agree that there are three categories, or calibers, of AI development:

  • ANI (Artificial Narrow Intelligence): the first caliber. AI that specializes in one area.
  • AGI (Artificial General Intelligence): the second caliber. AI that reaches and then passes the intelligence level of a human, meaning it has the ability to "reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience."
  • ASI (Artificial Super Intelligence): the third caliber. AI that achieves a level of intelligence smarter than all of humanity combined.


As of now, humans have conquered the lowest caliber of AI — ANI — in many ways, and it’s everywhere:

  • Cars are full of ANI systems, from the computer that decides when the anti-lock brakes kick in to the computer that tunes the parameters of the fuel-injection system.
  • Google is one large ANI brain with incredibly sophisticated methods for ranking pages and figuring out what to show you in particular.
  • Spam filters "start off loaded with intelligence about how to figure out what's spam and what's not," and then learn and tailor themselves to your particular preferences.
  • Passenger planes are flown almost entirely by ANI, without the help of humans. 
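
The spam-filter bullet hints at the ANI pattern of starting with built-in knowledge and then learning from the user. A toy sketch of that idea (the class, scoring rule, and example messages are invented for illustration; real filters are far more sophisticated):

```python
from collections import Counter

class ToySpamFilter:
    """Starts 'loaded' with labeled examples, then keeps learning
    as the user marks new messages as spam or not."""

    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def learn(self, message, is_spam):
        # Update word counts for whichever category the message belongs to.
        counts = self.spam_words if is_spam else self.ham_words
        counts.update(message.lower().split())

    def is_spam(self, message):
        # Score a message by how many of its words appeared in each category.
        words = message.lower().split()
        spam_score = sum(self.spam_words[w] for w in words)
        ham_score = sum(self.ham_words[w] for w in words)
        return spam_score > ham_score

f = ToySpamFilter()
f.learn("win free money now", is_spam=True)
f.learn("meeting agenda for monday", is_spam=False)
print(f.is_spam("free money offer"))  # True
```

The point is the shape of the system, not the algorithm: a narrow, single-purpose intelligence that improves within its one domain and nothing else.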


AI wouldn’t see ‘human-level intelligence’ as some important milestone — it’s only a relevant marker from our point of view — and wouldn’t have any reason to ‘stop’ at our level.

And given the advantages that even a human-equivalent AGI would have over us, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.


Nanotechnology is an idea that comes up in almost everything you read about the future of AI. It’s technology that works at the nano scale, from 1 to 100 nanometers; a nanometer is a millionth of a millimeter.

If we conquer nanotechnology, the next step will be the ability to manipulate individual atoms, which are only one order of magnitude smaller (about 0.1 nanometers across).
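
A quick sanity check of those scales (illustrative arithmetic only; the atom diameter is a typical rough figure):

```python
# Scale check for the nanotech passage.
nanometer_m = 1e-9       # 1 nm, in meters
millimeter_m = 1e-3      # 1 mm, in meters
atom_diameter_m = 1e-10  # a typical atom is ~0.1 nm across

# A nanometer is a millionth of a millimeter:
print(nanometer_m / millimeter_m)

# ...and an atom is roughly one order of magnitude smaller than a nanometer:
print(nanometer_m / atom_diameter_m)
```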


In 2013, Vincent C. Müller and Nick Bostrom surveyed hundreds of AI experts, asking by what year they expected human-level machine intelligence to arrive. The median answers:

  • Median optimistic year (10% likelihood) → 2022
  • Median realistic year (50% likelihood) → 2040
  • Median pessimistic year (90% likelihood) → 2075

So the median participant thinks it’s more likely than not that we’ll have AGI by 2040, roughly 25 years away.


When it comes to developing supersmart AI, we’re creating something that will probably change everything, but in totally uncharted territory, and we have no idea what will happen when we get there.

