AI Revolution 101

Introduction

  • Assuming that human scientific activity continues without major disruption, artificial intelligence may become either the most positive transformation in our history or, as many fear, our most dangerous invention of all. 
  • AI research is on a steady path toward developing a computer with cognitive abilities equal to the human brain, most likely within three decades.
  • Many AI scientists predict that this invention would enable very rapid improvement (called a fast take-off) toward something much more powerful: Artificial Super Intelligence, an entity smarter than all of humanity combined.


Most experts agree that there are three categories, or calibers, of AI development:

  • ANI (Artificial Narrow Intelligence): the first caliber; AI that specializes in one area. 
  • AGI (Artificial General Intelligence): the second caliber; AI that reaches and then passes the intelligence level of a human, meaning it has the ability to “reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.”
  • ASI (Artificial Super Intelligence): the third caliber; AI that achieves a level of intelligence smarter than all of humanity combined.


As of now, humans have conquered the lowest caliber of AI — ANI — in many ways, and it’s everywhere:

  • Cars are full of ANI systems, from the computer that figures out when the anti-lock brakes kick in to the computer that tunes the parameters of the fuel-injection system.
  • Google is one large ANI brain with incredibly sophisticated methods for ranking pages and figuring out what to show you in particular.
  • Spam filters start off loaded with intelligence about how to figure out what’s spam and what’s not, then keep tailoring that intelligence to your particular preferences (a toy sketch of this kind of narrow filtering follows this list).
  • Passenger planes are flown largely by ANI autopilot systems, with little help from human pilots. 
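To make “narrow intelligence” concrete, here is a minimal sketch of the kind of single-purpose learning a spam filter embodies. Everything in it (the example messages, the word-frequency scoring rule, the threshold) is invented for illustration and is far cruder than what real filters do, but it shows the point: the system is competent at exactly one task and nothing else.

```python
# Toy illustration of ANI (narrow intelligence): a filter that does exactly one
# job, separating spam from non-spam, and nothing else. The example messages,
# the scoring rule, and the threshold are all invented for illustration.
from collections import Counter

spam_examples = [
    "win a free prize now",
    "free money claim your prize",
]
ham_examples = [
    "meeting moved to three",
    "can you review my draft",
]

def word_counts(messages):
    """Count how often each word appears across a list of messages."""
    counts = Counter()
    for message in messages:
        counts.update(message.lower().split())
    return counts

spam_counts = word_counts(spam_examples)
ham_counts = word_counts(ham_examples)

def spam_score(message):
    """Score a message by how much its words lean toward the spam examples."""
    return sum(spam_counts[w] - ham_counts[w] for w in message.lower().split())

def is_spam(message, threshold=1):
    return spam_score(message) >= threshold

print(is_spam("claim your free prize"))     # True  -- resembles the spam examples
print(is_spam("review the meeting notes"))  # False -- resembles normal mail
```

A real filter is trained on vastly more data and keeps updating its statistics as you mark messages as spam or not spam, which is the tailoring to your preferences mentioned above.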


AI wouldn’t see ‘human-level intelligence’ as some important milestone — it’s only a relevant marker from our point of view — and wouldn’t have any reason to ‘stop’ at our level.

And given the advantages that even a human-level AGI would have over us, it seems pretty obvious that it would hit human intelligence only for a brief instant before racing onward to the realm of superior-to-human intelligence.


Nanotechnology is an idea that comes up in almost everything you read about the future of AI. It is technology that works at the nano scale, from 1 to 100 nanometers; a nanometer is a millionth of a millimeter.

If we conquer nanotechnology, the next step will be the ability to manipulate individual atoms, which are only about one order of magnitude smaller.
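To put those scales side by side (the figures below simply restate the definitions above; ~0.1 nm is a typical round figure for an atom's diameter):

\[
1\ \text{nm} = 10^{-9}\ \text{m} = 10^{-6}\ \text{mm},
\qquad
d_{\text{atom}} \approx 0.1\ \text{nm},
\]

so a typical atom sits one order of magnitude below the bottom of the 1–100 nm nano scale.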


In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts by what year they expected human-level machine intelligence (AGI) to exist, at three levels of confidence:

  • Median optimistic year (10% likelihood) → 2022
  • Median realistic year (50% likelihood) → 2040
  • Median pessimistic year (90% likelihood) → 2075

So the median participant thinks it's more likely than not that we'll have AGI by around 2040, i.e. within roughly 25 to 30 years of the survey.
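For reference, that range is just the gap between the median realistic year and the survey year, both given above:

\[
2040 - 2013 = 27\ \text{years}
\]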


When it comes to developing supersmart AI, we're creating something that will probably change everything, but in totally uncharted territory, and we have no idea what will happen when we get there.

