What is all the fuss about GPT-3? - Deepstash

The Basics of GPT-3

GPT-3 can be understood through a few key terms. These are:

  1. Artificial Intelligence (AI)
  2. Machine learning (ML)
  3. Deep Learning (DL)
  4. Language Model (LM)
  5. Autoregressive (AR) Model
  6. Natural Language Processing (NLP)
  7. OpenAI

Once you get a hold of these terms, it will be easy to understand GPT-3.



Artificial Intelligence (AI) in GPT-3

AI is the ability of machines to perform tasks that normally require human intelligence, such as learning and problem-solving.



Machine learning (ML) in GPT-3

Machine learning is a field of artificial intelligence that focuses on understanding and creating methods that "learn". This means that the methods get better at doing certain tasks as they get more data.
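As a toy sketch (not from the article), "getting better with more data" can be illustrated by estimating the slope of a hypothetical linear relationship y ≈ 2x from noisy made-up examples: the estimate from all the examples is closer to the true slope than the estimate from just the first two.

```python
# Toy illustration of learning from data: least-squares estimate of the
# slope of y = k*x through the origin, from (x, y) example pairs.
def fit_slope(examples):
    # Least-squares slope through the origin: sum(x*y) / sum(x*x)
    num = sum(x * y for x, y in examples)
    den = sum(x * x for x, _ in examples)
    return num / den

# Made-up noisy observations of an underlying y = 2*x relationship.
data = [(1, 2.5), (2, 3.9), (3, 6.1), (4, 8.0)]

print(fit_slope(data[:2]))  # estimate from little data
print(fit_slope(data))      # estimate from more data: closer to 2.0
```

With only two examples the noise dominates; with all four, the estimate moves closer to the true slope of 2.0, which is the sense in which the method "learns" from data.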



Deep Learning (DL) in GPT-3

Deep learning is a type of machine learning based on artificial neural networks with many layers, which lets a computer learn useful representations from data largely by itself. This type of learning can be supervised, semi-supervised, or unsupervised.



Language Model (LM) in GPT-3

A language model is a mathematical model that predicts which word is likely to come next in a sequence, based on the probabilities of different word combinations.

Language models are probability distributions over sequences of words.

They are used for many different tasks, like Part of Speech (PoS) Tagging, Machine Translation, Text Classification, Speech Recognition, Information Retrieval, and News Article Generation.
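A minimal sketch of the idea, using a toy bigram model (a much simpler cousin of GPT-3): count which word follows which in a tiny made-up corpus, then turn the counts into next-word probabilities.

```python
from collections import Counter, defaultdict

# Toy bigram language model: estimate P(next word | current word) from counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word_probs(word):
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

GPT-3 does the same kind of next-word prediction, but conditions on much longer contexts than one word and uses a neural network rather than a count table.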



Autoregressive (AR) Model in GPT-3

An AR model describes a random process in which each new value depends on the values that came before it. Such models are used to understand time-varying processes in nature and economics. GPT-3 is autoregressive in this sense: it predicts each word from the words that precede it.
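As an illustrative sketch (the coefficient and noise scale are arbitrary choices, not from the article), here is a first-order autoregressive process, AR(1), where each value is a scaled copy of the previous value plus random noise:

```python
import random

# AR(1) sketch: x_t = phi * x_{t-1} + noise.
# phi and noise_scale are illustrative values chosen for the demo.
def ar1_series(n, phi=0.8, noise_scale=0.1, seed=0):
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, noise_scale)  # depends on the previous value
        series.append(x)
    return series

print(ar1_series(5))
```

Each output value is "regressed on" the model's own previous output, which is exactly the property the word autoregressive describes.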



Natural Language Processing (NLP) in GPT-3

NLP is a way to help computers understand human language. It is a subfield of linguistics, computer science, artificial intelligence, and information engineering.



OpenAI in GPT-3

OpenAI is a research lab that studies artificial intelligence. It was founded in 2015 with backing from donors such as Elon Musk, and later received a major investment from Microsoft.



GPT-3 explanation for the layman

GPT-3 stands for Generative Pre-trained Transformer 3.

GPT-3 is a computer program that can create text that looks like it was written by a human. This program is gaining popularity because it can also create code, stories, and poems.

GPT-3 has gained a lot of traction in the area of natural language processing (NLP), an essential sub-branch of data science.



Breakdown of Generative || Pre-trained || Transformer

Generative models are a type of statistical model used to create new data by learning the relationships between different variables.

Pre-trained models are models that have already been trained on a large dataset. This allows them to be used for tasks where it would be difficult to train a model from scratch. A pre-trained model may not be 100% accurate, but it can save you time and improve performance.

The Transformer is a deep learning model used for tasks such as machine translation and text classification. It is designed to handle sequential data, such as text.
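The "generative" part can be sketched in miniature (a toy example, not how GPT-3 is actually built): fit a distribution to observed data, then sample from it to produce new data with the same statistics.

```python
import random

# Toy generative model: learn word frequencies from observed data,
# then sample from that distribution to generate new sequences.
observed = ["sun", "sun", "rain", "sun", "cloud"]

rng = random.Random(42)
# Sampling weights follow the observed frequencies: "sun" is most likely.
generated = rng.choices(observed, k=3)
print(generated)
```

GPT-3 generates in the same spirit, but the distribution it samples from is a learned probability over the next word given all the preceding text.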



Creation of GPT-3

OpenAI, a research lab in San Francisco, created a deep learning model with 175 billion parameters (the learned weights of the network) that can produce human-like text.

It was trained on large text datasets with hundreds of billions of words.



GPT-3 is way better than its predecessor

Language models prior to GPT-3 were designed to perform a single, specific NLP task, such as generating text, summarizing, or classifying. The first of its kind, GPT-3 is a generalized language model that can perform well across a wide range of NLP tasks.



How does GPT-3 work?

There are two kinds of machine learning: supervised and unsupervised.

Supervised learning is when you have a lot of data that is carefully labeled so the machine can learn how to produce outputs for particular inputs.

Unsupervised learning is when the machine is exposed to a lot of data but doesn't have any labels.

GPT-3 is an unsupervised learner: it learned how to write by analyzing vast amounts of unlabeled text, such as Reddit posts, Wikipedia articles, and news articles.
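The contrast above can be sketched with toy data (the examples are invented for illustration): supervised learning needs human-provided labels, while next-word prediction turns raw unlabeled text into its own training signal.

```python
from collections import Counter, defaultdict

# Supervised learning uses labeled pairs (input, label), curated by people:
labeled = [("great movie", "positive"), ("awful plot", "negative")]

# Unsupervised learning uses raw text only. GPT-3's objective, predicting
# the next word, makes unlabeled text self-labeling: the "label" for each
# position is simply the word that actually comes next.
text = "machine learning needs data machine learning needs compute".split()
next_counts = defaultdict(Counter)
for cur, nxt in zip(text, text[1:]):
    next_counts[cur][nxt] += 1  # the next word acts as the label

print(next_counts["learning"].most_common(1))  # [('needs', 2)]
```

No human had to annotate the text: the training pairs fall straight out of the word order, which is what makes training on billions of words of raw web text feasible.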



