A common misconception about large language models like ChatGPT and its older brethren, GPT-3 and GPT-2, is that they are some kind of “super Googles,” or digital versions of a reference librarian, looking up answers to questions from some infinitely large library of facts, or smooshing together pastiches of stories and characters.
They don’t do any of that – at least, they were not explicitly designed to.
“We should first understand what something does and then judge if it does it well.”
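To make “what it does” concrete, here is a minimal sketch, not from the original article, using the openly available GPT-2 model through Hugging Face’s transformers library. The model name, prompt, and library choice are illustrative assumptions; the point is only that, given a prompt, the model produces a probability distribution over possible next tokens rather than retrieving a stored answer.

# Illustrative sketch (assumes the transformers and torch packages are installed):
# a GPT-style model scores candidate *next tokens* for a prompt; nothing is looked up.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"            # illustrative prompt, not from the article
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # shape: (batch, seq_len, vocab_size)

# Probability distribution over the vocabulary for the next token only.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

for p, idx in zip(top.values, top.indices):
    token = tokenizer.decode([idx.item()])
    print(f"{token!r:>12}  p={p.item():.3f}")

Whatever tokens come out on top, they are ranked by learned statistics over text, not fetched from a fact store. That is the behaviour worth understanding before judging how well the model performs it.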