A common misconception about large language models like ChatGPT and its older brethren, GPT-3 and GPT-2, is that they are some kind of “super Google,” or a digital version of a reference librarian, looking up answers to questions in some infinitely large library of facts, or smooshing together pastiches of stories and characters.
They don’t do any of that – at least, they were not explicitly designed to.
A language model like ChatGPT, more formally known as a “generative pretrained transformer” (that’s what the G, P, and T stand for), takes in the current conversation, forms a probability for every word in its vocabulary given that conversation, and then chooses one of those words as its next output.
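That loop — score every word, convert the scores to probabilities, sample one — can be sketched in a few lines. This is a toy illustration, not ChatGPT’s actual code: the three-word vocabulary and the scores are made up, and a real model produces its scores with a neural network over tens of thousands of tokens.

```python
import math
import random

def softmax(scores):
    """Turn raw model scores into a probability distribution that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_word(vocab, scores, rng=None):
    """Sample the next word according to the probability of each vocabulary word."""
    rng = rng or random.Random(0)
    probs = softmax(scores)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Hypothetical scores for the context "The cat sat on the ..."
vocab = ["mat", "dog", "moon"]
scores = [3.0, 1.0, 0.2]  # made-up numbers: "mat" is by far the most likely
print(next_word(vocab, scores))
```

Because the model samples rather than always taking the single most likely word, the same prompt can produce different continuations — which is also why its output is plausible rather than guaranteed to be true.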
So ChatGPT doesn’t have facts, per se. It just knows what word should come next. Put another way, ChatGPT doesn’t try to write sentences that are true. But it does try to write sentences that are plausible.
And for a machine that is d...
“We should first understand what something does and then judge if it does it well.”