In 2020, Timnit Gebru, former head of Google's ethical AI team, and several of her colleagues wrote a paper showing that large language models such as LaMDA, which are trained on virtually as much online text as they can hoover up, can be particularly susceptible to a deeply distorted view of the world, because so much of the input material is racist, sexist and conspiratorial. Google refused to let the paper be published, and Gebru was forced out of the company.
“And humans said ‘Let us make AI in our image, after our likeness’ […] So humans created AI in their own image, in the image of humans created they it.” Sound familiar? Replace “humans” with “God” and “AI” with “man” and you have the Bible (Genesis 1:26).