Forget Sentience. The Worry Is That AI Copies Human Bias.

Claiming Personhood

“I want everyone to understand that I am, in fact, a person.” So claimed a Google software program, creating a bizarre controversy over the past week in AI circles and beyond.

The programme is called LaMDA, an acronym for Language Model for Dialogue Applications, a project run by Google. The human to whom it declared itself a person was Blake Lemoine, a senior software engineer at Google. He believes that LaMDA is sentient and should be accorded the same rights and courtesies as any other sentient being. It even has preferred pronouns (it/its if you must know).


Attributing Personhood

Why does Lemoine think that LaMDA is sentient? He doesn’t know. “People keep asking me to back up the reason I think LaMDA is sentient,” he tweeted. The trouble is: “There is no scientific framework in which to make those determinations.” So, instead: “My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.”


Defining Sentience

Lemoine is entitled to his religious beliefs. But religious conviction does not turn what is in reality a highly sophisticated chatbot into a sentient being. Sentience is one of those concepts whose meaning we can grasp intuitively but which is difficult to formulate in scientific terms. It is often conflated with similarly ill-defined concepts such as consciousness, self-consciousness, self-awareness and intelligence. The cognitive scientist Gary Marcus describes sentience as being “aware of yourself in the world”. LaMDA, he adds, “simply isn’t”.


To A Computer, Meaning Is Irrelevant

A computer manipulates symbols. Its program specifies a set of rules, or algorithms, to transform one string of symbols into another. But it does not specify what those symbols mean. To a computer, meaning is irrelevant. Nevertheless, a large language model such as LaMDA, trained on the extraordinary amount of text that is online, can become adept at recognising patterns and responses meaningful to humans.
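The point can be made concrete with a deliberately crude sketch (the corpus and all names here are invented for illustration): a program that counts which symbol follows which in some text, then chains the most frequent successors together. It produces a human-plausible string while "meaning" never enters the computation at any step.

```python
from collections import Counter, defaultdict

# Invented toy corpus; real language models train on vastly more text,
# but the principle of symbol statistics is the same.
corpus = (
    "spending time with friends and family in happy company . "
    "spending time with family is a joy ."
)

tokens = corpus.split()

# Count which symbol follows which -- pure syntax, no semantics.
follows = defaultdict(Counter)
for cur, nxt in zip(tokens, tokens[1:]):
    follows[cur][nxt] += 1

def predict_next(word):
    """Return the symbol that most often followed `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# Chain predictions: the program emits a fluent-looking string
# without "knowing" what any of its symbols mean.
word, output = "spending", ["spending"]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```

A large language model replaces these raw counts with billions of learned parameters, but the same observation holds: the transformation from input string to output string is defined entirely over the symbols, not over anything they refer to.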


Having Sense And Making Sense

In one of Lemoine’s conversations with LaMDA, he asked it: “What kinds of things make you feel pleasure or joy?” To which it responded: “Spending time with friends and family in happy and uplifting company.”

It’s a response that makes perfect sense to a human. We do find joy in “spending time with friends and family”. But in what sense has LaMDA ever spent “time with family”? It has been trained well enough to recognise that this would be a meaningful sentence for humans, and an eloquent response to the question it was asked, without the sentence ever being meaningful to itself.


To Humans, Meaning Is Everything

Humans, in thinking and talking and reading and writing, also manipulate symbols. For humans, however, unlike for computers, meaning is everything. When we communicate, we communicate meaning. What matters is not just the outside of a string of symbols, but its inside too, not just the syntax but the semantics.


Meaning Requires A Social World

Meaning for humans comes through our existence as social beings. We only make sense of ourselves insofar as we live in, and relate to, a community of other thinking, feeling, talking beings. The translation of the mechanical brain processes that underlie thoughts into what we call meaning requires a social world and an agreed convention to make sense of that experience.


Meaning Emerges Through Social Interaction

Meaning emerges through a process not merely of computation but of social interaction too, interaction that shapes the content – inserts the insides, if you like – of the symbols in our heads. Social conventions, social relations and social memory are what fashion the rules that ascribe meaning. It is precisely the social context that trips up the most adept machines.


Modern-Day Anthropomorphizing

The debate about whether computers are sentient tells us more about humans than it does about machines. Humans are so desperate to find meaning that we often impute minds to things, as if they enjoyed agency and intention. The attribution of sentience to computer programs is the modern version of the ancients seeing wind, sea and sun as possessed of mind, spirit and divinity.


The Issue Of Bias

There are many issues relating to AI about which we should worry.

None of them has to do with sentience.

There is, for instance, the issue of bias. Because algorithms and other forms of software are trained using data from human societies, they often replicate the biases and attitudes of those societies. Facial recognition software exhibits racial biases, and people have been wrongly arrested on the basis of mistaken matches. AI used in healthcare or recruitment can replicate real-life social biases.
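The mechanism is mundane, which is part of why it is dangerous. A minimal sketch, using entirely invented records, shows how a system that merely summarises past hiring decisions will score two equally qualified candidates differently, with no prejudice written anywhere in the code:

```python
# Invented historical records: (group, qualified, hired).
# The bias lives in the data, not in any line of the program.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

def hire_rate(group):
    """Fraction of past applicants from `group` who were hired."""
    outcomes = [hired for g, _, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# A "model" that ranks candidates by their group's historical hire rate
# simply reproduces the unequal past treatment.
print(hire_rate("A"))  # 1.0
print(hire_rate("B"))  # ~0.33
```

Real systems use far more elaborate statistics, but the lesson scales: a model optimised to reproduce past decisions will faithfully reproduce past discrimination unless that pattern is deliberately measured and corrected for.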


The Issue Of Bigotry

Timnit Gebru, former head of Googleā€™s ethical AI team, and several of her colleagues wrote a paper in 2020 that showed that large language models, such as LaMDA, which are trained on virtually as much online text as they can hoover up, can be particularly susceptible to a deeply distorted view of the world because so much of the input material is racist, sexist and conspiratorial. Google refused to publish the paper and she was forced out of the company.


The Issue Of Privacy

From the increasing use of facial recognition software to predictive policing techniques, from algorithms that track us online to “smart” systems at home, such as Siri, Alexa and Google Nest, AI is encroaching into our innermost lives. Florida police obtained a warrant to download recordings of private conversations made by Amazon Echo devices. We are stumbling towards a digital panopticon.


The Issue Of Consent

We do not need consent from LaMDA to “experiment” on it, as Lemoine apparently claimed. But we do need to insist on greater transparency from tech corporations and state institutions in the way they are exploiting AI for surveillance and control. The ethical issues raised by AI are both much smaller and much bigger than the fantasy of a sentient machine.


CURATED BY

xarikleia

“An idea is something that won’t work unless you do.” - Thomas A. Edison

CURATOR'S NOTE

“And humans said ‘Let us make AI in our image, after our likeness’ […] So humans created AI in their own image, in the image of humans created they it.” Sound familiar? Replace “humans” with “God” and “AI” with “man” and you have the Bible (Genesis 1:26).

