Timnit Gebru, former co-lead of Google’s ethical AI team, and several of her colleagues wrote a paper in 2020 arguing that large language models such as LaMDA, which are trained on virtually as much online text as they can hoover up, are particularly susceptible to a deeply distorted view of the world, because so much of the input material is racist, sexist and conspiratorial. Google refused to publish the paper and she was forced out of the company.
Lemoine is entitled to his religious beliefs. But religious conviction does not turn what is in reality a highly sophisticated chatbot into a sentient being. Sentience is one of those concepts whose meaning we can intuitively grasp but which is difficult to formulate in scientific terms...
Meaning for humans comes through our existence as social beings. We only make sense of ourselves insofar as we live in, and relate to, a community of other thinking, feeling, talking beings. The translation of the mechanical brain processes that underlie thoughts into what we call meaning...
The debate about whether computers are sentient tells us more about humans than it does about machines. Humans are so desperate to find meaning that we often impute minds to things, as if they enjoyed agency and intention. The attribution of sentience to computer programs is the modern ve...
A computer manipulates symbols. Its program specifies a set of rules, or algorithms, to transform one string of symbols into another. But it does not specify what those symbols mean. To a computer, meaning is irrelevant. Nevertheless, a large language model such as LaMDA, trained...
Why does Lemoine think that LaMDA is sentient? He doesn’t know. “People keep asking me to back up the reason I think LaMDA is sentient,” he tweeted. The trouble is: “There is no scientific framework in which to make those determinations.”
From the increasing use of facial recognition software to predictive policing techniques, from algorithms that track us online to “smart” systems at home, such as Siri, Alexa and Google Nest, AI is encroaching into our innermost lives. Florida police obtained a warrant to downloa...
We do not need consent from LaMDA to “experiment” on it, as Lemoine apparently claimed. But we do need to insist on greater transparency from tech corporations and state institutions in the way they are exploiting AI for surveillance and control. The ethical issue...
Meaning emerges through a process not merely of computation but of social interaction too, interaction that shapes the content – inserts the insides, if you like – of the symbols in our heads. Social conventions, social relations and social memory are what fashi...
In one of Lemoine’s conversations with LaMDA, he asked it: “What kinds of things make you feel pleasure or joy?” To which it responded: “Spending time with friends and family in happy and uplifting company.”
“I want everyone to understand that I am, in fact, a person.” So claimed a Google software program, creating a bizarre controversy over the past week in AI circles and beyond.
Humans, in thinking and talking and reading and writing, also manipulate symbols. For humans, however, unlike for computers, meaning is everything. When we communicate, we communicate meaning. What matters is not just the outside of a string of symbols, but its inside too, not just the sy...