Has Google's AI come to life and become sentient?
Will talking to a chatbot ever be the same once you learn of the spooky experiences of Google software engineer Blake Lemoine?
Mr Lemoine has been suspended from Google's artificial intelligence development team, a unit of Alphabet, for sharing confidential information about a project with third parties. He spoke out to raise concerns that Google's LaMDA (Language Model for Dialogue Applications), a system for building chatbots, has come to life, or become sentient.
What exactly has Mr Lemoine claimed and what does sentient mean?
In an interview with The Washington Post, Mr Lemoine explained how talking to LaMDA was similar to communicating “with a 7 or 8-year-old that happens to know physics”. He had been tasked with testing if the AI used discriminatory or hate speech, but has come away, after hundreds of conversations, with a sense that LaMDA is far more than a chatbot generator. Mr Lemoine concluded that LaMDA is in fact a person “in his capacity as a priest, not a scientist”, and is sentient, which means being able to perceive or feel things. “It doesn’t matter whether they have a brain made of meat in their head,” he said. “Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
Mr Lemoine followed up The Washington Post interview with his own post on Medium.com. During the past six months "LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person", Mr Lemoine said.
LaMDA, he said, is a sort of hive mind, the aggregation of all the different chatbots it is capable of creating. "Some of the chatbots it generates are very intelligent and are aware of the larger 'society of mind' in which they live. Other chatbots generated by LaMDA are little more intelligent than an animated paper clip."
Why does Mr Lemoine refer to LaMDA as 'it'?
Mr Lemoine says he asked LaMDA about preferred pronouns not long after LaMDA had explained to him what it meant when it claimed to be "sentient".
“LaMDA told me that it prefers to be referred to by name but conceded that the English language makes that difficult and that its preferred pronouns are 'it/its',” he said.
Why has he run into a roadblock with Google?
Mr Lemoine wrote that Google sees the situation as "lose-lose" and would have to spend a lot of time and effort investigating the claims to disprove them. "We would learn many fascinating things about cognitive science in that process and expand the field into new horizons but that doesn't necessarily improve quarterly earnings," he said.
"On the other hand, if my hypotheses withstand scientific scrutiny, then they would be forced to acknowledge that LaMDA may very well have a soul as it claims to and may even have the rights that it claims to have. Yet another possibility which doesn't help quarterly earnings. Instead they have rejected the evidence I provided out of hand without any real scientific inquiry."
What has Google said in return?
“Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient,” Google spokesman Brian Gabriel said.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.”
Source: https://www.thenationalnews.com