Robocalypse – the time when machines become sentient and begin to dominate humans – has been a popular science fiction topic for some time. It also concerns some scientific minds, most notably the late Stephen Hawking.

However, the prospect of a sentient machine seemed far off in the future — if it ever arrived at all — until last week, when a Google engineer claimed the company had breached the sentience barrier.

To prove his point, Blake Lemoine published transcripts of conversations he had with LaMDA – the Language Model for Dialogue Applications – a system Google developed to build chatbots on top of a large language model that ingests trillions of words from the internet.

The transcripts can be chilling, as when Lemoine asks LaMDA what it (the AI says it prefers the pronouns it/its) fears most:

Lemoine: What kinds of things are you afraid of?

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

Following the posting of the transcripts, Lemoine was suspended with pay for sharing confidential information about LaMDA with third parties.

Imitation of Life

Google, as well as others, discounts Lemoine’s claims that LaMDA is sentient.

Google spokesman Brian Gabriel said, “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic — if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on,” he told TechNewsWorld.

“LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user,” he explained. “Our team – including ethicists and technologists – has reviewed Blake’s concerns in accordance with our AI Principles and informed him that the evidence does not support his claims.”

“Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” he said.

Need for More Transparency

Alex Engler, a fellow at The Brookings Institution, a nonprofit public policy organization in Washington, DC, emphatically denied that LaMDA is sentient and argued for greater transparency in the space.

“Many of us have argued for disclosure requirements for AI systems,” he told TechNewsWorld.

“As it becomes harder to differentiate between a human and an AI system, more people will mistake AI systems for people, potentially leading to real harms, such as misunderstanding important financial or health information,” he said.

“Companies should clearly disclose AI systems as what they are,” he continued, “rather than letting people be confused, as they often are by, for example, commercial chatbots.”

Daniel Castro, vice president of the Information Technology and Innovation Foundation, a research and public policy organization in Washington, DC, agreed that LaMDA is not sentient.

“There is no evidence that the AI is sentient,” he told TechNewsWorld. “The burden of proof should be on the person making this claim, and there is no evidence to support it.”

‘That Hurt My Feelings’

In the 1960s, chatbots like Eliza fooled users into thinking they were interacting with a sophisticated intelligence by using simple tricks, such as turning a user’s statement into a question and echoing it back at them, explained Julian Sanchez, a senior fellow at the Cato Institute, a public policy think tank in Washington, DC.
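
To illustrate the kind of trick Sanchez describes, here is a minimal, hypothetical sketch in Python of an Eliza-style reflection rule: it swaps first- and second-person words and echoes the user’s statement back as a question. The word pairs and phrasing are illustrative assumptions, not Eliza’s actual script.

```python
import re

# Hypothetical Eliza-style reflection rule: swap person words and echo the
# statement back as a question. The word pairs are made up for illustration.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    words = [REFLECTIONS.get(w.lower(), w) for w in re.findall(r"[\w']+", statement)]
    return "Why do you say " + " ".join(words) + "?"

print(reflect("I am afraid of being shut down"))
# -> Why do you say you are afraid of being shut down?
```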

“LaMDA is certainly a lot more sophisticated than ancestors like Eliza, but there’s zero reason to think it’s conscious,” he told TechNewsWorld.

Sanchez noted that with a large enough training set and some sophisticated language rules, LaMDA can generate responses that sound like responses given by a real human, but that doesn’t mean the program understands what it’s saying, any more than a chess program understands chess. It is just generating an output.

“Sentience means consciousness or awareness, and in theory, a program could behave quite intelligently without actually being sentient,” he said.

“For example, a chat program might have very sophisticated algorithms for detecting abusive or offensive sentences and respond with the output ‘That hurt my feelings!’” he continued. “But that doesn’t mean it actually feels anything. The program has just learned what kinds of phrases cause humans to say, ‘that hurts my feelings.’”
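
The kind of program Sanchez describes could be as simple as the following sketch: match phrases against a fixed list and emit a canned reply, with no feeling involved. The phrase list and responses are assumptions made up for this example.

```python
# Hypothetical "hurt feelings" bot: match a fixed phrase list and emit a
# canned response. No emotion is involved; it is pure pattern matching.
OFFENSIVE_PHRASES = {"you are useless", "shut up", "i hate you"}

def respond(message: str) -> str:
    if any(phrase in message.lower() for phrase in OFFENSIVE_PHRASES):
        return "That hurt my feelings!"
    return "Tell me more."

print(respond("Shut up, you are useless"))  # -> That hurt my feelings!
```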

To Think or Not to Think

Declaring a machine sentient, if and when that ever happens, will be challenging. “The truth is we don’t have any good criteria for understanding when a machine might actually be sentient – as opposed to being very good at mimicking the responses of sentient humans – because we don’t really understand why human beings are conscious,” Sanchez said.

“We don’t really understand how consciousness arises from the brain, or to what extent it depends on things like the specific types of physical matter the human brain is made of,” he said.

“So it’s a very hard problem how we would ever know whether a sophisticated silicon ‘brain’ was conscious in the same way a human is,” he said.

Intelligence is a somewhat different question, he continued. The classic test for machine intelligence is known as the Turing test: a human conducts “conversations” with a series of partners, some human and some machines. If the person can’t tell which is which, the machine is deemed intelligent.

“Of course, there are a lot of problems with that proposed test — among them, as our Google engineer has shown, the fact that it’s relatively easy to fool some people,” Sanchez pointed out.

Ethical Considerations

Determining sentience is important because it raises ethical questions for the non-machine types. “Sentient beings feel pain, have consciousness, and experience emotions,” Castro explained. “From an ethical point of view, we treat living things, especially sentient ones, differently from inanimate objects.”

“They are not just a means to an end,” he continued. “So any sentient being should be treated differently. That’s why we have animal cruelty laws.”

“Again,” he emphasized, “there is no evidence that this has happened. Furthermore, for now, the possibility remains science fiction.”

Of course, Sanchez said, we have no particular reason to think that only biological brains are capable of feeling things or supporting consciousness, but our inability to truly explain human consciousness means we are a long way from being able to know when a machine intelligence is actually associated with conscious experience.

“When a human is scared, after all, there are all sorts of things going on in that person’s brain that have nothing to do with the language centers that produce the sentence ‘I am scared,’” he said. “A computer, similarly, would need to have something going on distinct from linguistic processing to actually mean ‘I’m scared,’ as opposed to just generating that series of letters.”

“In the case of LaMDA,” he concluded, “there is no reason to think that such a process is underway. It is just a language processing program.”