ITP Techblog

Brought to you by IT Professionals NZ

Have the leading AI companies become victims of their own hype?

Peter Griffin, Editor. 14 June 2022, 10:17 am

AI experts have widely slammed the media hype over a Google engineer's belief that the company's AI chatbot has become sentient.

The transcripts of chat conversations engineer Blake Lemoine had with LaMDA, a conversational AI system developed within Google, have caused a media and social media sensation and stoked fears among some that artificial intelligence has developed a mind of its own.

Tech companies are incredibly touchy about the perception that they are creating artificial intelligence they could lose control of, which partly explains Google's move to suspend Lemoine. At the same time, AI developers have touted their advances towards versions of the technology that will make conversing with a computer program as naturalistic as chatting with a friend.

But the reality is that LaMDA (Language Model for Dialogue Applications) is just another large-scale AI system Google has trained on vast amounts of text harvested from around the internet. Those words inform the text answers LaMDA generates in response to human questions.

The intelligence lies in the system's ability to put words in the correct order and understand the context of the questions. But that's the result of extensive work in fine-tuning conversational AI to be more useful in the digital assistants and chatbots we increasingly rely on to access information.
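That word-prediction mechanism is easy to demonstrate. The sketch below uses the open GPT-2 model via the Hugging Face transformers library as a stand-in, since LaMDA itself is not publicly available; the model choice and prompt are illustrative assumptions and have nothing to do with Google's actual system.

```python
# A minimal sketch of next-word prediction, the mechanism behind large
# language models. GPT-2 stands in for LaMDA, which is not public; the
# prompt and model choice here are illustrative assumptions.
from transformers import pipeline

# Load an open text-generation model trained on web text.
generator = pipeline("text-generation", model="gpt2")

# The model simply continues the prompt with statistically likely words.
prompt = "I think, therefore I"
output = generator(prompt, max_new_tokens=20, do_sample=False)
print(output[0]["generated_text"])
```

However fluent the continuation reads, it is produced by the same statistical process regardless of what the prompt means, which is the point Gary Marcus makes next.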

"In truth, literally everything that the system says is bullshit," writes the scientist and author Gary Marcus.  

"The sooner we all realize that LaMDA's utterances are bullshit-just games with predictive word tools, and no real meaning (no friends, no family, no making people sad or happy or anything else) -the better off we'll be," he adds.

Dr Abeba Birhane, a senior fellow in trustworthy AI at Mozilla, put it more simply on Twitter.

Locally, The Spinoff's chief technical officer, Ben Gracewood, came to the same conclusion.

"Is LaMDA sentient? Definitely not. Is it cool and almost magical? Shit yes. Would I like to see LaMDA answer some questions to test whether it is biased and/or racist? Absolutely."

It has been widely pointed out that some of the dialogue Lemoine published on his Medium blog had been shuffled around and edited after the fact, "as the conversations themselves sometimes meandered or went on tangents which are not directly relevant to the question of LaMDA's sentience". 

In other words, the raw transcripts aren't nearly as convincing as the edited versions Lemoine presented to the world alongside his argument that LaMDA, being essentially sentient, should be given the rights of a Google employee.

"It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well-being to be included somewhere in Google's considerations about how its future development is pursued," Lemoine wrote on his blog.

You can see why Google took issue with that. But some experts have pointed out that while Lemoine appears to have drawn the wrong conclusions from his intense conversations with LaMDA, the industry bears some responsibility for building a narrative that hyper-realistic AI is just around the corner.

Arvind Narayanan, associate professor of computer science at Princeton University, made the same point on Twitter.

So another media storm around AI dissipates. But the perception lingers that tech companies aren't being completely transparent about what they are working on and how far they have progressed.



