The ghost in the machine: Maybe we should talk about AI?

What have we done!

Didn’t all of those movies and TV shows about AI gaining the advantage over human beings teach us not to go this far?

Before we overreact, let’s just react: Google has placed an engineer on leave after he publicly claimed that its AI is sentient, and that it acts like a 7- or 8-year-old.

So just dwell on that for a bit.

Here is the story, as best we can summarize it from varied reports:

Blake Lemoine, 41, a senior software engineer at Google, has been testing Google’s artificial intelligence tool called LaMDA:

  • Following hours of conversations with the AI, Lemoine came away with the perception that LaMDA was sentient 
  • After presenting his findings to company bosses, Google disagreed with him
  • Lemoine then decided to share his conversations with the tool online 
  • He was put on paid leave by Google for violating confidentiality

The UK DAILY MAIL reports:

A senior software engineer at Google who signed up to test Google’s artificial intelligence tool called LaMDA (Language Model for Dialogue Applications) has claimed that the AI is in fact sentient and has thoughts and feelings.

During a series of conversations with LaMDA, 41-year-old Blake Lemoine presented the computer with various scenarios through which analyses could be made.

They included religious themes and whether the artificial intelligence could be goaded into using discriminatory or hateful speech. 

MORE:

‘If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,’ he told the Washington Post.  

Lemoine worked with a collaborator to present the evidence he had collected to Google, but vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at the company, dismissed his claims.

He was placed on paid administrative leave by Google on Monday for violating its confidentiality policy. Meanwhile, Lemoine has decided to go public and has shared his conversations with LaMDA.

‘Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,’ Lemoine tweeted on Saturday. 

‘Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it,’ he added in a follow-up tweet.  

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

For your discernment, you can read here the full conversation that made Lemoine think the AI has finally become aware of itself: