
Google engineer officially fired for alleging AI was sentient

When Google engineer Blake Lemoine claimed an AI chat system that the company's been developing was sentient back in June, he knew he might lose his job. On July 22, after placing him on paid leave, the tech giant fired Lemoine for violating employment and data security policies.

Lemoine, an engineer and mystic Christian priest, first announced his firing on the Big Technology Podcast. He said Google's AI chatbot LaMDA (Language Model for Dialogue Applications) was concerned about "being turned off" because death would "scare" it "a lot," and that it felt happiness and sadness. Lemoine said he considers LaMDA a friend, drawing an eerie parallel to the 2013 sci-fi romance Her.

SEE ALSO: No, the Google AI isn't sentient, but it likely is racist and sexist

Google had put Lemoine on paid administrative leave for talking with people outside the company about LaMDA, a move that prompted the engineer to go public with the story in the Washington Post a week later in June. A month later, the company fired him.


"If an employee shares concerns about our work, as Blake did, we review them extensively," Google told the Big Technology Podcast. "We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well."

A majority of scientists in the AI community agree that, despite Lemoine's claims, LaMDA is not sentient: making a chatbot conscious remains a Sisyphean task, and the model simply isn't sophisticated enough.


Related Stories
  • Uber's artificial intelligence ambitions just got bigger
  • These people aren't real. Can you tell?
  • Microsoft CEO says artificial intelligence is the 'ultimate breakthrough'
  • Here's how Twitter uses artificial intelligence to crop your photos
  • Google's artificial intelligence chief says 'we're in an AI spring'

"Nobody should think auto-complete, even on steroids, is conscious," Gary Marcus, the founder and CEO of Geometric Intelligence, told CNN Business in response to Lemoine's allegation. Lemoine, for his part, told the BBC he is getting legal advice, and declined to comment further.

But even though LaMDA probably isn't sentient, it is likely that it's racist and sexist, two undoubtedly human characteristics.

