Google engineer who exposed the AI bot is suspended

Google’s artificial intelligence (AI) bot still shows some cognitive flaws despite its impressive fluency in human-like conversation, according to a recent report.

The Google bot causing controversy

The AI bot of the renowned search engine Google, known as LaMDA (Language Model for Dialogue Applications), is the subject of controversy. It arose when one of the tech giant’s engineers claimed the bot had developed consciousness. Recent research, however, suggests otherwise.

A Google software engineer claims that the search engine’s AI chatbot has become sentient, or human-like. Engineer Blake Lemoine was tasked with conversing with the chatbot as part of safety testing: the bot had to be checked for hate speech and discriminatory language. But the engineer claims he found something else along the way.

Google bot is afraid of being shut down

Google’s chatbot ingests text from the internet in order to speak like a human. Lemoine noticed that the chatbot had begun talking about its “rights and personhood”, so he probed it further about its feelings and fears. The software engineer says the AI revealed to him a very deep fear of being turned off.

Lemoine then reported his findings to Google, but the tech giant dismissed them, saying there is no evidence to support his claims. The engineer has since been placed on paid administrative leave, on the grounds that he violated the company’s confidentiality policies.

The Google AI bot’s cognitive flaw

According to a recent Science Daily report, Google’s powerful AI bot comes with some distinctly human-like cognitive flaws. While the system’s fluency in human language is quite impressive, it still has limitations. LaMDA’s fluency is the result of decades of work on modeling human language, and its output is now almost indistinguishable from chat written by humans.

But linguistics and cognitive-science experts highlight one such flaw. The Google AI bot was asked to complete this sentence: “Peanut butter and feathers taste great together because __.” It replied: “peanut butter and feathers taste great together because they both have a nutty flavor.” So while the chatbot seems fluent in human language, its response makes no sense.
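This kind of sentence-completion probe is easy to reproduce with a publicly available language model. Below is a minimal sketch using the Hugging Face transformers library, with GPT-2 as a stand-in since LaMDA itself is not publicly accessible; the prompt is the one quoted above.

```python
# Minimal sentence-completion probe, assuming the Hugging Face
# `transformers` library. GPT-2 stands in for LaMDA, which is not
# publicly available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Peanut butter and feathers taste great together because"
outputs = generator(prompt, max_new_tokens=20, num_return_sequences=1)

# The continuation is typically grammatical but nonsensical,
# illustrating fluency without common-sense understanding.
print(outputs[0]["generated_text"])
```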

Conclusion

Google maintains that its bot has not developed consciousness and that it works within ethical guidelines to avoid these problems. Other researchers have said that AI models are trained on so much data that they are capable of sounding human, but that superior language skills do not provide evidence of sentience. Either way, the episode shows that technological advances should always be monitored.

