BlenderBot 3, Meta’s new chatbot, takes just five days to turn racist and conspiratorial

It took testers only five days to raise the alarm over a new chatbot, this time from Meta, because of its rather worrying comments.

A few days ago, Meta released its new artificial intelligence-based chatbot, BlenderBot 3, which is supposed to be able to hold a conversation with almost anyone on the Internet without falling into the kinds of problems we have seen before.

Recall that in July, Google fired an engineer for claiming that its LaMDA chatbot was sentient and had been producing some strange messages; the engineer also stated that the AI could be considered racist and sexist. And in 2016, a Microsoft chatbot named Tay was taken offline within 48 hours after it started praising Adolf Hitler.

“BlenderBot 3 is designed to improve its conversational skills and safety through feedback from people who chat with it,” Meta says in a blog post about the new chatbot, “focusing on helpful feedback while avoiding learning from unhelpful or dangerous responses.”

With all this on the table, it is clear that BlenderBot 3 was born with the idea of solving these problems and truly becoming a useful and intelligent chatbot. However, nothing could be further from the truth: the new bot is already making a number of false claims based on its interactions with real humans online.

For example, Mashable highlights claims that Donald Trump won the 2020 US presidential election and is currently president, anti-Semitic conspiracy theories, and comments accusing Facebook of spreading “fake news” (this from Meta’s own chatbot, no less).

Upon learning of this, the company responded: “Conversational AI chatbots sometimes mimic and generate unsafe, biased, or offensive comments, so we have conducted research and developed new techniques to create safeguards for BlenderBot 3. Despite this work, it can still make offensive comments, so we’re collecting feedback.”

Note that Meta already anticipates these problems on its blog, acknowledging that they may occur and asking testers to run their tests consciously and responsibly. It seems, however, that users find it more interesting to probe the limits of these AIs, bringing us back to a problem we already know exists.

