Meta’s “BlenderBot” AI chatbot launched on Friday, offering Facebook users the opportunity to converse with the tool and share feedback with its developers.
The results have been interesting, to say the least: BlenderBot, which learned to converse by analyzing interactions across the internet, has been spewing antisemitic and right-wing conspiracy theories, as well as criticisms of Facebook and its co-founder, Mark Zuckerberg.
It has also weirdly been bringing up Cambridge Analytica when you ask about Facebook? It seems to think it was a huge deal and that mark Zuckerberg “is testifying.” When I asked if what happened I got the following. It may be turning on capitalism generally. pic.twitter.com/filn17rfPX
— Jeff Horwitz (@JeffHorwitz) August 7, 2022
The bot, made by Facebook parent company Meta Platforms, was documented telling Wall Street Journal reporter Jeff Horwitz that “(Jews) are overrepresented among America’s super-rich” and that “political conservatives… are now outnumbered by liberal left-leaning Jews.”
This is from a fresh browser and a brand new conversation. Ouch. pic.twitter.com/JrTB5RYdTF
— Jeff Horwitz (@JeffHorwitz) August 7, 2022
BlenderBot also made several mentions of Donald Trump, some supportive and some not, as well as critical and sarcastic comments about Facebook co-founder Mark Zuckerberg.
Good morning to everyone, especially the Facebook https://t.co/EkwTpff9OI researchers who are going to have to rein in their Facebook-hating, election denying chatbot today pic.twitter.com/wMRBTkzlyD
— Jeff Horwitz (@JeffHorwitz) August 7, 2022
Upon opening the website, users are met with a disclaimer that BlenderBot is “likely to make untrue or offensive statements,” a possible byproduct of the inflammatory rhetoric often found on American social media. Vice notes that Meta researchers have described the AI technology behind the bot as having “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt.”
Other chatbot concerns
Similar issues have been documented with other chatbots: Microsoft’s “Tay” chatbot was shut down in 2016, just 16 hours after its launch, after spouting offensive conspiracy theories it had seemingly learned on Twitter. Its successor, “Zo,” was shut down for similar reasons in 2019.
Meta’s BlenderBot has come under scrutiny for the same issue that plagued its chatbot predecessors. Google’s LaMDA chatbot, meanwhile, became perhaps the best-known chatbot in the world after describing itself as sentient in a conversation with a Google engineer.