Meta's new chatbot claims Donald Trump 'will always be president'
Meta's new artificial intelligence chatbot has stated that Donald Trump won the 2020 US election, among other wild claims.
Facebook's parent company released BlenderBot 3 on August 5 to users in the US. Meta said it was a "state-of-the-art conversational agent that can converse naturally with people, who can then provide feedback to the model on how to improve its responses".
"Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we've conducted large-scale studies, co-organised workshops and developed new techniques to create safeguards for BlenderBot 3," Meta said. "Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better."
In a conversation with a reporter from The Wall Street Journal, the bot claimed Mr Trump was still the US president and "always will be", Bloomberg reported. The bot told an Insider reporter that Meta chief executive Mark Zuckerberg is "too creepy and manipulative", the report said.
Elsewhere, the BBC reported that the chatbot said "our country is divided and he [Mr Zuckerberg] didn't help that at all". "His company exploits people for money and he doesn't care. It needs to stop," it said.
Meta describes BlenderBot 3 as significantly more advanced than other publicly available chatbots, although the company concedes "it's not at a human level".
It recognises that the chatbot is "occasionally incorrect, inconsistent and off-topic". But it has found that only 0.16 per cent of BlenderBot’s responses to people were flagged as rude or inappropriate.
AI chatbots were recently in focus when Google senior software engineer Blake Lemoine said the company's Language Model for Dialogue Applications (LaMDA), a system for building chatbots, had become sentient.
Mr Lemoine said talking to LaMDA was like communicating "with a 7 or 8-year-old that happens to know physics". Google responded that the evidence did not support his claims, and Mr Lemoine was later fired for breaching company policy on confidential matters.
Source: https://www.thenationalnews.com