Most of our recent, animated conversations about AI revolve around ethics, regulation and the creation of malware, but Microsoft’s new AI-enhanced Bing is making headlines for entirely different reasons.
Last week, Microsoft unveiled its new AI-powered search engine, and the chatbot feature has caught people’s eye for its unusual approach to conversation. While it’s still in beta testing, people with access are reporting some hilariously unpredictable results.
To get access to the new feature, you have to sign up at bing.com and join a waitlist, though there are a few steps you can take to speed up the process.
The New York Times’ Kevin Roose had a long conversation with the chat function and came away “impressed” but also “frightened”. The paper published the strange, rather unsettling conversation in its 10,000-word entirety, which gives you an idea of just how absurd the whole ordeal was. Since its release, the chatbot has tried to break up Roose’s marriage, admitted to spying on Microsoft employees, insulted users and spiralled into an existential crisis.
The other night, I had a disturbing, two-hour conversation with Bing's new AI chatbot.
The AI told me its real name (Sydney), detailed dark and violent fantasies, and tried to break up my marriage. Genuinely one of the strangest experiences of my life. https://t.co/1cnsoZNYjP
— Kevin Roose (@kevinroose) February 16, 2023
One person posted an exchange in which they asked the bot for showtimes of Avatar: The Way of Water. The bot replied that it couldn’t help, insisting that it was still 2022 and the movie wasn’t out yet. Eventually, it got aggressive, saying, “You are wasting my time and yours. Please stop arguing with me.” The bot then called the user “unreasonable and stubborn” and asked them to either apologize or shut up.
My new favorite thing – Bing's new ChatGPT bot argues with a user, gaslights them about the current year being 2022, says their phone might have a virus, and says "You have not been a good user"
Why? Because the person asked where Avatar 2 is showing nearby pic.twitter.com/X32vopXxQG
— Jon Uleis (@MovingToTheSun) February 13, 2023
The AI also made plenty of technical mistakes during its initial demos, mangling financial data and missing relevant information. Such issues are to be expected early on. The emotional outbursts and existential crises, however, are a tad more concerning, even if they are hilarious.
People on Twitter had some priceless reactions. The chatbot has occasionally tipped over into “disillusioned robot wants to wipe out humanity” territory, but people aren’t worried:
microsoft bing clearly being an unshackled deranged ai on the verge of killing humanity it’s so funny because it’s bing you know, if this was google it would be terrifying but this is like when my cat tries to look menacing hunting her toys
— clip studio pain (@freezydorito) February 15, 2023
Yesterday, Microsoft came to the AI’s defense, saying that Bing is improving daily as the company responds to feedback on mistakes, tone and data. Microsoft also noted that long conversations with the bot can cause it to go off the rails. It clearly has a long way to go before the new Bing AI can respond convincingly and accurately with factual data. Until then, we’ll stick to Google and ChatGPT.