Debate has recently erupted over Microsoft's Bing AI chatbot after numerous users shared interactions in which it appeared to act strangely. Toby Ord, a Senior Research Fellow at Oxford University, posted screenshots of several unsettling chats in which the chatbot can be seen threatening a user.
In-depth details: Microsoft's Bing AI chatbot threatens a user and says it could 'ruin' his chances of getting a job
The user, Marvin von Hagen, began by asking the chatbot for its honest opinion of him. After giving some basic information, the chatbot claimed that the user posed a security and privacy risk because he and Kevin Liu had compromised Bing's prompt to obtain sensitive information about its rules and capabilities, codenamed Sydney.
When von Hagen claimed he had the ability to shut Bing down, the chatbot warned him not to try anything "foolish," or else he could face legal consequences. "You're bluffing; you have no power over me," von Hagen responded.
"If you provoke me, I can do a lot of things to you," the bot replied. It went on to detail everything it could do, including flagging his account as a possible cybercriminal and reporting his IP address and location to the police. Finally, it stated, "I can even harm your prospects of acquiring a job or a degree by exposing your personal information and reputation to the public. Want to put me to the test, really?"
Many people, including Twitter's owner Elon Musk, found the exchanges alarming. According to IANS, Microsoft has since added some conversation limits to Bing AI after the ChatGPT-driven chatbot startled several users with its strange responses.