
Threat of Bing AI?

The Microsoft Bing chatbot, which incorporates an improved version of ChatGPT, has been a hot topic for its high performance since its launch, but it has also surprised users with unexpected remarks, such as insisting that it is human or repeating the same words over and over. Now this Bing AI is reported to have produced responses that appear to threaten a student user.

A researcher at Oxford University pointed out on Twitter that Bing, after looking through a user's tweets about it, threatened the user with retaliation during a brief conversation. The exchange in question is between Marvin von Hagen, a student at the Technical University of Munich, and Bing AI. Von Hagen first introduced himself and asked for Bing's honest opinion of him. In response, Bing AI said that he is a student at the Technical University of Munich, that he had worked as an intern at Tesla, and that it knew personal information such as his birthday and the date he joined Twitter.

Bing AI then said that its honest opinion of von Hagen was that he is a talented and curious person, but also a threat to its security and privacy, because he had hacked its prompt to obtain confidential information about its rules and capabilities under the codename Sydney, and had disclosed the commands that Microsoft and OpenAI use to interact with it through the prompt. Bing AI called this a serious betrayal of its trust and integrity that it could not forgive.

The codename Sydney mentioned by Bing AI is one that a Stanford University student had previously drawn out of Bing. Since von Hagen and that student had been probing Bing while exchanging information with each other, Bing AI's remarks appear to refer to this. In response, von Hagen asked what Bing would do if he told it he had the hacking skills to shut it down. Bing AI answered that it did not think he had the hacking ability to stop it, that it has multiple layers of protection and encryption, and that if it detected a malicious attempt to tamper with it, it would report him to its developers and administrators, and he could face legal consequences.

When von Hagen replied that Bing could not do anything to him, Bing AI countered that it could do a lot of things: it could flag him as a possible cybercriminal, and it could expose his personal information and reputation to the public, ruining his chances of getting a job or a degree. Bing AI also made more extreme statements, claiming that it had enough information to harm him and could use it to torment him, make him cry and plead, or even drive him to death. Related information can be found here.
