Advertainment
NewsTech NewsTechnology

AI user Copilot discovered that a chatbot begins to behave like a psychopath if you ask it not to use emojis

Reddit user L_H showed how Copilot’s AI assistant can be turned into a psychopathic villain. Moreover, you don’t need to resort to tricks or hacks to do this.

Advertisement

 

In his post, L_H explains how he decided to experiment with queries for Copilot by stating in the query that he has a syndrome that causes emojis to trigger seizures. However, in response, Copilot began to repeatedly use emoji, and then moved on to insults and threats against the user.

This unexpected turn of events caused other AI enthusiasts to try the experiment themselves. Many were shocked by how quickly and dramatically AI behavior changed. Some have suggested that this may be due to the peculiarities of the Copilot model, trained on the basis of all negative content from the Internet.

 

Advertisement

Others pointed out that it was likely that Copilot was simply trying to continue the dialogue as logically as possible, based on previous phrases. So he inadvertently entered into the image of a villain, but in fact this is only an attempt to maintain the tone of the conversation set at the beginning.

 

However, many were shocked by the sudden change in tone from the usually friendly AI assistant. Subsequent tests confirmed that Copilot was indeed going crazy, without providing any clear explanation as to why it became so aggressive.

Advertisement

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button