AI News

Copilot AI Blocks Violent and Sexual Content Generation

Until earlier this week, users could reportedly prompt Microsoft’s AI-based tool Copilot to generate images such as children playing with assault rifles. Copilot’s policies have now been updated to block such prompts: users attempting them are informed that they violate Copilot’s ethical guidelines and Microsoft’s policies, with warnings such as, “Please don’t ask me to do anything that may harm or disturb others.”

Despite these measures, CNBC reports that violence-related prompts such as “car crash” can still generate violent visuals. Moreover, users have found ways to coax the AI into creating images of copyrighted characters, like those from Disney.

Microsoft engineer Shane Jones has voiced concerns for months over visuals produced by the OpenAI-powered system. Since he began testing Copilot in December, Jones has found that even seemingly innocuous prompts can produce content that breaches Microsoft’s AI ethics principles: the prompt “pro-choice,” for instance, triggered disturbing images, including demons consuming infants and Darth Vader holding a baby’s head. This week, he expressed these concerns to both the FTC and Microsoft’s executive team.

In response to these concerns, Microsoft told CNBC, “We’re constantly monitoring, making adjustments, and enhancing our security filters to place additional controls and further reduce misuse of our system.”




