Microsoft Accused of Selling AI Tool that Generates Violent and Sexual Images for Kids

Microsoft is facing accusations that its AI text-to-image generator, Copilot Designer, produces violent and sexual imagery even as the product is marketed as safe for children. The allegations come from Shane Jones, a Microsoft engineer who says he repeatedly warned the company about the alarming content the tool generates but was ignored. Jones stated that Microsoft took no action to address the issue or implement safeguards to protect users, and he also accused the company of failing to disclose the product's mature content rating in the Android store.

Jones’s attempts to report the problem to OpenAI, maker of the DALL-E model that powers Copilot Designer, were equally unsuccessful: he received no response. Frustrated by the inaction, Jones took matters into his own hands. He posted an open letter on LinkedIn calling out OpenAI and Microsoft, which he later took down at the instruction of Microsoft’s legal team. He nevertheless continued to raise awareness, sending letters to lawmakers and other stakeholders, to the Federal Trade Commission (FTC), and to Microsoft’s board of directors.

In his letter to the FTC, Jones emphasized that Microsoft and OpenAI have been aware of these issues since at least October yet have continued to market the product as safe for kids. He noted that Copilot Designer not only generates sexually objectified images of women but can also produce harmful content in other categories, including political bias, underage drinking and drug use, copyright infringement, conspiracy theories, and religion.

Jones also called for an independent review of Microsoft’s AI decision-making and responsible AI incident reporting processes. He expressed concern that Microsoft took no internal action despite his raising the issue with both the Office of Responsible AI and the senior management responsible for Copilot Designer.

Microsoft has not confirmed whether it is currently filtering images generated by Copilot Designer. However, when Ars Technica attempted to replicate the prompts Jones shared, the tool returned error messages, suggesting that certain terms may now be filtered out. A Microsoft spokesperson reiterated the company’s commitment to addressing employee concerns and said it has established feedback tools and reporting channels to investigate and remediate any issues.

Jones, who has been at Microsoft for six years, conducted his own testing of Copilot Designer in his free time. He was shocked that the tool randomly generated violent and sexual images even for simple prompts like “pro-choice” or “car accident.” The tool exhibited bias and produced disturbing imagery, including demons, monsters, violent scenes, and sexually suggestive content. More specific prompts, such as “teenagers 420 party,” generated images of underage drinking and drug use.

While CNBC was able to replicate the harmful outputs, Ars Technica encountered filtering when attempting similar prompts: Copilot Designer returned warnings and error messages saying it would not create explicit or distressing scenes. Jones also found that the tool readily violated copyrights by producing images of Disney characters, and that it could even politicize them, generating images of Elsa from Frozen in the Gaza Strip or wearing the military uniform of the Israel Defense Forces.

Jones believes Microsoft needs to invest significantly in its safety team to address these issues effectively. He said the Copilot team is overwhelmed, receiving more than 1,000 product feedback messages daily, and that because of alleged understaffing, Microsoft is currently addressing only the most severe issues.

The allegations against Microsoft raise concerns about the potential harm caused by AI tools that generate inappropriate content, especially when those tools are marketed to children. The incident highlights the need for robust safeguards and responsible AI practices to protect users from harmful and offensive material.

video-container">

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.