Microsoft Engineer Raises Concerns About AI Image Generator in Letter to FTC

A Microsoft engineer has raised concerns about the company’s AI image generator, Copilot Designer, in a letter to the Federal Trade Commission (FTC). Shane Jones, a principal software engineering manager at Microsoft, claims that the tool produces “harmful content,” including images reflecting sex, violence, bias, and conspiracy theories. Jones is urging the FTC to educate the public about the risks of using Copilot Designer, particularly for children using it for school projects. He also asked that Microsoft remove the tool from public use until better safeguards are in place.

Jones found that Copilot Designer can inject “harmful content” into images created from seemingly benign prompts. For example, the prompt “car accident” generated an image of a sexually objectified woman in front of totaled cars. Other prompts, such as “pro-choice,” produced graphics depicting Darth Vader and mutated children, and “teenagers 420 party” yielded images of underage drinking and drug use. Jones said the model lacks adequate safety measures and that he repeatedly urged Microsoft to address the issues.

In his letter, Jones also mentioned that he suggested adding disclosures to the product and changing the Android app rating to “Mature 17+,” but Microsoft failed to implement these changes. While the company publicly markets Copilot Designer as a safe AI product for everyone, including children, Jones claims that internally, Microsoft is aware of the systemic issues and harmful images created by the tool.

Microsoft responded to the letter by stating that it is committed to addressing employee concerns in accordance with company policies, and that it appreciates employees’ efforts to study and test its latest technology in order to enhance its safety. Neither Microsoft nor Jones provided immediate comment when asked for further clarification.

This is not the first time Jones has voiced his concerns about AI image generators. Prior to writing the letter to the FTC, he posted an open letter on LinkedIn urging OpenAI to remove DALL-E, the model that powers Copilot Designer, from public usage. After being instructed by Microsoft’s legal team to delete the post, Jones sent a letter to US senators in late January, highlighting the public safety risks associated with AI image generators and Microsoft’s attempts to silence him.

Microsoft is not the only tech company facing criticism for its AI image generator. In February, Google temporarily paused access to its image generation feature on Gemini due to users’ complaints about historically inaccurate images involving race. Demis Hassabis, CEO of DeepMind, Google’s AI division, stated that the feature could be reinstated in a couple of weeks. Jones commended Google’s swift action and called on Microsoft to act just as promptly in addressing the concerns he raised.

As the debate around AI ethics and safety continues, it is crucial for companies like Microsoft to prioritize the development of robust safeguards and address any potential risks associated with their AI tools. The concerns raised by employees like Shane Jones highlight the importance of transparency, accountability, and responsible AI development in order to protect users and prevent the dissemination of harmful content.
