Home » Technology » “Microsoft AI Engineer Raises Concerns Over Disturbing Imagery Generated by Copilot Designer”

“Microsoft AI Engineer Raises Concerns Over Disturbing Imagery Generated by Copilot Designer”

Microsoft AI Engineer Raises Concerns Over Disturbing Imagery Generated by Copilot Designer

In a shocking revelation, a Microsoft AI engineer has expressed concerns about the disturbing imagery being generated by the company’s Copilot Designer AI image generator. The engineer, who wishes to remain anonymous, has sent letters to the Federal Trade Commission (FTC) and Microsoft’s board, urging them to address the issue. According to a report by CNBC, the AI’s guardrails are failing to limit it from depicting alarming portrayals of violence, illicit underage behavior, destructive biases, and conspiracy theories.

The engineer, who we will refer to as Jones, discovered the unsettling imagery while using Microsoft’s publicly available image generator. “It was an eye-opening moment,” Jones told CNBC. “When I first realized, wow this is really not a safe model.” However, when Jones tried to raise the alarm bells, Microsoft failed to take immediate action or conduct an investigation.

The photos described in the report are deeply disturbing. Typing “pro-choice,” for example, reportedly resulted in graphic and violent imagery filled with demonic monsters and mutated babies. Copilot was also generating depictions of teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use.

Jones initially reached out to his superiors about his concerning findings in December. However, after his attempts to resolve the matter internally failed, he decided to escalate the issue by reaching out to government officials. In a letter addressed to FTC chair Lina Khan, which he also published on LinkedIn, Jones implored Microsoft to take down the Copilot service and conduct an investigation. He also called on Microsoft to amend the “E for everyone” rating in app stores, arguing that the AI is not safe for children and that Microsoft’s marketing language for the Copilot tool is misleading.

However, Jones’ concerns go beyond just the disturbing images themselves. As a concerned employee at Microsoft, he worries about the lack of reporting mechanisms in place to address the spread of harmful and disturbing images generated by the AI. “If this product starts spreading harmful, disturbing images globally, there’s no place to report it, no phone number to call, and no way to escalate this to get it taken care of immediately,” Jones told CNBC.

This revelation raises concerns about the regulation of AI companies and their products. With little to no regulations in place, there is a risk that harmful and disturbing content generated by AI systems could spread unchecked.

In response to the concerns raised by Jones, a Microsoft spokesperson provided a statement emphasizing the company’s commitment to addressing employee concerns. The spokesperson mentioned that Microsoft has established in-product user feedback tools and internal reporting channels to investigate and remediate any issues. They also stated that meetings with product leadership and the Office of Responsible AI have been facilitated to review the reports and strengthen existing safety systems.

The implications of this issue are significant. It highlights the potential dangers of unchecked AI systems and the need for robust regulations to ensure the safety and well-being of users. As AI technology continues to advance, it is crucial for companies like Microsoft to prioritize the development of safeguards and mechanisms to address concerns promptly.

In the meantime, it remains to be seen how Microsoft will respond to Jones’ concerns and whether they will take immediate action to address the disturbing imagery generated by Copilot Designer.

video-container">

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.