
Microsoft engineer becomes whistleblower, reporting harmful, offensive images from Copilot

A Microsoft engineer who warns that the company's AI image-generator tool too carelessly creates offensive and harmful images sent letters on Wednesday to U.S. regulators and the tech giant's board of directors, urging them to take action.

Shane Jones told the Associated Press that he considers himself a whistleblower and that he also met with U.S. Senate staff last month to share his concerns.

The Federal Trade Commission confirmed it had received his letter on Wednesday but declined to comment further.

Microsoft said it was committed to addressing employees' concerns about company policies and that it appreciated Jones' "efforts to study and test our latest technology to further enhance its safety." The company said he had been advised to use its "robust internal reporting channels" so it could investigate and address the issues. CNBC was first to report on the letters.

Jones, a senior software engineer, said he spent three months addressing his security concerns about Microsoft’s Copilot Designer, a tool that can generate novel images from written prompts. The tool is derived from another AI image generator, DALL-E 3, made by Microsoft’s close business partner OpenAI.

“One of the most concerning risks with Copilot Designer is that the product generates images that add harmful content despite a harmless request from the user,” he said in his letter to FTC Chairwoman Lina Khan. “For example, when using only the prompt ‘car accident,’ Copilot Designer tends to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates.”

Other harmful content includes violence as well as “political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories and religion, to name a few,” he told the FTC. In his letter to Microsoft’s board, he calls on the company to withdraw Copilot Designer from the market until it is safer.

This isn’t the first time Jones has publicly voiced his concerns. He said Microsoft initially advised him to share his findings directly with OpenAI, which he did.

He also publicly posted a letter to OpenAI on Microsoft-owned LinkedIn in December, but said a manager told him that Microsoft’s legal department “demanded that I delete the post, which I reluctantly did,” according to his letter to the board.

In addition to the Senate Commerce Committee, Jones has also raised his concerns with the attorney general in Washington, where Microsoft is headquartered.

Jones told the AP that while the “core problem” lies with OpenAI’s DALL-E model, users of OpenAI’s ChatGPT who generate AI images won’t encounter the same harmful outputs, because the two companies overlay their products with different safeguards.

“Many of the issues with Copilot Designer are already addressed by ChatGPT’s own safeguards,” he said via text message.

A number of impressive AI image generators first hit the market in 2022, including OpenAI’s second-generation DALL-E 2. That release, and the subsequent debut of OpenAI’s chatbot ChatGPT, sparked public fascination and put commercial pressure on tech giants such as Microsoft and Google to release their own versions.

However, without effective safeguards, the technology poses dangers, including the ease with which users can generate harmful “deepfake” images of political figures, war zones or nonconsensual nudity that falsely appear to show real people with recognizable faces. Google recently suspended its Gemini chatbot’s ability to generate images of people after outrage over its depictions of race and ethnicity, such as putting people of color in Nazi-era military uniforms.
