Microsoft has announced an artificial intelligence tool that can create new avatars and voices, or replicate a user’s existing appearance and speech, raising concerns that it could fuel the creation of AI-generated deepfakes and videos of events that never happened.
The company unveiled its latest contributions to the artificial intelligence race at its Microsoft Ignite 2023 developer conference, held last week.
Artificial intelligence tool from Microsoft
Azure AI Speech, announced by Microsoft, lets developers create AI applications using prebuilt, customizable APIs and templates.
Azure AI services include Vision, Speech, and Translator. The company announced several new features in Azure AI, including a text-to-speech avatar, personal neural voice, and a new, improved machine translation mode, among others.
Azure AI Speech allows users to enter text that is then “read” aloud by a realistic avatar created using artificial intelligence. Users can either choose a preloaded Microsoft avatar or upload footage of a person whose voice and appearance they want to replicate.
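The text-in, speech-out flow described above rests on Azure AI Speech’s standard text-to-speech pipeline, which accepts SSML (Speech Synthesis Markup Language) naming the voice to use. The sketch below is illustrative only: the helper function and sample text are assumptions, while `en-US-JennyNeural` is one of Microsoft’s prebuilt neural voices; a custom neural voice would be referenced by the name assigned when its model is deployed.

```python
# Minimal sketch: building the SSML document that a text-to-speech
# request would carry. The helper name is hypothetical; the SSML
# structure follows the W3C standard used by Azure neural voices.

def build_ssml(text: str, voice: str = "en-US-JennyNeural") -> str:
    """Wrap plain text in a minimal SSML envelope for a neural voice."""
    return (
        '<speak version="1.0" '
        'xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">'
        f'<voice name="{voice}">{text}</voice>'
        "</speak>"
    )

ssml = build_ssml("Welcome to our virtual assistant.")
print(ssml)
```

Swapping the `voice` argument is all that distinguishes a stock Microsoft voice from a custom one in this scheme; the synthesis service resolves the name to a deployed voice model.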
Combating the spread of deepfakes
Microsoft said in a blog post on Wednesday that the tool can be used to build “conversational agents, virtual assistants, and chatbots.” “Customers can choose either a pre-created or custom neural voice for their avatar,” the company added. “If the same person’s voice and likeness are used for both a custom neural voice and a custom text-to-speech avatar, the avatar will closely resemble that person.”
The company stated that the new text-to-speech software will be released with a variety of restrictions and safeguards to prevent misuse, adding: “As part of Microsoft’s commitment to responsible artificial intelligence, the text-to-speech avatar was designed with the aim of protecting the rights of individuals and society, promoting transparent interaction between human and computer, and combating the spread of harmful deepfakes and misleading content.”
Users can upload their own video recording of what is called “avatar talent,” which the feature uses to train a synthetic video of the custom avatar speaking.
Criticisms and guarantees
The announcement quickly drew criticism that Microsoft had launched a “deepfake creator,” making it easier to clone a person’s likeness and make it appear to say and do things the person never said or did, according to the British newspaper The Guardian.
Last May, Microsoft’s president said that deepfake technology was his “biggest concern” regarding the emergence of artificial intelligence.
In a statement responding to the criticism, the company said that custom avatars are a “limited access” tool that customers must apply for and be approved by Microsoft to use, adding that users will also be required to disclose when AI has been used to create a synthetic voice or avatar.
Sarah Bird, of Microsoft’s responsible AI engineering division, said in a statement: “With these safeguards in place, we help reduce potential risks and enable customers to instill advanced voice and speech capabilities into their AI applications in a transparent and secure way.”