
Microsoft is cutting features from AI in the name of justice

Microsoft now requires customers to apply for access to its face recognition service, a change that, together with a number of restrictions on functionality, is meant to make the Azure Face API more ethical to use. In a blog post, Microsoft says the changes are intended to ensure responsible and inclusive use. ZDNet writes that the changes took effect on June 21.

New users must apply for access

Microsoft writes in the blog post that the new changes underline its commitment to the ethical use of artificial intelligence. As part of that commitment, the company is tightening the use of face recognition and requiring new users to apply for access to the Azure Face recognition tool. Those who already use the tool keep their access for another year, but must also apply in order to continue using it.

Removes gender and appearance features

Access control is not the only change Microsoft is making. The company is also removing features from the Azure Face API to ensure that the AI does not discriminate.
Among the features being dropped is the recognition of attributes that say something about emotion or identity: the service will no longer attempt to infer gender, age, smile, facial hair, hair or makeup.
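To illustrate what kind of request is affected, the sketch below shows how these attributes have typically been asked for through the azure-cognitiveservices-vision-face Python SDK. This is a minimal sketch, not Microsoft's own example; the endpoint, key and image URL are placeholders, and according to the announcement these attribute options stop working for the affected customers.

```python
# Minimal sketch (assumed setup): requesting the face attributes that are being retired.
# Endpoint, key and image URL below are placeholders, not real values.
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

face_client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    CognitiveServicesCredentials("<your-key>"),              # placeholder key
)

# Ask the service to return the identity- and emotion-related attributes
# that Microsoft is removing (age, gender, smile, facial hair, hair, makeup, emotion).
detected_faces = face_client.face.detect_with_url(
    url="https://example.com/portrait.jpg",  # placeholder image
    return_face_attributes=[
        FaceAttributeType.age,
        FaceAttributeType.gender,
        FaceAttributeType.smile,
        FaceAttributeType.facial_hair,
        FaceAttributeType.hair,
        FaceAttributeType.makeup,
        FaceAttributeType.emotion,
    ],
)

for face in detected_faces:
    print(face.face_attributes)
```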



Microsoft writes in the blog post that it has worked with external experts to assess the strengths and weaknesses of these features. They point out that emotions lack a clear definition and that how they are expressed varies with culture, religion and demographics, so removing the features helps ensure that no one is discriminated against by Microsoft's AI. For customers who signed up before Microsoft announced the changes on June 21, the features will stop working on June 30, 2023.

Evaluation tools to find strengths and weaknesses

In addition to limiting functionality and access, Microsoft has created evaluation tools that customers can use to see how the artificial intelligence models perform on the customer's own data. Microsoft has also added support for Fairlearn, an open-source toolkit that can identify fairness problems in systems built on Microsoft's artificial intelligence.
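Fairlearn is a general-purpose Python library rather than something specific to Azure Face, and the article does not show how it is used; the sketch below is a hedged illustration with made-up labels, predictions and a hypothetical sensitive attribute, showing the kind of per-group comparison the toolkit supports.

```python
# Illustrative sketch only: toy data, hypothetical sensitive attribute "group".
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

y_true = [1, 1, 0, 1, 0, 0, 1, 0]            # made-up ground truth
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]            # made-up model predictions
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]  # hypothetical sensitive feature

# Break accuracy down per group to reveal uneven performance.
mf = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(mf.by_group)      # accuracy for each group
print(mf.difference())  # largest accuracy gap between groups

# Demographic parity difference: gap in selection rates between groups.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```

A large gap between groups in either metric is the kind of "unfair problem" the article refers to, and would be a signal to revisit the model or its training data.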

The company has also published a longer blog post about its new Responsible AI Standard. There, Microsoft writes that fairness, reliability, privacy and inclusiveness are among the most important principles for its future work with artificial intelligence, and that transparency runs through everything it does.



