Microsoft on Tuesday said it plans to halt sales of facial recognition technology that predicts a person’s emotions, gender, or age, and to restrict access to other AI services that could subject people to stereotyping, discrimination, or unreasonable denial of services.
The move comes after sharp criticism of technology that has been used by companies to monitor job applicants during interviews. Facial recognition systems are often trained on predominantly white and male databases, so their findings may be biased when used on other cultures or groups.
In a blog post, Microsoft referenced its work with internal and external researchers to develop a standard for using the technology. The post acknowledged that this work uncovered serious problems with the technology’s reliability.
“These efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions,’ and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics,” said Sarah Bird, group product manager in Microsoft’s Azure AI unit.
Companies like Uber currently use Microsoft’s technology to help ensure that drivers behind the wheel match their accounts on file.
Two years ago, Microsoft began a review process to develop a “responsible AI standard” to guide the creation of more equitable and reliable artificial intelligence systems. The company released the results of those efforts in a 27-page document on Tuesday.
In a blog post Tuesday, Bird wrote, “By introducing Limited Access, we add an additional layer of scrutiny to the use and deployment of facial recognition to ensure use of these services aligns with Microsoft’s responsible AI standard and contributes to high-value end-user and societal benefit.”
“We believe that for AI systems to be reliable, they must be appropriate solutions to the problems they are designed to solve,” Microsoft’s Chief Responsible AI Officer Natasha Crampton wrote in another blog post.
Crampton said the company would retire AI capabilities that infer “emotional state and identity attributes such as gender, age, smile, facial hair, hair, and makeup,” as required by its new standard. But the technology will still be incorporated into the company’s accessibility tools, such as Seeing AI, which describes objects for people with visual impairments.
The decision comes as US and EU legislators debate legal and ethical questions surrounding the use of facial recognition technology. Some jurisdictions already place limits on the deployment of the technology. Starting next year, New York City employers will face increased regulation over the use of automated tools to screen candidates. In 2020, Microsoft joined with other tech giants in pledging not to sell its facial recognition systems to police departments unless federal regulation exists.
But academics and experts have for years criticized tools like Microsoft’s Azure Face API that claim to recognize emotions from videos and pictures. Their work has shown that even top-performing facial recognition systems misidentify women and people with darker skin more often.
New customers can no longer use Microsoft’s features for emotion detection and must apply for approval to use other services in Azure’s Face API. Returning customers have one year to obtain approval if they wish to continue using the software.
This story was originally featured on Fortune.com