Microsoft plans to phase out facial analysis tools in favor of ‘responsible AI’

Natasha Crampton, Microsoft’s chief responsible AI officer, at the company’s headquarters in Redmond, Wash., June 20, 2022. Microsoft said Tuesday, June 21, 2022, that it plans to remove automated tools that predict a person’s gender, age and emotional state, and will restrict the use of its facial recognition tool. [Grant Hindsley/The New York Times]

For years, activists and academics have worried that facial analysis software that claims to be able to identify a person’s age, gender and emotional state could be biased, unreliable or invasive – and should not be sold.

Acknowledging some of that criticism, Microsoft said on Tuesday it plans to remove those features from its artificial intelligence service for face detection, analysis and recognition. They will cease to be available for new users this week and will be phased out for existing users within the year.

The changes are part of a push by Microsoft for tighter controls on its artificial intelligence products. After a two-year review, a Microsoft team has developed a ‘Responsible AI Standard’, a 27-page document that sets out requirements for AI systems to ensure they won’t have a harmful impact on society.

The requirements include ensuring that the systems provide “valid solutions to the problems they are designed to solve” and “similar quality of service for identified demographic groups, including marginalized groups”.

Prior to release, technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services or life opportunities are subject to review by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer.

Microsoft grew increasingly concerned about the emotion recognition tool, which labeled someone’s expression as anger, contempt, disgust, fear, happiness, neutrality, sadness, or surprise.

“There is a huge amount of cultural, geographic and individual variation in how we express ourselves,” Crampton said. This has led to reliability issues, as well as larger questions of whether “facial expression is a reliable indicator of your internal emotional state,” she said.

The age and gender analysis tools being phased out – along with other tools for detecting facial attributes such as hair and smiles – could be useful in interpreting visual images for people who are blind or visually impaired, for example, but the company decided it was problematic to make the profiling tools generally available to the public, Crampton said.

In particular, she added, the system’s so-called gender classifier was binary, “and that’s not in line with our values.”

Microsoft will also put new controls on its facial recognition feature, which can be used to perform identity checks or search for a specific person. Uber, for example, uses the software in its app to verify that a driver’s face matches the ID on file for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool will need to request access and explain how they plan to deploy it.

Users will also need to apply for access and explain how they will use other AI systems with potential for abuse, such as Custom Neural Voice. The service can generate a synthetic voice from a sample of someone’s speech, so that authors, for example, can create versions of their voice to read their audiobooks in languages they don’t speak.

Because of the tool’s potential for misuse – to make it appear that people said things they didn’t say – applicants must go through a series of steps to confirm that the use of the voice is authorized, and the recordings include watermarks detectable by Microsoft.

“We’re taking real action to live up to our AI principles,” said Crampton, who worked as an attorney at Microsoft for 11 years and joined the Ethical AI Group in 2018. “It’s going to be a huge journey.”

Microsoft, like other tech companies, has had some stumbles with its artificially intelligent products. In 2016, it released a Twitter chatbot called Tay that was designed to learn “conversational understanding” from the users it interacted with. The bot soon began spouting racist and offensive tweets, and Microsoft had to take it down.

In 2020, researchers found that speech-to-text tools developed by Microsoft, Apple, Google, IBM and Amazon worked less well for Black people. Microsoft’s system was the best of the bunch, but it misidentified 15% of words for white people, compared with 27% for Black people.

The company had collected diverse voice data to train its AI system, but hadn’t realized just how varied language could be. So it hired a sociolinguistics expert from the University of Washington to explain the language varieties Microsoft needed to know about. The expert looked beyond demographics and regional variety to how people speak in formal and informal settings.

“Thinking of race as a determining factor in how someone speaks is actually a little misleading,” Crampton said. “What we learned from consulting the expert is that in fact a huge range of factors affect language variety.”

Crampton said the journey to correct that speech-to-text disparity has helped inform the guidance set out in the company’s new standards.

“This is a critical period for setting standards for AI,” she said, pointing to proposed regulations from Europe setting rules and limits on the use of artificial intelligence. “We hope that we can use our standard to try to contribute to the bright and necessary discussion that needs to take place about the standards technology companies should be held to.”

A heated debate about the potential harms of AI has been going on for years in the tech community, fueled by errors and mistakes that have real-world consequences for people’s lives, like algorithms that determine whether or not people receive social benefits. Dutch tax authorities, for example, mistakenly withdrew childcare allowances from needy families when a flawed algorithm penalized people with dual nationality.

Automated face recognition and analysis software has been particularly controversial. Last year, Facebook shut down its decade-old system for identifying people in photos. The company’s vice president of artificial intelligence cited the “numerous concerns about the place of facial recognition technology in society.”

Several Black men have been wrongly arrested after flawed facial recognition matches. And in 2020, at the same time as the Black Lives Matter protests following the police killing of George Floyd in Minneapolis, Amazon and Microsoft issued moratoriums on the use of their facial recognition products by police in the United States, saying clearer laws on their use were needed.

Since then, Washington and Massachusetts have passed regulations requiring, among other things, judicial oversight of police use of facial recognition tools.

Crampton said Microsoft had considered making its software available to police in states with such laws on the books, but had decided not to, at least for now. She said that could change as the legal landscape changes.

Arvind Narayanan, a professor of computer science at Princeton and a leading AI expert, said companies might be stepping back from technologies that analyze the face because they were “more visceral, as opposed to various other kinds of AI that might be dubious but that we don’t necessarily feel in our bones.”

Companies may also realize that, at least for now, some of these systems aren’t that commercially valuable, he said. Microsoft could not say how many users it has for the facial analysis features it is getting rid of. Narayanan predicted that companies would be less likely to abandon other invasive technologies, such as targeted advertising, which profiles people to choose the best ads to show them, because they were a “cash cow.”

[This article originally appeared in The New York Times.]
