Three years after Meta shut down Facebook's facial recognition software amid a wave of criticism from privacy advocates and regulators, the social media giant said Tuesday it is testing the service again as part of a crackdown on "celebrity bait" scams.
Meta will involve about 50,000 public figures in a test that will automatically compare their Facebook profile photos with images used in suspected scam ads. If the images match and Meta believes the ads are scams, they will be blocked.
Celebrities included in the test will be notified and can opt out if they do not wish to participate, the company said.
The company plans to roll out the test globally starting in December, except in some large jurisdictions where it lacks regulatory approval, such as Britain, the European Union, South Korea and the US states of Texas and Illinois, it added.
Monika Bickert, Meta’s vice president of content policy, said in a briefing with reporters that the company is focusing on public figures whose likenesses have been used in fraudulent ads.
“The idea behind it is: We offer them as much protection as possible. They can opt out if they want, but we want to provide them with that protection and make it easy for them,” Bickert said.
The test shows the company trying to strike a balance: using a potentially invasive technology to address regulators' concerns about the rising number of scams, while minimizing the complaints about the handling of user data that have dogged social media companies for years.
When Meta shut down its facial recognition system in 2021 and deleted the facial data of a billion users, it cited “growing societal concerns.” In August of this year, the company was ordered to pay Texas $1.4 billion to settle a state lawsuit that accused it of illegally collecting biometric data.
At the same time, Meta is facing lawsuits accusing the company of not doing enough to stop celebrity-baiting scams. Images of famous people, often generated by artificial intelligence, are used to trick users into investing money in non-existent investment programs.
Under the new process, the company said it will immediately delete any facial data generated by comparisons with suspected scam ads, regardless of whether fraud is detected.
The tool underwent Meta's internal "robust privacy and risk review process" and was discussed with regulators, policymakers and outside privacy experts before testing began, Bickert said.
Meta also plans to test using facial recognition data to help non-celebrity users of Facebook and another of its platforms, Instagram, regain access to accounts that have been taken over by hackers or locked because of a forgotten password.