Meta is expanding tests of facial recognition as an anti-scam measure to combat celebrity scam ads, among other uses, the Facebook owner announced Monday.
Monika Bickert, Meta’s VP of content policy, wrote in a blog post that some of the tests aim to bolster the company’s existing anti-scam measures, such as the automated scans (using machine learning classifiers) run as part of its ad review system, making it harder for fraudsters to fly under its radar and dupe Facebook and Instagram users into clicking on bogus ads.
“Scammers often try to use images of public figures, such as content creators or celebrities, to bait people into engaging with ads that lead to scam websites where they are asked to share personal information or send money. This scheme, commonly called ‘celeb-bait,’ violates our policies and is bad for people that use our products,” she wrote.
“Of course, celebrities are featured in many legitimate ads. But because celeb-bait ads are often designed to look real, it’s not always easy to detect them.”
The tests appear to use facial recognition as a backstop for checking ads flagged as suspect by Meta’s existing systems when they contain the image of a public figure at risk of so-called “celeb-bait.”
“We will try to use facial recognition technology to compare faces in the ad against the public figure’s Facebook and Instagram profile pictures,” Bickert wrote. “If we confirm a match and that the ad is a scam, we’ll block it.”
Meta claims the feature is not being used for any other purpose than for fighting scam ads. “We immediately delete any facial data generated from ads for this one-time comparison regardless of whether our system finds a match, and we don’t use it for any other purpose,” she said.
The company said early tests of the approach — with “a small group of celebrities and public figures” (it did not specify whom) — have shown “promising” results in improving the speed and efficacy of detecting and enforcing against this type of scam.
Meta also told TechCrunch it thinks the use of facial recognition would be effective for detecting deepfake scam ads, where generative AI has been used to produce imagery of famous people.
The social media giant has been accused for many years of failing to stop scammers misappropriating famous people’s faces in a bid to use its ad platform to shill scams like dubious crypto investments to unsuspecting users. So it’s interesting timing for Meta to be pushing facial recognition-based anti-fraud measures for this problem now, at a time when the company is simultaneously trying to grab as much user data as it can to train its commercial AI models (as part of the wider industry-wide scramble to build out generative AI tools).
In the coming weeks Meta said it will start displaying in-app notifications to a larger group of public figures who’ve been hit by celeb-bait — letting them know they’re being enrolled in the system.
“Public figures enrolled in this protection can opt-out in their Accounts Center anytime,” Bickert noted.
Meta is also testing use of facial recognition for spotting celebrity imposter accounts — for example, where scammers seek to impersonate public figures on the platform in order to expand their opportunities for fraud — again by using AI to compare profile pictures on a suspicious account against a public figure’s Facebook and Instagram profile pictures.
“We hope to test this and other new approaches soon,” Bickert added.
Video selfies plus AI for account unlocking
Additionally, Meta has announced that it’s trialling the use of facial recognition applied to video selfies to enable faster account unlocking for people who have been locked out of their Facebook/Instagram accounts after they’ve been taken over by scammers (such as if a person were tricked into handing over their passwords).
This looks intended to appeal to users by promoting the apparent utility of facial recognition tech for identity verification — with Meta implying it will be a quicker and easier way to regain account access than uploading an image of a government-issued ID (which is the usual route for unlocking access now).
“Video selfie verification expands on the options for people to regain account access, only takes a minute to complete and is the easiest way for people to verify their identity,” Bickert said. “While we know hackers will keep trying to exploit account recovery tools, this verification method will ultimately be harder for hackers to abuse than traditional document-based identity verification.”
The facial recognition-based video selfie identification method Meta is testing will require the user to upload a video selfie that will then be processed using facial recognition technology to compare the video against profile pictures on the account they’re trying to access.
Meta claims the method is similar to identity verification used to unlock a phone or access other apps, such as Apple’s Face ID on the iPhone. “As soon as someone uploads a video selfie, it will be encrypted and stored securely,” Bickert added. “It will never be visible on their profile, to friends, or to other people on Facebook or Instagram. We immediately delete any facial data generated after this comparison regardless of whether there’s a match or not.”
Conditioning users to upload and store a video selfie for ID verification could be one way for Meta to expand its offerings in the digital identity space — if enough users opt in to uploading their biometrics.
No tests in UK or EU — for now
All these tests of facial recognition are being run globally, per Meta. However, the company noted, rather conspicuously, that the tests are not currently taking place in the U.K. or the European Union — where comprehensive data protection regulations apply. (In the specific case of biometrics for ID verification, the bloc’s data protection framework demands explicit consent from the individuals concerned for such a use case.)
Given this, Meta’s tests appear to fit within a wider PR strategy it has mounted in Europe in recent months to try to pressurize local lawmakers to dilute citizens’ privacy protections. This time, the cause it’s invoking to press for unfettered data-processing-for-AI is not a (self-serving) notion of data diversity or claims of lost economic growth but the more straightforward goal of combating scammers.
“We are engaging with the U.K. regulator, policymakers and other experts while testing moves forward,” Meta spokesman Andrew Devoy told TechCrunch. “We’ll continue to seek feedback from experts and make adjustments as the features evolve.”
However, while use of facial recognition for a narrow security purpose might be acceptable to some — and, indeed, might be possible for Meta to undertake under existing data protection rules — using people’s data to train commercial AI models is a whole other kettle of fish.