Ukraine’s deputy prime minister says the technology will help provide transparency about how many Russian soldiers are being killed in the war. Critics say that using facial recognition in war zones is a disaster.
Ukraine is using Clearview AI facial recognition to identify dead Russian soldiers as part of a campaign to combat the Kremlin’s misinformation on the number of war dead.
Sergei Supinsky / AFP via Getty Images
Take a picture of a dead Russian soldier from social media. Upload it to facial recognition software. Find an identity match from a database of billions of social media images. Identify the family and friends of the deceased. Show them what happened to the casualties of Putin’s war on Ukraine.
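Clearview’s system is proprietary, but the steps above map onto a standard face-identification pipeline: detect a face, compute an embedding, then run a nearest-neighbor search over a gallery of known faces. The sketch below illustrates that generic pipeline using the open-source face_recognition library; the gallery arrays, profile list and “soldier.jpg” path are hypothetical placeholders, not anything Clearview exposes.

```python
# A minimal sketch of a generic face-identification pipeline, NOT Clearview's
# proprietary system: embed a query face, then find the closest match in a
# pre-built gallery of embeddings (hypothetically scraped from social profiles).
import numpy as np
import face_recognition  # open-source, dlib-based library

def best_match(query_image_path, gallery_encodings, gallery_profiles, threshold=0.6):
    """Return the profile whose face embedding is closest to the query face."""
    image = face_recognition.load_image_file(query_image_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return None  # no face detected in the query image
    query = encodings[0]
    # Euclidean distance between 128-d face embeddings; lower means more similar.
    distances = face_recognition.face_distance(gallery_encodings, query)
    idx = int(np.argmin(distances))
    # ~0.6 is the library's conventional "same person" cutoff.
    return gallery_profiles[idx] if distances[idx] < threshold else None

# Hypothetical usage: gallery_encodings is a list of 128-d embeddings,
# gallery_profiles a parallel list of profile URLs.
# match = best_match("soldier.jpg", gallery_encodings, gallery_profiles)
```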
It is one of Ukraine’s strategies for informing Russians, who have limited access to non-state-controlled media and information, about the death toll from the invasion. On Wednesday, Mykhailo Fedorov, Ukraine’s deputy prime minister and head of its Ministry of Digital Transformation, confirmed on his Telegram profile that surveillance technology was being used in this way, weeks after New York-based facial recognition provider Clearview AI began offering its services to Ukraine for those purposes. Fedorov did not specify which brand of artificial intelligence was being used, but his department later confirmed to Forbes that it was Clearview AI, which is making its software available for free. They have a good chance of finding matches: in an interview with Reuters earlier this month, Clearview CEO Hoan Ton-That said the company had a store of 10 billion users’ faces sourced from social media, including 2 billion from the Russian Facebook alternative VKontakte. Fedorov wrote in a Telegram post that the ultimate aim was to “dispel the myth of a ‘special operation’ in which there are ‘no soldiers’ and ‘no one dies.’”
Just a month ago, Clearview AI and facial recognition more broadly were under heavy criticism. US lawmakers condemned their use by the federal government, saying the technology disproportionately targets Black, brown and Asian individuals and misidentifies them more often than white individuals. They also called the software an existential threat to privacy. Civil rights organizations such as the American Civil Liberties Union believe the technology should not be used in any setting and have called for outright bans.
The use case in Ukraine, of course, is quite different from those commonly seen in the US, which typically aim to identify criminal suspects. Identifying dead Russian soldiers may be more palatable if the ultimate aim is to let people know that their loved ones died as a result of their leader’s war. Not to mention that the dead don’t have a right to privacy – not under US law, anyway. This is one reason police are allowed to try to unlock a deceased person’s iPhone or other smart device by holding it up to their face (even if they don’t have much success, thanks to liveness detection). But should privacy advocates worry about the use of facial recognition in a time of war, when it could legitimize the technology for use in other scenarios where people’s privacy is put at risk?
As for Ukraine, it believes there is a need to identify dead Russian soldiers because there is so much controversy over the number of military dead. Last week, a Russian tabloid published, and later removed, a report claiming nearly 10,000 Russian soldiers had been killed since the invasion began, far more than previously acknowledged. The tabloid later claimed it had been hacked and that the figures were not accurate. Ukraine believes that Russia is lying to its citizens about the death toll.
But Albert Fox Cahn, founder of the Surveillance Technology Oversight Project, said the introduction of facial recognition into war could be disastrous, even if Ukraine is using it to tell the truth to Russian citizens. “It is a human rights catastrophe in the making. When facial recognition goes awry in peacetime, people are wrongly arrested. When facial recognition goes awry in a war zone, innocent people are shot,” he told Forbes.
“I am horrified to think how many refugees will be wrongfully stopped and shot at checkpoints because of a facial recognition error. We must support the people of Ukraine with the air defenses and military equipment they demand, not by turning this heartbreaking war into a place for product promotion.”
Facial recognition has also been shown to misfire, matching images of people’s faces to the wrong identities. In the US, this has happened at least three times to Black individuals who were wrongfully arrested because their faces were incorrectly matched to footage from surveillance cameras.
As Cahn put it, “When facial recognition inevitably misidentifies the dead, it will mean heartbreak for the living.”
When asked about those concerns over the use of its technology, Clearview AI CEO Hoan Ton-That said, “Battle zones can be dangerous when there is no way to tell enemy combatants apart from civilians. Facial recognition technology can help reduce uncertainty and increase safety in these situations.”
He said US-government-funded tests showed that Clearview could “pick the right face from a lineup of more than 12 million photos at an accuracy rate of 99.85%.” That accuracy, he said, would “prevent misidentification in the field.”
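Taken at face value, even that headline figure still implies errors at scale. As a back-of-the-envelope illustration, assuming the 99.85% rank-one rate applies uniformly to every search (a simplification, and the search volume here is hypothetical):

```python
# Back-of-the-envelope arithmetic, assuming the quoted 99.85% rank-one
# accuracy applies uniformly to every search (a simplification).
accuracy = 0.9985
searches = 10_000  # hypothetical number of identification attempts
expected_errors = searches * (1 - accuracy)
print(f"~{expected_errors:.0f} wrong top matches per {searches:,} searches")
# -> ~15 wrong top matches per 10,000 searches
```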
“Ukrainian officials who have gained access to Clearview AI have expressed their enthusiasm, and we look forward to hearing more from them. We are making sure that everyone with access to the tool is trained to use it safely and responsibly,” he said.
Whatever the morals at play, the use of facial recognition in this conflict is notable for its role as a tool in the propaganda war, or, as Ukraine would call it, the war for truth. Even Fedorov didn’t imagine he would be using the technology this way before the invasion. As he wrote in his Telegram post: “We have all changed. We started doing things we couldn’t even have imagined a month ago.”