Deepfake Watch 31
The Dangerous Waters Of Facial Recognition
September 20, 2024
Facial recognition tech (FRT) has featured in countless sci-fi novels and films as a hallmark of dystopian regimes. It is now rapidly creeping into our daily lives.
India currently has around 170 FRT systems installed, as per the Internet Freedom Foundation. And the risk-reward ratio does not look good for us.
Meanwhile, California’s governor just signed some of the strictest US laws on political deepfakes, and a man is suing the state over them.
Opt in to receive the newsletter every Friday.
Being Watched By Computers
The late American sci-fi author Philip K. Dick predicted various forms of FRT in many of his works. For those interested in the philosophical ideas around power, control and advanced surveillance, I’d highly recommend The Minority Report and A Scanner Darkly (both have brilliant film adaptations as well).
Through these stories, Dick argues that a society under complete surveillance, with the technology to recognise faces in real time, could suffer massive psychological consequences. He also argues that such a society would be split in two: the ultra-rich and powerful would hold the reins of the technology, and would thus be free from its implications, while the rest would be at their mercy, under complete control.
Larry Ellison, the billionaire founder of American tech giant Oracle, is totally into FRT.
During Oracle’s financial analysts meeting earlier this month, a chirpy Ellison shared his views on a future where AI-led surveillance is commonplace, and it’s as dystopian as it gets.
"Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behaviour because we are constantly recording and reporting everything that's going on," Ellison said at the meeting.
Philip K Dick is likely doing backflips in his grave.
Like most big tech companies, Oracle is pushing AI strongly, and for good reason. The company is heavily invested in the hardware and software needed to support advanced AI applications and workloads. A mass surveillance system running on AI applications powered by Oracle sounds rather good for business. But how close are we to adopting FRT at such a scale?
Earlier this year, the Internet Freedom Foundation launched an FRT tracker called Panoptic, which lists every instance of FRT either deployed by the state or in the process of adoption.
According to the tracker, 170 FRT systems are currently installed in India, with Maharashtra leading the list at 19 such systems.
There is already ample surveillance in the country, with CCTVs in railway stations, airports, malls and other public places, but most of them are manually supervised. What would be the implications of wide-scale FRT integration?
Experts and activists have pointed out that FRT frequently produces both false negatives and false positives when identifying people. In 2018, computer scientist and activist Joy Buolamwini co-authored a landmark research paper that highlighted the strong prevalence of gender and racial bias in commercial FRT systems.
The study revealed massive error rates for people with dark skin and for women. Dark-skinned women fared worst, with error rates of 34.7%, compared to 0.8% for light-skinned men. Folding such FRT into surveillance systems could lead to high rates of wrongful arrests and incarcerations.
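To make those numbers concrete, here is a minimal sketch of how such a bias audit works: error rates are computed separately for each demographic subgroup rather than averaged over the whole test set. The records and labels below are hypothetical illustrations, not Buolamwini’s data.

```python
from collections import defaultdict

# Hypothetical audit records: (subgroup, true label, predicted label)
# for a binary gender-classification task like the ones the paper audited.
records = [
    ("dark-skinned female", "female", "male"),
    ("dark-skinned female", "female", "female"),
    ("light-skinned male", "male", "male"),
    ("light-skinned male", "male", "male"),
]

totals = defaultdict(int)
errors = defaultdict(int)

for group, truth, prediction in records:
    totals[group] += 1
    if prediction != truth:
        errors[group] += 1

# Per-group rates expose a gap that a single overall accuracy would hide.
for group, total in totals.items():
    print(f"{group}: {errors[group] / total:.1%} error rate over {total} samples")
```

An aggregate accuracy figure can look respectable while one subgroup quietly absorbs most of the errors, which is exactly the disparity the 2018 audit exposed.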
Furthermore, AI-led surveillance in the hands of an authoritarian system could spell disaster for its people, as it would enable an extreme form of control. China currently leads in integrating FRT into its mass-scale surveillance system. While crime rates are low, there is absolutely no space for dissent in the country, whether offline or online, and any abuse of power by the government goes thoroughly unchecked.
Another problem posed by FRT systems is the risk of data breaches. Because such systems are built on databases of digital signatures of individual faces, a breach could put those individuals at lasting risk: unlike a password, a face cannot be changed.
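To see why these databases are so sensitive, here is a minimal sketch of how a typical FRT pipeline matches a face against one, assuming faces are stored as fixed-length embedding vectors (the “digital signatures” above). The identities, vectors and threshold here are all hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical enrolled database: identity -> embedding from some face model.
database = {
    "person_a": np.random.rand(128),
    "person_b": np.random.rand(128),
}

def identify(probe: np.ndarray, threshold: float = 0.9):
    """Return the best-matching enrolled identity, or None if below threshold."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Whoever steals `database` can match these people against any new footage;
# unlike a leaked password, a leaked face template cannot be rotated.
print(identify(np.random.rand(128)))
```

The design choice that makes FRT fast, a precomputed database of face templates, is the same one that makes a breach permanent: the stolen templates keep working against any future camera feed.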
In May, one such case came to light after a massive breach of data linked to an FRT system being used in bars and clubs across Australia, run by a company named Outabox. The breached data allegedly included “facial recognition biometric, driver licence [sic] scan, signature, club membership data, address, birthday, phone number, club visit timestamps, slot machine usage”, according to a report by Wired.
US-based facial recognition company Clearview AI, which primarily provides FRT to law enforcement and government agencies, has been fined 30.5 million euros by the Dutch Data Protection Authority (Dutch DPA) for building an illegal facial database, which the regulator alleges was assembled by scraping people’s social media accounts without their consent or knowledge.
With authorities and tech companies so eager to adopt FRT-led surveillance, public discourse currently lacks a proper analysis of the risks of implementing it at mass scale.
California’s Strict Ban On Political Deepfakes Faces Lawsuit
California Governor Gavin Newsom recently signed three laws to tackle election-related deepfakes ahead of the US presidential election in November.
These laws require social media platforms to label AI-generated content or remove it entirely, and prevent users from sharing misleading political deepfakes within 120 days of an election.
Shortly after, an individual named Christopher Kohls, who goes by the moniker Mr Reagan on X, filed a lawsuit against the state, arguing that the new laws would curtail freedom of speech by allowing people to take legal action against any content they dislike.
Kohls became known for making parody videos of Democratic presidential candidate Kamala Harris using voice clones. One of his videos was shared by X’s owner, Elon Musk, himself, which started a spat between Musk and Newsom and eventually led to the recently signed laws.
Musk, who has been consistently sharing misinformation on his platform, criticised this move by comparing Newsom to the comic book villain Joker.
Latest In AI And Deepfakes
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
↗️ Was this forwarded to you? Subscribe Now
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth instead of obscuring it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers