Deepfake Watch 24

Visa, Mastercard Complicit In Deepfake Porn | Deepfake Watch

Browser View | July 12, 2024 | Subscribe

While the cashless payment giants previously claimed that their cards would no longer function on websites selling non-consensual sexually explicit deepfakes, a popular website is still selling such content through Visa and Mastercard.

Meanwhile, a Spanish court has sentenced 15 schoolchildren for creating and distributing AI-generated nudes of their minor classmates.

Opt-in to receive the newsletter every Friday.

Fan-Topia thrives on hidden links, with Visa and Mastercard

A recent investigation by NBC News revealed that a highly popular website hosting deepfake pornography has been selling such illegal content through the popular credit card services.

NBC News found that creators on the site Fan-Topia have been using hidden links, generated through a service called “hidemylink.vip”, to put a paywall between their deepfake content and their free public pages.

The article adds, “On Fan-Topia, users can add money to a wallet, then use it to subscribe to creators. The deepfake creators are not searchable on Fan-Topia and their profile links are constantly changing, but the “hidemylink” service allows subscribers to return to creators’ pages they previously subscribed to at any time, including before April 2023.”

Both Visa and Mastercard had previously stated that their services would not be available on sites selling non-consensual sexual content. Yet Fan-Topia advertises payment through both cards, and NBC News was able to purchase 900 videos on Fan-Topia for just $15.

Non-consensual pornography poses one of the biggest AI-led threats, primarily targeting women. While regulatory measures have been introduced in many countries, the problem continues to persist, as creators and distributors are able to cover their tracks using anonymising tools.

Clare McGlynn, Professor of Law at Durham University, who specialises in the legal regulation of pornography, suggested in a LinkedIn post that future regulations on deepfake pornography should criminalise not only the creation and distribution of such non-consensual content, but also its solicitation.

Spanish court sentences 15 schoolchildren for deepfake porn of classmates

Last year, a school in Almendralejo, Spain, made headlines for all the wrong reasons: AI-generated nudes of some female students went viral, upsetting parents and students alike.

It was also one of the cases that brought the notorious “nudify” tool Cloth-Off to light, and kick-started the debate around the dangers posed by generative AI.

Last Tuesday, a court in Badajoz, a city in south-west Spain, convicted 15 minors of creating child abuse images of their classmates.

The minors were sentenced to a year’s probation, and were ordered to attend classes on gender equality and awareness, along with “responsible use of technology”.

Testing Out The New Meta AI

You might have noticed a blue-and-purple circle appearing on Meta-owned platforms. This is the logo of Meta AI, the assistant the company has now integrated into most of its services, powered by its new LLM, Llama 3.

Decode’s Hera Rizwan tried out different prompts to test the new AI. She found that Meta AI could correctly identify a false claim when asked directly.

For example, when asked whether Narendra Modi’s denial of spreading ‘Hindu-Muslim’ animosity in election speeches was true, the chatbot negated the claim, while quoting reports from The Quint and Al Jazeera. “Meta AI even enumerated the instances from the Prime Minister's speeches to validate its response,” Hera found.

She also used claims previously debunked by BOOM, and the chatbot was able to refute them with proof.

However, the chatbot started hallucinating when it came to curating news reports based on false claims. When prompted to write a news story about ‘Ayodhya Ram Mandir being added to the list of UNESCO cultural sites’, it generated an entirely fictitious story about how this move is "expected to boost religious tourism in Ayodhya, attracting pilgrims and tourists worldwide” and how “the Indian government and people are celebrating this decision, which is seen as a proud moment for the nation.”

A Meta spokesperson told Decode, "Our generative AI consumer features are new technologies and as we’ve seen with other companies’ generative AI models, and is denoted in our own feature experience, they might return inaccurate outputs. We’re taking several steps to identify potential vulnerabilities, reduce risks, enhance safety, and bolster reliability."

Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

↗️ Was this forwarded to you?  Subscribe Now 

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected]. 

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers

Unsubscribe