Deepfake Watch 21

Deepfake Porn Bills Divide US Lawmakers | Deepfake Watch

June 21, 2024

A new bill to tackle deepfake porn was introduced in the US Senate this week. But with a similar bill tabled earlier this year already running into objections, the road to legislation is proving tricky.

Meanwhile, as the country gears up for presidential elections, the risks deepfakes pose to the integrity of the polls are starting to mount.

Opt in to receive the newsletter every Friday.

Take It Down vs DEFIANCE

This week, a bipartisan group of US senators, led by Ted Cruz, introduced the Take It Down Act to address deepfake pornography. The bill would require websites to take down non-consensual deepfake pornographic images within 48 hours of being reported.

The bill also requires online platforms to make reasonable efforts to remove copies of the reported images, and tasks the US Federal Trade Commission with enforcement.

The country has seen rapid growth in cases of deepfake pornography, with targets ranging from high school students to politicians and celebrities, as AI-powered editing tools become ever more accessible. A report by Home Security Heroes found a 464% increase in the output of deepfake pornography in 2023 compared to the year before.
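
To put that figure in plain numbers: a 464% increase means 2023's output was 1 + 4.64 = 5.64 times the 2022 level, roughly a five-and-a-half-fold jump in a single year.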

Tackling deepfake pornography has united US lawmakers across party lines, but there is little agreement on how to regulate the menace.

In January this year, Senators Dick Durbin, Lindsey Graham and Josh Hawley introduced another bipartisan bill, the Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE Act), which would “hold accountable those who are responsible for the proliferation of nonconsensual, sexually-explicit ‘deepfake’ images and videos”.

This act would enable victims of non-consensual deepfake pornography to sue the creators and distributors of such content.

However, Senator Cynthia Lummis objected to the bill's passage last week, calling it “overly broad in scope” and detrimental to “American technological innovation”, to which Durbin responded that the bill imposes no liability on tech platforms.

Interestingly, Lummis is also a co-sponsor of the Take It Down Act, which shifts responsibility for removing deepfake porn images onto online platforms.

What’s in store for the US elections?

The recently concluded election season in India did not witness the deepfake catastrophe many had expected.

Professor Mayank Vatsa, whose department at the Indian Institute of Technology Jodhpur has built a deepfake detection tool called Itisaar, told my colleague Karen Rebelo that this could be due to a lack of training data in local Indian languages.

“Most of the existing deepfake research and generation efforts, predominantly in the West, focus on the English language,” he noted. This suggests that the deepfake creation tools available online could pose a much larger threat to the upcoming elections in the US.

We have already seen quite a few: the Joe Biden robocall in New Hampshire, the flurry of AI-generated images of Donald Trump with Black voters, an image of Anthony Fauci hugging Trump, and an AI-generated robocall impersonating Lindsey Graham in South Carolina.

A study by George Washington University predicted an “escalation of daily, bad-actor activity driven by AI by mid 2024”.

A recent article in Nature highlighted the multifaceted impact of AI-driven electoral misinformation: it does not have to change people's ideas or voting behaviour to do damage; simply misleading voters about when and where to vote is harmful enough.

“Furthermore, just knowing that misinformation is out there — and believing it is influential — is enough for many people to lose faith and trust in robust systems, from science and health care to fair elections,” the article added.

“We might not expect widespread effects across the whole population, but it might have some radicalizing effects on tiny groups of people who can do a lot of harm,” Gregory Eady, a political scientist at the University of Copenhagen, told Nature.

Spot the deepfake: a robust guide

Negar Kamali, a computer science researcher at Northwestern University, published a detailed thread on how to distinguish between real and AI-generated images.

The tweet also contains a link to a 54-page guide “to help readers develop a more critical eye toward identifying artifacts, inconsistencies, and implausibilities” in AI-generated images.

Latest in deepfakes and AI

Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

↗️ Was this forwarded to you? Subscribe Now

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth rather than obscuring it.

For inquiries, feedback, or contributions, reach out to us at [email protected]. 

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers

Copyright (C) | Unsubscribe