Deepfake Watch 32

Meta’s Aggressive AI Push | Deepfake Watch

September 27, 2024

Shrimp Jesus was just the beginning. The AI slop on Meta platforms is getting a boost, and soon your face will be on it 🙄.

Meanwhile, CEO Mark Zuckerberg has big plans for the future of Meta's AI products; all he asks for in return is your privacy.

Speaking of privacy, Microsoft-owned LinkedIn did a sneaky little thing - it quietly signed you up to have your data used to train its AI models. There is, however, a way to disable it.

A recent report highlights how prevalent the problem of deepfake porn is in US K-12 schools.

Opt in to receive the newsletter every Friday.

“Let them see what you see, and hear what you hear”

At the recently held Meta Connect 2024, the company’s jubilant CEO announced a new batch of products and tools.

The showstopper of the event was the unveiling of the Orion AR glasses - a small piece of cutting-edge hardware that would put Stark Industries in a bind. Other announcements included the release of Llama 3.2 and Meta AI's latest features, which will soon make their way into your Meta apps.

The company is rolling out new deepfake features for creators, to help them automatically translate their videos into other languages. Sounds like more voice-cloning and lip-syncing is on the way.

But what really caught my attention was Meta's intrusive push for personalised AI-generated content. Meta AI's Imagine feature will now generate AI content by itself and push it directly into your feed… and some of it may actually contain your face.

🤖 “Imagine yourself as a galaxy-trotting cyborg in cowboy attire”. Yeehaw!

Speaking to former Vox producer Cleo Abram, Zuckerberg recently laid out a small piece of the future of ‘personalised’ AI that his company has in store for us.

“There's all this development that's going into making the models smarter and smarter over time but I think where this is going to get really compelling is when it's personalized for you. And in order for it to be personalized for you it has to have context and understand what's going on in your life both kind of at a global level and like what's physically happening around you right now. And in order to do that, I think that glasses are going to be the ideal form factor because they're positioned on your face in a way where they can let them see what you see and hear what you hear, which are the two most important senses that we use for kind of taking information and context about the world,” he said during the interview.

To do this, Meta's AR glasses will need to record and store data on what you are hearing and seeing, in order to train Meta's AI on how to be a better companion to you. The company would essentially know everything you see and hear, whenever you are wearing the glasses. GULP!

Here I was, already paranoid that my phone is listening to me and likely watching me through its camera when it can.

Deepfake Impersonator Targets US Senator

US Senator Ben Cardin, Chairman of the US Senate Foreign Relations Committee, was recently targeted with a highly sophisticated cyberattack.

The attackers used a deepfake to impersonate former Ukrainian Foreign Minister Dmytro Kuleba on a video call, nearly convincing Cardin's team that it was the real Kuleba. The team only suspected an impersonation when the conversation veered into suspicious, politically charged topics.

From scams to revenge porn, deepfakes have acted as steroids for cybercriminals all over the globe, marking an evolution in cybercrime.

LinkedIn Did A Sneaky Little Thing

LinkedIn - the social media platform for businesses and job-seekers - very quietly enrolled all non-EU users into having their data used to train its AI models.

Users can opt out by going to Settings ⮕ Data Privacy ⮕ Data for Generative AI improvement. You can directly access the page by clicking here.

Gen AI Fuelled Sexual Harassment In US K-12 Schools

Washington, D.C.-based non-profit Center for Democracy & Technology surveyed public high school students, along with parents and teachers of public middle and high school students, from July to August this year to understand the prevalence of deepfakes and non-consensual intimate imagery (NCII).

Here’s a summary of what they found:

  • NCII is a major issue in K-12 public schools, and it includes both deepfakes and authentic imagery

  • Female and LGBTQ+ students perceive a greater threat from deepfake NCII

  • Most respondents feel schools are not doing enough to address the problem

  • Schools' responses lack support for victims and focus more on punishment

  • Parents are less aware of the problem than students and teachers

Read the full report here.

Latest In AI And Deepfakes

Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of AI abuse and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we will preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

↗️ Was this forwarded to you?  Subscribe Now 

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected]. 

🖤 Liked what you read? Give us a shoutout! 📢


↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers

Copyright (C) " target="_blank">unsubscribe