- Deepfake Watch
Our Phones Need In-Built Deepfake Detectors
Among all the negative uses of AI, deepfakes surely take the cake. From sophisticated frauds to revenge porn and child sexual abuse material, deepfakes have stirred up a lot of muck. But the world is fighting back.
February 2, 2025, marked the first compliance deadline of the EU AI Act, which is strict on deepfakes - demanding transparency and proper labeling for AI-generated content, including mandatory technical markers like watermarks for synthetic media.
In response, financial institutions and enterprises are turning to more sophisticated tools that analyse media for manipulation artifacts – highlighting a broader trend toward layered defenses.
What's clear is that no single approach will be sufficient. We need watermarks AND detection tools AND better authentication AND strong legislation AND educated users. We’re a long way from being secure.
Honor smartphones will have a really cool feature soon
When it comes to fraud, a deepfake call in real-time can lead to catastrophic outcomes. For example, a Chief Financial Officer in Hong Kong was tricked into parting with US$25 million last year, after a call with a deepfake impersonator of the company’s CEO.
It’d surely be great if our smartphones could give us a little indication that the person we’re video calling is a machine-made impersonation of whoever we think we are talking to. Chinese smartphone maker Honor has gotten the hint, and is set to introduce a deepfake detection feature in April.
The technology, which will be integrated into its high-end models, examines video calls frame-by-frame, analysing eye contact, lighting, image clarity, and video playback to spot inconsistencies invisible to us.
When something suspicious is detected, users receive a popup warning: "Honor scam alert. It looks like the other person could be using AI to swap their face."
Now, I don’t really use Honor phones, and I don’t intend on buying a phone just for this feature. But it would be great to see other smartphone makers follow suit and start introducing such deepfake detectors.
UK to criminalise CSAM AI-generators
The UK Home Office just announced fresh measures to combat the rising threat of AI-generated child sexual abuse material (CSAM).
As part of the upcoming Crime and Policing Bill, the country is aiming to criminalise the possession, creation or distribution of AI tools that could be used to create CSAM, with a five-year prison sentence for offenders. This would make the UK the first country to go beyond policing AI-generated CSAM content and target the AI-generation tools themselves.
Additionally, the bill will make it illegal to possess "AI paedophile manuals" – materials that teach people how to use AI to generate CSAM. A specific offence will be introduced for those who run websites allowing paedophiles to exchange illegal content and grooming advice.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers