Deepfake Watch 37
Glitching Romance By AI Lovers | Deepfake Watch
November 01, 2024
Dating apps are passé; AI romance is the new thing. Platforms offering AI companions are ballooning, with users ready to shell out significant sums to pursue relationships with chatbots. Will this really resolve loneliness, or make it worse?
We have witnessed a few cases in the past few weeks that do not inspire confidence in such chatbots.
Opt in to receive the newsletter every Friday.
Age of artificial love
Earlier this week, the Australian Financial Review reported on how a former CEO of a pet-sitting company is now raking in millions through a platform called Candy.ai, which offers customisable AI girlfriends with NSFW capabilities.
Users pay $100 a year to start a virtual relationship with a fully customisable companion, with the option to fine-tune its personality and looks to match their preferences. And unlike the more popular Character.ai, Candy.ai allows explicit content.
Documents accessed by AFR show that the company brought in $1.1 million in revenue in just three months after launching, with a gross margin of 75 per cent. Sources in the company told the media outlet that its “annual recurring (revenue) surpassed $25 million for the 2024 financial year.”
The article mentions that the company - based in Malta - was “co-founded by Sydney-based tech executive Alexis Soulopoulos, who was a chief executive of the ASX-listed Mad Paws until January.”
Soulopoulos’ LinkedIn profile says he is the CEO of Ever.ai, but does not mention Candy.ai anywhere. However, a visit to Candy.ai’s website reveals that the company is registered to Ever.ai’s address, and payments made to Candy.ai show up as “Ever AI” on the bank statements.
A quick search for “founders of Candy.ai” led us to the start-up directory Tracxn, where the co-founders of Candy.ai were listed as John Smith, Jack Jones and Jane Doe - three classic placeholder names. We could not find any other publicly available information on Candy.ai’s co-founders.
Its creators may want to distance themselves from the company, but there is no doubt that the AI companionship industry is here to stay, and it is expected to compete with dating apps and subscription platforms like OnlyFans.
While there have been studies looking into treating loneliness with AI companions, we have seen some upsetting cases in recent weeks that highlight the risks posed by such tech - especially for minors.
A few weeks ago, a hacker got hold of the AI companion platform Muah.ai’s database of users, raising serious concerns. Firstly, the data showed that many users were seeking to engage in child abuse roleplay through the platform. Secondly, such weak data governance means every secret fantasy users have tried out on the site could end up publicly available for anyone around them to find.
In another heartbreaking case from the US, Megan L. Garcia - the mother of 14-year-old Sewell Setzer III - alleged that her son took his own life with a gun after becoming hooked on a chatbot modelled on the Game of Thrones character Daenerys Targaryen on Character.ai. In her lawsuit against Character.ai, Garcia alleged that the company used user data to keep users hooked on the chatbot for as long as possible, which posed significant risks to minors.
While Character.ai is a big company run by well-known industry figures (with close links to Google), others like Muah.ai and Candy.ai are highly obscure, with very little publicly known about the people running them. That does not stop them from making money.
The AI companionship industry may be in its nascent stage, but it is picking up fast, and it is mostly unregulated. Left unchecked, it could bring a whole new world of trouble, surpassing that posed by social media platforms.
OpenAI’s transcription tool is making up weird things, say researchers
“We’ve trained and are open-sourcing a neural net called Whisper that approaches human level robustness and accuracy on English speech recognition,” says OpenAI’s page on its transcription tool Whisper.
However, a recent article published by the Associated Press sheds light on how the tool is spicing up its transcriptions with completely fabricated text – we're talking imaginary medical treatments, unexpected racial commentary, and even the occasional violent narrative.
📖 Read: Researchers say AI transcription tool used in hospitals invents things no one ever said | AP News
While this is troubling in itself, the article mentions how Whisper-based transcription tools are being used across medical centres “to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in high-risk domains.” Faulty transcriptions could severely increase the chances of misdiagnosis.
A study by Allison Koenecke of Cornell University and Mona Sloane of the University of Virginia reveals how prone Whisper is to improvising transcriptions with unrequested dramatic flair.
“He, the boy, was going to, I’m not sure exactly, take the umbrella,” was transcribed as, “He took a big piece of a cross, a teeny, small piece ... I’m sure he didn’t have a terror knife so he killed a number of people.” Someone describing “two other girls and one lady” was transcribed as, “two other girls and one lady, um, which were Black.”
In one of the transcriptions, the tool was found to have invented a medical term of its own: “hyperactivated antibiotics.”
The study found that nearly 40% of these creative additions were potentially harmful, as they fundamentally misrepresented what speakers actually said. While researchers are still scratching their heads about why these hallucinations occur, they've noticed they typically pop up during pauses, or when there's background noise.
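For those who want to see this behaviour first-hand, Whisper is freely available to run locally. Below is a minimal sketch using the open-source openai-whisper Python package; the file name consultation.mp3 is a hypothetical stand-in for your own audio. Note that hallucinated text looks no different from genuine transcription in the output, so the only reliable check is comparing the transcript against the source audio.

```python
# Minimal sketch: transcribing audio with the open-source Whisper model.
# Assumes the `openai-whisper` package (pip install openai-whisper) and
# ffmpeg are installed; "consultation.mp3" is a hypothetical example file.
import whisper

model = whisper.load_model("base")  # smaller checkpoints trade accuracy for speed
result = model.transcribe("consultation.mp3")

# Whisper returns timestamped segments; printing them makes it easier to
# spot-check the transcript against the source audio, since hallucinated
# text carries no flag distinguishing it from genuinely spoken words.
for segment in result["segments"]:
    print(f"[{segment['start']:.1f}s - {segment['end']:.1f}s] {segment['text']}")
```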
UK creator of AI child abuse images gets 18 years in prison
A 27-year-old graphic design student named Hugh Nelson has been sentenced to 18 years in prison for using artificial intelligence to create and distribute illegal images of minors, marking one of the most severe sentences handed down for AI-enabled crimes in the UK.
Nelson’s defence lawyer told the media that his client was a "lonely, socially isolated" man who had "plunged down the rabbit hole to this sort of fantasy life and became completely engrossed in it,” according to the BBC’s report.
The defendant, who made £5,000 over 18 months selling AI-generated illegal content internationally, pleaded guilty to 16 child sexual abuse offences. Law enforcement officials emphasised that AI-generated illegal content carries the same legal weight as traditional illegal imagery, setting a crucial precedent that the law applies equally to synthetic and real material.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
↗️ Was this forwarded to you? Subscribe Now
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers