As Risks Mount, Disunity Marks Global AI Summit
World leaders and tech executives gathered in Paris last week to forge a united plan to develop, regulate and integrate AI. Yet disunity was the defining feature of the meeting.
Meanwhile, AI’s disruptive potential continues to grow, along with the risks.
Run for your money
In 2023, the US saw its second-largest bank run - the collapse of Silicon Valley Bank, fuelled by social media rumours. A new study highlights how AI could worsen the threat of bank runs by turbocharging disinformation.
Say No to Disinfo and Fenimore Harper Communications simulated an AI-driven disinformation campaign, with the objective of causing a bank run in the UK.
For the simulation, AI was used to create fake news articles about the potential collapse of major UK banks, mimicking trusted sources. AI was then used to generate 1,000 posts in under a minute to amplify these fake articles.
A random sample of 500 people across the country was exposed to these posts and articles, then asked whether they would move their money to another bank and share the information with others.
60.8% said they would consider moving their money, with 33.6% "extremely likely" and 27.2% "somewhat likely" to do so. Additionally, 60% of those polled said they would share the information with 1-3 people, while 20% would share it with more than 3.
The study estimated that such an ad, shown to 1,000 people at a cost of less than £10, could end up moving nearly £1 million. It estimated a cost of US$2,700-4,500 to move 30% of the total loans of a bank like Revolut, significantly impacting the bank's liquidity and stability.
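The scale of that claim is easier to see as a back-of-envelope calculation. The conversion rate below comes from the survey figures above; the average balance per persuaded customer is a purely illustrative assumption (the study does not publish one) chosen so the totals land near the reported ~£1 million.

```python
# Back-of-envelope sketch of the study's cost arithmetic.
# CONVERSION_RATE is from the survey (60.8% would consider moving money);
# AVG_BALANCE_GBP is a hypothetical assumption for illustration only.

AD_COST_GBP = 10          # cost to show the fake content to 1,000 people
AUDIENCE = 1_000          # people reached per ad spend
CONVERSION_RATE = 0.608   # share who said they would consider moving money
AVG_BALANCE_GBP = 1_650   # assumed average balance moved per person (illustrative)

movers = AUDIENCE * CONVERSION_RATE        # roughly 608 people
deposits_moved = movers * AVG_BALANCE_GBP  # roughly £1 million
cost_per_pound = AD_COST_GBP / deposits_moved

print(f"People persuaded: {movers:.0f}")
print(f"Deposits moved:  £{deposits_moved:,.0f}")
print(f"Ad spend per £1 moved: £{cost_per_pound:.6f}")
```

Under these assumptions, each pound of ad spend shifts roughly £100,000 in deposits, which is why the report frames cheap, AI-amplified disinformation as a liquidity risk rather than merely a reputational one.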
The study also pointed out that banks are more concerned with cyber threats, tend to overlook influence operations, and lack expertise in countering disinformation.
No Tango in Paris
French President Emmanuel Macron kicked off the AI Action Summit in Paris last week by showing a montage of deepfakes impersonating him in front of a thousand people from more than 100 countries.

Among those present were heads of state, government leaders, tech executives, academics and civil society members.
The summit focused on accelerating AI development and managing the AI transition, while protecting individual freedoms from risks such as increased cyberattacks and misinformation. Discussions also covered making AI more environmentally friendly.
Finally, a declaration on "Inclusive and Sustainable Artificial Intelligence for People and the Planet" was introduced and signed by 61 countries, including China. More notable, however, were those who didn't sign: the US and the UK.
Read: Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet. | Élysée
The UK claimed the document lacked clarity on global governance and national security, while US Vice President JD Vance essentially told Europe to stop being such a buzzkill about AI regulation (he didn’t like the bits about environment and inclusiveness).
Without the support of the UK and the US (still the global AI leader), it remains to be seen how effective this declaration will be in ensuring that the positive impacts of AI reach us before the really bad and scary ones.
Why do Gen AI tools have a hard time showing left-handed humans?
At the summit in Paris, Indian Prime Minister Narendra Modi claimed that AI struggles to depict left-handed people.
Hera Rizwan at Decode decided to test this, using popular tools like Meta AI and DALL-E, and found that Modi was not wrong in his assessment.
Read Hera’s story to find out more about the limitations of generative AI.
What are your views on AI regulations?
MESSAGE FROM 1440 MEDIA
Looking for unbiased, fact-based news? Join 1440 today.
Upgrade your news intake with 1440! Dive into a daily newsletter trusted by millions for its comprehensive, 5-minute snapshot of the world's happenings. We navigate through over 100 sources to bring you fact-based news on politics, business, and culture—minus the bias and absolutely free.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers