AI Is Dulling Our Brains


AI can solve climate change. It can also solve the energy crisis, deforestation, food distribution, wealth distribution, pandemics, and maybe even cancer (someday). But right now, it is dulling the edges of our brains, one prompt at a time.

Let me explain.

Don’t Let AI Think For You

A recent study by the Massachusetts Institute of Technology found that extensive use of Large Language Models (LLMs) like ChatGPT, especially for tasks that require deep thinking, makes our brains lazier.

The study split 54 participants into three groups and gave them a series of essay-writing tasks that required deep thinking. One group had access to LLMs, one had access to search engines, and the third carried out the tasks without any tools, using the good old brain.

In a surprise fourth session, the LLM group was asked to do the task without any tools, while the brain-only group was given access to LLMs. During every session, researchers monitored participants' brain activity using an EEG.

The group using LLMs “had the lowest brain engagement and consistently underperformed at neural, linguistic, and behavioral levels.” The brain-only group demonstrated the highest brain activity and neural connectivity. LLM users also had trouble remembering what they wrote, and their written works were found to be similar, while the brain-only group demonstrated curiosity and a diverse range of ideas.

When the LLM group was asked to write without any tools, its members struggled to get it done, whereas the brain-only group performed better and retained more when given access to ChatGPT. This suggests that AI, if used correctly, could aid us in learning.

As long as you don’t let AI do the thinking for you.

A similar study was published by Carnegie Mellon University and Microsoft Research earlier this year, which found that overreliance on ChatGPT could adversely affect critical thinking.

I am sure there are people around the world trying their best to leverage this technology for our betterment. The problem is, a large number of folks are using LLMs to do the most pointless and cringe-worthy activities.

AI Is Feeding An Addiction To Vicarious Trauma

A devastating plane crash in India’s Gujarat killed at least 270 people last week, and social media was instantly flooded with AI-generated images claiming to show the wreckage. 

We also received a 5-minute film made entirely with AI, reenacting the final moments of the passengers and pilots before the crash.

Shortly after, the Israel-Iran conflict erupted, and highly-realistic AI visuals claiming to show destruction in both countries went viral.

In April, a terrorist attack in Kashmir killed 26 civilians. Social media users took an image of one of the survivors sitting next to the body of her slain husband, added the Ghibli filter and other AI-led dramatisations, and made the images go viral.

Last year, a terrible case of rape and murder of a doctor in Kolkata created widespread outrage in India. My colleague Karen found a number of influencers exploiting the outrage, using face-swap tools to create content with the victim's face and pretending to be the deceased doctor.

Responsible use of AI means using it strategically to aid your critical thinking, not compete with it. And it definitely does not include cashing in on tragedy porn.

MESSAGE FROM OUR SPONSOR

Find out why 1M+ professionals read Superhuman AI daily.

In 2 years you will be working for AI

Or an AI will be working for you

Here's how you can future-proof yourself:

  1. Join the Superhuman AI newsletter – read by 1M+ people at top companies

  2. Master AI tools, tutorials, and news in just 3 minutes a day

  3. Become 10X more productive using AI

Join 1,000,000+ pros at companies like Google, Meta, and Amazon that are using AI to get ahead.

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.

Share your thoughts 👉 [email protected]


Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth rather than obscuring it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel