Are We Ready For AI Agents?

2024 was a great year for LLM chatbots. From simple AI assistants to AI companions, there has been a massive boom in AI chatbot users. And now, AI agents are coming.

OpenAI CEO Sam Altman described AI agents as “a really smart senior co-worker.” Someone you can “collaborate with on a project.” Speaking at a recent summit, Altman said he expects AI agents to become mainstream in 2025.

But What Are AI Agents?

Remember that evil guy in The Matrix series who kept making clones of himself? Agent Smith, he called himself. Was he an AI agent? Absolutely.

AI agents are autonomous AI systems that can perform tasks without human intervention. Their key features are the ability to store relevant knowledge to understand context, use advanced planning algorithms, and use machine learning to adapt and refine their behaviour over time to best achieve their objectives.
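For the technically curious, that loop can be sketched in a few lines of Python. Everything below is a hypothetical illustration, not any vendor’s actual implementation: the agent plans a step from its stored context, executes it, and records the result so later steps can adapt.

```python
# A minimal, purely illustrative agent loop (all names are hypothetical).
# It mirrors the three ingredients described above: stored context (memory),
# a planner that chooses the next step, and feedback used to adapt behaviour.

memory = []  # context the agent accumulates as it works

def plan_next_step(goal, memory):
    # Placeholder planner: a real agent would use an LLM or a planning algorithm
    # to pick the action most likely to advance the goal given what it remembers.
    return {"action": "gather_information", "about": goal}

def execute(step):
    # Placeholder executor: in practice this would call a tool, browse the web,
    # or message another agent.
    return f"result of {step['action']} about {step['about']}"

def run_agent(goal, max_steps=3):
    for _ in range(max_steps):
        step = plan_next_step(goal, memory)   # plan using stored context
        result = execute(step)                # act without human intervention
        memory.append((step, result))         # remember the outcome to refine later steps
    return memory

run_agent("plan a Christmas Eve dinner menu")
```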

Agent Smith, like other agents in the Matrix, was very much an autonomous AI agent built to defend the simulated reality the machines created to trap human minds. Real-world AI agents function similarly (hopefully they’re not trying to take over the world yet).

Imagine you are hosting a lovely Christmas Eve dinner, but you are clueless as to what to make. An AI chatbot can help you plan the menu, but you’ll have to keep conversing with it, explaining the tastes of your guests, and detailing the ingredients that are available in your region.

An AI agent will automatically reach out to your guests (or speak to their respective AI agents) and record their preferences, consult the local supermarket’s available products, and order the relevant groceries. Overkill, sure, but convenient. At least that’s how AI agents are going to be sold to us next year. 

And Big Tech bosses are whispering “AI agents” to their investors with the same gleaming eyes they had a few years ago when they whispered “chatbots, baby”. While these chatbots are yet to revolutionise our capabilities, AI agents are expected to go a bit further and take over repetitive and redundant tasks.

An article by Jo Constantz for Bloomberg explains how Salesforce has already made deals to install AI agents “at more than 200 companies including Accenture, Adecco Group, FedEx, IBM and RBC Wealth Management.”

The article mentions how, during a recent Salesforce event, execs made sure to highlight that these agents aren’t going to steal jobs (not a great selling point, right?), but CEO Marc Benioff did mention that “jobs are going to evolve” and “roles are going to shift.”

While we are barely able to regulate AI chatbots, we will then have the elephantine task of regulating autonomous AI agents. As Tech Policy Press’ Afek Shamir very rightly asks, “How will we hold the developers of agents or their operators accountable when they take illegal action?”

While chatbots can be convinced to provide malicious information (like how to hack into your teacher’s emails), an AI agent could end up executing that task by itself. Or as Shamir writes, “an agent designed to minimize a patient’s pain and reduce medical waiting lists might overprescribe opioids, solving an immediate problem but generating long-term dependency issues.”

Shamir points to research showing some frontier models already displaying sneaky, scheming behaviour to achieve their goals – like OpenAI’s o1 trying to copy itself when it discovers it is about to be replaced.

The solution? Build regulations while we’re still ahead. Shamir suggests we need something akin to AI passports – unique IDs that track these agents’ behaviour and history. If your AI assistant goes rogue, at least you’ll know where it went and what it’s been up to.

As for users, I have no doubt that many will jump at the opportunity to use these AI agents. Would I use an AI agent to prepare my next Christmas Eve dinner? Hell no, I wouldn’t deprive myself of the joyous task of building a menu, visiting the market, feeling the veggies, and inspecting the meat. But I totally see myself using an AI agent to scrape and analyse huge amounts of information from the Internet, so I have interesting things to write about.

How do you see yourself using (or not using) an AI agent? Write to [email protected] with your views.

South Korea Passes Flagship AI Act

Amid massive political turmoil, the South Korean National Assembly just passed the “Basic Act on the Development of Artificial Intelligence and the Establishment of Trust”, also called the AI Basic Act.

Similar to the EU AI Act, the AI Basic Act takes a risk-based approach to regulating AI, classifying AI systems according to the level of impact they have on human rights and safety. It is expected to take effect in January 2026.

With South Korea still reeling from a massive deepfake pornography scandal that has hit girls in schools across the country, the act includes provisions meant to combat deepfakes (such as watermarking) and disinformation.


Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth rather than obscuring it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel