The Many Jargons Of AI | Deepfake Watch
April 12, 2024
Last week’s newsletter included a segment on Lavender, an artificial intelligence-based system used by the Israeli army to generate kill lists; the people on those lists were then targeted without meaningful human verification.
Now, some of you may think, why is a newsletter called Deepfake Watch talking about a tool that has nothing to do with deepfakes? This gives me the opportunity to clarify what this newsletter is really about, and in doing so, decipher some of the closely related terms that have inundated the internet lately.
Opt-in to receive the newsletter every Friday.
Let’s clear the confusion with some definitions
Artificial intelligence: A technology that attempts to simulate human intelligence in machines. Think of it as a programme that can make choices based on the information it has been given. The term was first used in 1955 by computer scientist John McCarthy, in a proposal for a workshop, to describe making machines that can perceive their environment as they carry out predefined tasks.
Machine learning: In 1959, computer scientist Arthur Samuel coined the term ‘machine learning’ while describing his work on teaching machines to play checkers better than their human programmers. The concept is a subset of AI that deals with statistical algorithms that can learn from data, and generalise to unseen data through pattern recognition, in order to carry out tasks without explicit instructions.
Deep learning: A subset of machine learning that simulates the information processing systems of the human brain through artificial neural networks, to make decisions. Unlike previous machine learning programmes that could perform one specific task, deep learning programmes can be trained to learn from unstructured data without human supervision.
Natural Language Processing (NLP): Programmes trained to understand human languages, like text and speech recognition functions in your devices.
Generative AI: An AI model that has been trained on large datasets to generate text, video, images and code. The training data enables it to recognise patterns and use them to create novel content.
Large Language Model (LLM): An AI model that has been trained on large datasets of text in human language, which can not just recognise human language (like NLP), but also generate text in human language. It is a subset of generative AI.
Deepfake: The term is a portmanteau of ‘deep learning’ and ‘fake’, used to denote synthetic media in the form of image, video or audio, that has been digitally altered using AI to create the likeness of an individual.
Algorithm: A finite set of rules or instructions for solving a specific class of problems or performing a computational task.
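To make the distinction between an ‘algorithm’ and ‘machine learning’ concrete, here is a toy sketch of my own (not drawn from any system discussed in this newsletter): first a hand-written rule, then a tiny program that recovers the same rule purely from example data.

```python
# An *algorithm*: a fixed set of rules written by a human.
def to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# *Machine learning* (in miniature): infer the rule y = a*x + b
# from examples, using ordinary least squares, instead of being
# told the rule explicitly.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

xs = [0, 10, 20, 30, 40]
ys = [to_fahrenheit(x) for x in xs]  # the "training data"
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))      # recovers 1.8 and 32.0
```

The fitted line reproduces the hand-written conversion rule, which is the glossary’s point in one screen of code: the first function is given its instructions, the second learns them from data.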
So while deepfakes are only one application of this rapidly evolving field of artificial intelligence, I shall time and again discuss many of the other closely related concepts mentioned above.
The Marxist AI anchor
The Communist Party of India (Marxist) has often been found taking a backseat in the race to use technology for political gain.
Now, it has suddenly made a leap.
Meet Samata, an AI-generated anchor that will read news about CPIM’s Bengal unit. First reported by Bangla news outlet Sangbad Pratidin, this marks a unique use of AI by a political party in India.
This was not the first use of AI by the party either. Last year, the party released an AI-generated image of Karl Marx in Babylon.
Dipanjan Saha investigated this matter for Decode, and found that the party now intends to change the perception people have of CPIM being tech-ignorant.
CPIM leader Shamik Lahiri told him, “It is a long standing rumour that has stuck on with the Left that we had opposed the introduction of computers. That is not true at all. We had opposed job losses that were going to happen at that time. We view new technology as something positive that can make life easier for the working classes. Not as something that makes them lose work.”
The party has further plans to capitalise on the new technology for better reach in the upcoming elections. “We have many ideas. We can create a farmer, a factory worker or a rickshaw driver who can talk about issues affecting each of them,” Abin Mitra, from the party’s digital team, told Dipanjan.
From our fact-checks
Last week, BOOM fact-checked several pieces of AI-generated content, including a new scam video featuring Narayan Murthy.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
↗️ Was this forwarded to you? Subscribe Now
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers