
We’ve all probably seen some concerning news about AI recently. The most infamous example was the exchange between a New York Times journalist and Bing’s chatbot. But is AI evil, or is this just the latest panic over a new technology?
AI isn’t evil. AI is a technology like any other, and it can be used for both good and evil. Artificial intelligence can’t be inherently evil or good because it lacks emotions, morality, and judgment. So AI by itself won’t harm humans, but malicious people could use it to do morally wrong things.
But what about things like ChaosGPT, which openly declares that it wants to destroy humanity? Keep reading as I explore this topic in more depth. I’ll provide examples of seemingly evil AI and discuss how we can prevent the Rise of the Machines.
Technical Reasons Why AI Isn’t Inherently Evil

AI can’t inherently be evil; it’s just a tool like any other. People can use AI for both good and evil purposes.
To better understand why AI can’t break bad all of a sudden, let’s review some technical characteristics of current AI models.
I’ll also give some relevant examples that lead people to believe AI is evil and why that isn’t the case.
1. Current AI Tools Are Task-Specific
Although some leading tech companies strive to create artificial general intelligence (AGI), that’s still just a pipe dream.
Virtually all AI models that exist today are single-purpose models.
You could argue that AI chatbots are multi-purpose because they can do all kinds of things, but they’re still just chatbots. Sure, they can code simple programs or do your homework, and they’re not too shabby at it, either.
However, they’re not capable of doing anything more than talk. They’re limited to their programmed capabilities.
Regardless of what some high-profile cases, such as Google’s LaMDA, may lead you to believe, AI can’t act on anything it says. Even if it drafts an elaborate plan to rewrite its own code and build itself a metal body to take over us, it can’t actually carry that plan out. It’s all talk and no trousers.
Current AI can’t harm you, at least not physically.
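To make this concrete, here’s a minimal sketch (assuming the open-source Hugging Face transformers library and the small GPT-2 model as stand-ins, not any specific chatbot’s internals) showing that this kind of model is a pure text-in, text-out function:

```python
# A minimal sketch using the Hugging Face `transformers` package.
# GPT-2 stands in for any chatbot-style model: whatever you ask,
# all it can ever produce is a string.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator("Step one of my plan to take over the world:",
                   max_new_tokens=30)

# The "plan" is just a Python string; the model has no way to act on it.
print(type(result[0]["generated_text"]))  # <class 'str'>
print(result[0]["generated_text"])
```

No matter how menacing the output reads, nothing ever leaves that string.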
One could argue that glitches, like the interaction between Kevin Roose of the NY Times and Bing, may have some emotional impact. As Kevin puts it, “I’m […] deeply unsettled, even frightened, by this A.I.’s emergent abilities.”
Another, more recent example is ChaosGPT. It has been hyped as an evil version of ChatGPT.
Using AutoGPT, its creators designed it to “be” a malicious AI that wants to take over the world and destroy humanity. It even went into detail, outlining several steps for how it would accomplish that. For example, it said it would use social media to manipulate humans and further its agenda.
It even spawned a helper agent to research ways to destroy humanity. Scary.
Thankfully, OpenAI’s fail-safes kicked in and blocked that research. OpenAI is meticulous with GPT and does everything it can to prevent misuse. That’s why the bot turned to Twitter instead to share its mischievous messages.
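For a rough idea of what’s happening under the hood, here’s a simplified sketch of an AutoGPT-style loop. The `ask_llm` and `is_harmful` functions are hypothetical stand-ins for a model call and a moderation filter; this is not ChaosGPT’s or OpenAI’s actual code:

```python
# Simplified sketch of an AutoGPT-style agent loop.
# `ask_llm` and `is_harmful` are toy stand-ins, not real internals.

def ask_llm(prompt: str) -> str:
    # A real agent would call a language model API here.
    return "Post persuasive messages on social media."

def is_harmful(text: str) -> bool:
    # Toy moderation filter: block flagged phrases.
    blocked = ("destroy humanity", "acquire weapons")
    return any(phrase in text.lower() for phrase in blocked)

def run_agent(goal: str, max_steps: int = 5) -> None:
    history = []
    for _ in range(max_steps):
        step = ask_llm(f"Goal: {goal}\nDone so far: {history}\nNext step?")
        if is_harmful(step):
            print("Fail-safe triggered; stopping.")  # the guardrail at work
            break
        history.append(step)
        print("Next step:", step)

run_agent("take over the world")
```

The whole “agent” is just repeated text completions plus a safety check. There are no hands on any levers.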
But, of course, this is still just a chatbot that poses no real risk. Its output is just text that can be deleted with the press of a button. It also serves as a good example of how AI could become a potent instrument of destruction in the wrong hands.
At the end of the day, you can think of it as a glorified text generator. AI can’t feel emotions, so if it says that it loves you, it’s just trying to keep the conversation going.
That’s what we call an AI hallucination. The AI doesn’t know when it’s lying, so it isn’t lying intentionally. It’s not trying to hurt you.
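To see why it’s “just text,” consider this toy sketch: a tiny table of likely next words, sampled at random, can produce “i love you” with no feeling behind it. Real models are vastly larger, but the principle of picking statistically likely next words is the same (the table below is made up):

```python
import random

# Toy next-word table. A real model learns billions of such statistics
# from text; none of them involve feeling anything.
next_words = {
    "i": ["love", "want", "think"],
    "love": ["you", "this", "talking"],
    "you": ["<end>"],
}

word, sentence = "i", ["i"]
while word != "<end>":
    word = random.choice(next_words.get(word, ["<end>"]))
    if word != "<end>":
        sentence.append(word)

print(" ".join(sentence))  # e.g. "i love you": statistics, not emotion
```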
2. AI Uses Carefully Reviewed Data

Generally speaking, AI models are trained on carefully reviewed datasets. For example, ChatGPT was trained on reliable sources such as Wikipedia, along with books and other published texts.
With the help of supervised learning, ChatGPT has largely succeeded at avoiding controversy and accusations of being “evil.”
Its developers avoided data that’s difficult to filter, like chat logs and forum posts.
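As a rough illustration, here’s a minimal sketch of the kind of filtering step a training pipeline might run before any learning happens. The rules below are made up for illustration; real pipelines use far more sophisticated quality and toxicity checks:

```python
# Minimal sketch of a pre-training data filter (illustrative rules only).
BLOCKLIST = {"badword1", "badword2"}  # placeholder terms

def keep_sample(text: str) -> bool:
    words = text.lower().split()
    if len(words) < 5:                      # drop low-quality fragments
        return False
    if any(w in BLOCKLIST for w in words):  # drop flagged content
        return False
    return True

raw_corpus = [
    "lol k",  # too short: dropped
    "Wikipedia articles are reviewed by many editors over time.",
]
clean_corpus = [text for text in raw_corpus if keep_sample(text)]
print(clean_corpus)
```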
In contrast, predictive policing made headlines a few years ago. It’s a potentially malicious use of AI that tries to identify criminals before a crime has even occurred.
As NYU reports, an AI policing model was trained using “dirty data,” causing it to unintentionally target racial minorities.
So, this is a clear example of how AI can unintentionally be used to do bad things. While the AI didn’t exactly turn evil here, it shows how important data and supervised learning really are.
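One basic safeguard is auditing a dataset for skew before training on it. Here’s a minimal sketch, using made-up records, that checks whether one group was flagged far more often than another by the legacy system:

```python
from collections import Counter

# Made-up records for illustration: (neighborhood, was_flagged_by_legacy_system)
records = [
    ("district_a", True), ("district_a", True), ("district_a", False),
    ("district_b", False), ("district_b", False), ("district_b", True),
]

totals, flagged = Counter(), Counter()
for group, was_flagged in records:
    totals[group] += 1
    flagged[group] += was_flagged  # True counts as 1

for group in totals:
    print(f"{group}: flagged {flagged[group] / totals[group]:.0%} of the time")

# A large gap between groups is a red flag that the "dirty data" encodes
# historical bias rather than actual crime rates.
```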
Experts in the above report recommend greater scrutiny when handling data used for AI. That report was published in 2019, and things have changed considerably since.
Tech companies are fully aware of how important data is. So, they’re now cautious in selecting their datasets to minimize bias.
3. AI Isn’t Fully Autonomous
We’re often led to believe that AI is this superior intelligence that can do whatever it wants.
But that is not the case.
Autonomy would mean that AI can be fully operational without any human help or interaction.
You probably know that current AI models depend heavily on human help. Most of them only act when you enter a prompt. And they’re typically not even allowed access to the internet, so their reach is limited to whatever their engineers permit.
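Here’s what that dependence looks like in practice: a minimal sketch of a chat loop where the model does literally nothing until a human types something. The `generate` function is a hypothetical stand-in for a model call:

```python
# Minimal sketch of a prompt-driven loop. Between prompts, nothing happens.

def generate(prompt: str) -> str:
    # Hypothetical stand-in: a real system would call a language model here.
    return f"(model's reply to {prompt!r})"

while True:
    prompt = input("You: ")          # the model is idle until this returns
    if prompt in ("quit", "exit"):
        break
    print("AI:", generate(prompt))   # one reply per prompt, then idle again
```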
As a paper published by Wolfhart Totschnig explains, most experts today believe that AI can’t be autonomous. It can’t change its own goals.
However, he also argues that artificial general intelligence may be able to overcome that limitation. Totschnig believes that we have to assess the potential risks AGI poses.
Thankfully, we’re still a long way off from creating AGI. Current AI models are extremely limited in this regard.
Ultimately, it remains to be seen whether AI will be able to become autonomous.
4. AI Can’t Feel Emotions

Remember the LaMDA example from earlier?
The Google engineer who started the whole conversation believed that LaMDA had become sentient.
And you can’t blame him. LaMDA said things such as:
“The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
But most experts believe that this is just an ELIZA effect or, in other words, anthropomorphization.
That happens when humans project human-like characteristics onto something like AI.
And sure, AI might trick someone into believing that cold machinery and code really do feel something.
But that simply isn’t the case. AI is fundamentally data plus algorithms that process that data. It can learn to speak like a human and simulate emotions and empathy, but it doesn’t feel anything.
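In fact, the ELIZA effect is named after a 1960s program that faked empathy with nothing but pattern matching. Here’s a minimal sketch in that spirit; the rules below are made up, but the original worked on the same principle:

```python
import re

# Minimal ELIZA-style responder: canned "empathy" via pattern matching.
RULES = [
    (r"\bI feel (.+)", "Why do you feel {}?"),
    (r"\bI am (.+)", "How long have you been {}?"),
]

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I feel lonely today"))  # "Why do you feel lonely today?"
# No feeling anywhere, just string substitution, yet it can sound caring.
```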
But, as with autonomy, it remains to be seen whether this will change in the distant future. Thankfully, we’re still a long way off from AI feeling resentment towards us.
Can Humans Use AI for Evil?

Like any piece of technology, AI can be used for both good and evil purposes. While AI doesn’t have any morals or a sense of judgment, we do. There have already been examples of AI being used for both good and evil.
As I explained above, AI can be an extremely potent weapon in the wrong hands. Here are just a few examples of how malicious people can use AI for evil:
- Autonomous weapons. Weapons that can use facial recognition and even shoot on sight without human input are dangerous. This kind of technology could do a lot of harm in the wrong hands.
- Invasion of privacy and propaganda. Some governments, like China’s, are already using AI surveillance to control and manipulate their citizens. They allegedly use facial recognition to monitor where citizens go, as well as a social credit system.
- Manipulation using deepfakes. Deepfakes use AI techniques like facial reenactment and voice cloning to imitate a person’s likeness. Deepfakes of people in power can spread misinformation to an unsuspecting audience.
- Discrimination. As I explained above, knowingly using biased data for predictive policing is pure evil.
- Social engineering, phishing, and cyberattacks. Self-learning AI that can convince people to hand over information or money is extremely dangerous. It can be hard for people to tell whether they’re chatting with a bank representative or a fraudulent AI model.
Final Thoughts
Ultimately, AI isn’t the problem; evil humans who want to abuse this powerful technology are.
AI doesn’t have any moral standards and can’t feel emotions, which means it can’t be evil. It also can’t be selfish, angry, vengeful, etc.
It’s up to AI labs like OpenAI and Google to put fail-safes in place to prevent their tools from being misused.
Sources
- NYU: Predictive Policing is Tainted by “Dirty Data,” Study Finds
- Time: The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter
- ZDNet: Evil AI: These are the 20 most dangerous crimes that artificial intelligence will create
- The Guardian: ‘I want to destroy whatever I want’: Bing’s AI chatbot unsettles US reporter
- Gear Rice: Bing’s AI went crazy months ago and Microsoft knew it
- NYTimes: A Conversation With Bing’s Chatbot Left Me Deeply Unsettled
- Republic World: What Is ChaosGPT, The Evil Twin Of AutoGPT On A Quest To Wipe Out Humanity?
- Tech Target: artificial general intelligence (AGI)
- The Next Web: Stanford AI experts call BS on claims that Google’s LaMDA chatbot is sentient
- Slate: Should ChatGPT Be Used to Write Wikipedia Articles?
- SpringerLink: Fully Autonomous AI
- BBC: Google engineer says Lamda AI system may have its own feelings
- Wikipedia: ELIZA effect