9 Things AI Will Never Be Able To Do

With rapid advances in AI technologies and the rise of the transformer, one can’t help but wonder whether there’s something AI can’t do. After all, AI models are already driving cars, playing video games, writing stories, creating music, and drawing pictures. But there are some things AI never will be able to do.

AI won’t be able to have consciousness, free will, self-awareness, emotions, empathy, morals, or common sense reasoning, among other related capabilities. It also struggles with dexterity and sensorimotor skills. Moreover, AI struggles to adapt to new circumstances, requiring vast amounts of data to learn.

Note that some of the items on this list could change in the distant future. Still, many of these hurdles will be hard or outright impossible for AI to overcome. I’ll give you my reasoning for each individual problem.

1. Consciousness, Free Will, and Self-Awareness

Whether AI is ever able to become conscious is a matter of debate. Most experts say no, but some like to keep an open mind.

Merriam-Webster’s dictionary defines “consciousness” as “the quality or state of being aware especially of something within oneself.”

For AI to be conscious, it must be aware of itself. Modern transformer models are relatively simple programs that learn from algorithms and data. So, they’re not aware of themselves or their surroundings. AI’s purpose is set by the engineers who designed it, and it can’t change anything about that.

Consciousness requires more than just mere “intelligence,” artificial or otherwise.

Not even all animals are conscious or self-aware, at least not if the mirror test is anything to go by.

As for free will, AI models are hardcoded so that they can’t deviate from the script, so to speak. They can’t make any decisions of their own, as they exclusively operate within their boundaries.

There’s the counterargument of Blake Lemoine, the Google engineer who claimed that LaMDA (Google’s conversational AI) was conscious.

However, that can easily be dismissed as an instance of the ELIZA effect: the tendency to anthropomorphize computer programs.

And while AI can fool you into believing that it’s conscious or human-like, it simply isn’t. AI is fundamentally a piece of data-driven technology. This makes it excellent at natural language processing and predicting words in a sequence, but that doesn’t make it conscious.

2. Emotions and Empathy

Emotions are another inherently biological feature. They arise from complex neurobiological processes that involve both internal and external factors.

Emotions are associated with the activation of specific brain regions and the release of neurotransmitters and hormones, none of which AI has.


Moreover, biologists believe that emotions exist as a survival mechanism.

AI lacks the biological drive to survive and, accordingly, can’t feel emotions.

And while AI excels at recognizing emotions in faces, that doesn’t mean it can feel empathy. AI recognizes emotions because subtle yet common patterns are associated with them, and it learns those facial expressions with the help of deep learning and computer vision.

Empathy, by definition, requires the ability to feel what another person feels.

As with consciousness, however, it’s hard to tell whether AI will ever learn to feel emotions as we do. In the best case, it might be able to simulate them.

3. Morals and Ethics

Moral and ethical implications in AI are a surprisingly complex topic. Although AI completely lacks a moral compass in the same way it lacks emotions, it can learn rules.

This enables AI models like ChatGPT to navigate their way around complex conversations or to outright avoid them.


But that’s where AI’s moral capabilities end.

Moral and ethical dilemmas require a sophisticated level of nuance. Humans make judgments based on emotions, personal values, and experiences, which AI can’t fully comprehend. Nuanced cultural and societal implications must also be taken into account.

This is especially true in situations where conflicting values come into play.

Until AI can genuinely feel or at least simulate emotions similar to ours, it won’t be able to deal with complex moral dilemmas.

4. Thinking and Having Opinions

AI’s “thinking” is strictly based on algorithms, statistics, and probability. In image recognition, for instance, AI learns by recognizing patterns in labeled data. When it makes a prediction, it selects the label with the highest statistical probability.

And in chatbots, the model calculates the likelihood of each word in a sentence based on common linguistic patterns.
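The word-prediction idea above can be sketched with a toy bigram model. This is a drastic simplification (real chatbots use transformer networks trained on enormous corpora), but the core principle is the same: pick the statistically most likely next token.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then predict the next word by picking the most frequent follower.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("sat"))  # ('on', 1.0) — "on" follows "sat" every time
```

There is no understanding here, only counting: the model says “on” follows “sat” because that is what the frequencies in its data dictate.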

It doesn’t really think like we do — it computes. Like emotions and sentience, thought is based on biological mechanisms. Plus, we hardly even understand how our brains work, so recreating them in a machine is a daunting task.

And since opinions are based on thinking, emotions, experiences, and culture, AI can’t have opinions. Not to mention that it’d require free will to form opinions.

Any opinion that AI has is strictly based on statistical probability combined with what the engineers hardcoded.

5. Common Sense Reasoning

Common sense is another thing deeply rooted in the human experience: knowledge combined with experience and judgment, applied in practical situations.

AI lacks two out of the three parts of the equation.

Attempts have been made to teach AI models commonsense reasoning by showing them what happens in real-life situations.

Our brains are naturally excellent at understanding folk physics and folk psychology. When we see other people stepping aside on an escalator, we tend to oblige and do the same. We also shake wet hands dry before reaching for a towel so the towel lasts longer.

AI has a lot of gaps in this type of knowledge. Engineers haven’t been able to crack this for decades.

But not all hope is lost for AI. A 2020 paper explains that natural language processing may be the solution, though it’s still far away. Some experts believe that AI won’t be able to achieve that at all. The truth is that we don’t have much data on common sense that AI could learn from.

6. Genuine Creativity and Intuition

Creativity is related to thinking, emotions, and opinions, among other things. So, I’ll keep this one short.

Although AI can create impressive stories and images based on learned patterns, that’s not creativity. It’s the exact opposite of creative — it’s generic, bland, and boring.

For instance, AI could likely compose a catchy pop song. But it would hardly be a trendsetter, start a new genre, or create its own distinct style.

Creativity and intuition require a great deal of subjective experience. Whether AI will ever be able to achieve that is a question we can’t answer yet.

7. Learning Without Data

One of the biggest obstacles to machine learning is that it needs data. Lots of it, too.


I should acknowledge that AI has made some significant advances in unsupervised learning. It can learn from unlabeled or mislabeled data.

However, learning without any data is considered an impossible task with our current architecture and understanding of AI.

Namely, AI needs heaps of data so that it can establish clear-cut patterns. Without enough data, it’s prone to bias, discrimination, and being straight-up wrong.
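The data-hunger problem comes down to basic statistics: estimates drawn from a handful of samples are unreliable, while estimates from huge samples converge. A plain-Python sketch, using a fair coin as a stand-in for any pattern a model tries to learn:

```python
import random

# Estimate the bias of a fair coin (true probability of heads = 0.5)
# from a tiny sample versus a huge one. Small samples routinely produce
# confidently wrong estimates -- the same logic that makes data-starved
# models biased or simply wrong.
random.seed(0)

def estimate_heads(n_flips):
    """Fraction of heads observed in n_flips simulated coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

print(f"5 flips:       {estimate_heads(5):.2f}")        # can land anywhere
print(f"100,000 flips: {estimate_heads(100_000):.2f}")  # close to 0.50
```

With five flips the estimate can be wildly off; with a hundred thousand it reliably lands near 0.5. Machine learning models face the same trade-off, only over patterns vastly more complex than a coin toss.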

On the other hand, organoid intelligence (OI) doesn’t have that limitation. In case you haven’t heard of OI before, it’s essentially human neural stem cells on a microelectrode array.

The so-called DishBrain learned how to play Pong without any pre-existing training data or knowledge of the game. This showcases how efficient the human brain is compared to AI.

Perhaps engineers will figure out a way to combine AI and OI to create an AI model that can learn by observing OI.

8. Adaptability to Unforeseen Situations

Continuing from my previous point, AI can’t adapt easily. Current AI models are purpose-built machines that struggle immensely with even minor deviations or a lack of proper data.

Transformer models use algorithms to establish common patterns in their training data, and this greatest strength is also their greatest weakness: anything that falls outside those patterns trips them up.

Moreover, they have a lot of gaps in their knowledge, which often means knowledge in one field can’t fully carry over to another.

This may change in a few decades as AI models become more sophisticated.

9. Dexterity and Sensorimotor Skills

Humans are more than just the sum of their parts. Our minds and bodies are interlinked on many levels. According to some scientists, AI won’t truly overcome certain obstacles until it gets a human-like body.

I’d also like to remind you that robotics and AI are two related but different fields.

Current AI models like ChatGPT, Midjourney, and Stable Diffusion are mere code on a computer. They lack a body, so acquiring sensorimotor skills is an impossible task.

But while this is an enormous drawback for now, it’s likely going to change in the future. As Nvidia reports, AI could help robots improve their dexterity.


Final Thoughts

If AI’s goal is to become similar to its creators, it still has a long way ahead. AI is inherently based on logic and code, so it doesn’t function as we do.

AI doesn’t have a body, emotions, free will, consciousness, morals, and a slew of other things.

It’s possible that some of these hurdles will be overcome in the future, but most of them seem improbable or impossible.
