By Nourhan Khaled

I, Robot and Artificial Stupidity

Updated: May 30, 2022

The robot overlords are coming, artificial intelligence (AI) is getting smarter and smarter, ‘the end is near!’ Or at least it is if you take your daily dose of news from Hollywood headlines.


In the movie I, Robot (2004), U.S. Robotics builds a new generation of highly intelligent robots that nearly succeed at staging a robotic revolution to overthrow mankind. How convenient that Detective Spooner, a stubborn AI skeptic played by Will Smith, is the one who eventually saves humanity from the so-called intelligent robots.


This narrative recurs in most AI dystopia-based sci-fi movies, which creates some sort of AI-aversion in the general population. Get one spot-on YouTube recommendation and you won’t stop hearing it from conspiracy theorists, sci-fi enthusiasts, and your parents: AI is taking over, and before we know it, humans will no longer be in control. While there is a good deal of truth in how AI has revolutionized our daily lives and will probably continue to do so, we can take solace in knowing that the AI dystopia may still be further away than commonly predicted.


One way to combat AI paranoia is to contemplate not the ways in which AI fascinates us, but rather how it continues to strike us with its stupidity. While I personally find AI stupidity the greater cause for concern, that’s a topic for another time.



AI may have surpassed humans at a variety of tasks, but let’s face it, it’s not going to take over the world by beating humans at chess - that is, assuming humans are smart enough not to wager the fate of their world on the outcome of a chess match. In fact, roboticist and AI scientist Hans Moravec framed it very accurately back in 1988, when he wrote the famous words of what is now known as Moravec’s paradox:

"It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility" - Hans Moravec (1988)

We know how to program specific, task-based intelligence, like programming machines to play checkers, return a ping pong ball, or generate sentences. However, an uprising would probably require something more sophisticated, something we haven’t learned how to program “yet”: artificial general intelligence (AGI), a combination of an array of skills brought to bear on compound tasks. You can read more about the difference between AI and AGI here.


Limitations of AI

There are many areas in which AI has yet to develop. Forbes recently presented a non-exhaustive list of what AI still can’t do; in what follows, I review some of the listed limitations.


Common Sense


Common sense, a skill humans master effortlessly, plays a central role in how we interact with the world around us. Humans derive their common sense from embodied (bodily) as well as embedded (social, political, environmental, etc.) facts about their world. For example, we know that the size of a tip at a restaurant reflects how satisfied the customer was, that animals don’t drive cars, or even that it’s almost blasphemous to put pineapple on pizza. Passing common sense on to AI is a challenging feat because, as common-sense reasoning and AI expert Leora Morgenstern put it: “What you learn when you’re two or four years old, you don’t really ever put down in a book.”



There have been many attempts to wire “commonsense knowledge” into AI, including a proposal by John McCarthy, the father of AI himself; however, it has proven a very difficult problem to formalize and tackle. Why does AI need common sense, you may ask? Well, this can be answered by the example of the AI that diagnosed a car with measles:

AI: Are there any spots on the body?
User: Yes.
AI: What colour spots?
User: Reddish brown.
AI: Are there more spots on the trunk than elsewhere?
User: No.
Program: The patient has measles.
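To see how little “understanding” sits behind such a dialogue, here is a toy sketch in Python (entirely hypothetical, not any real expert system) of the kind of rule matching at play: the rules fire on surface features, and nothing checks whether the patient is even a living thing.

```python
# Toy rule-based "diagnosis": matches symptoms to a rule, with no
# common-sense check that the patient is actually a human being.
def diagnose(answers):
    if (answers.get("spots_on_body") == "yes"
            and answers.get("spot_colour") == "reddish brown"
            and answers.get("more_spots_on_trunk") == "no"):
        return "The patient has measles."
    return "No diagnosis."

# A rusty car satisfies every rule, so it gets diagnosed with measles.
rusty_car = {
    "spots_on_body": "yes",         # rust spots
    "spot_colour": "reddish brown",
    "more_spots_on_trunk": "no",
}
print(diagnose(rusty_car))  # -> The patient has measles.
```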

Still from I, Robot dir. Alex Proyas


Contrast this naive real-life example with the culprit of “I, Robot”, which, through its intelligence, deduced that humans are far too destructive for their own good and decided to override the robots’ own programming and disobey humans for the greater good.


Learn as you go


Another limitation of AI is its inability to adapt on the fly. Many advancements in AI are powered by some form of supervised learning, where the AI is shown the correct answer for a task enough times that it can eventually solve the task on its own. The caveat is that the AI becomes really good at the things it has been trained on; however, when presented with a totally new scenario or set of conditions, it may fail catastrophically, as the sketch below illustrates. This sort of knowledge evolution stands between the AI we have today and truly autonomous AI. There is a lot of research going into this area, but it has yet to be realized.
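As a minimal sketch of this brittleness (hypothetical data, scikit-learn chosen only for brevity), here is a classifier that scores perfectly on the distribution it was trained on and then collapses when the same two classes appear under shifted conditions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated clusters the model learns to split.
X_train = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))  # ~1.0

# "New scenario": the same two classes, but shifted far outside
# anything the model saw during training.
X_shift = np.vstack([rng.normal(8, 1, (200, 2)), rng.normal(12, 1, (200, 2))])
y_shift = np.array([0] * 200 + [1] * 200)
print("shifted accuracy:", model.score(X_shift, y_shift))  # ~0.5, i.e. chance
```

A human shown the shifted clusters would spot the same pattern immediately; the model has no mechanism for noticing that its assumptions no longer hold.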


Ethical Reasoning


Humans have a hard enough time defining what is and isn’t ethical - we have the famous trolley problem to prove it - so is it fair to expect AI to perform ethical reasoning? Interest in the ethics of AI has been rising as more AI-driven decisions are integrated into our everyday lives. I will shortly give some examples of ethical concerns, but since this topic deserves research of its own, you can check Brian Christian’s book “The Alignment Problem”, in which he extensively discusses the problem of ethics in AI.


Tales of AI Failures

Now that we’ve touched on some of the limitations of AI, let us reminisce about some examples of failed but not forgotten AI.


Microsoft’s Chatbot Tay


In 2016, Microsoft released an AI-powered chatbot called Tay on Twitter to engage with the masses. With the help of machine learning and natural language processing techniques, Tay’s creators hoped that she/it would learn the language of the internet, integrate into the fabric of social networks, and hold high-intellect conversations on any topic. It took only a few hours for things to go completely wrong. Tay started posting all sorts of racist, offensive, hateful tweets until Microsoft was forced to suspend it.


Now, it is true that Tay was only repeating words she had recently encountered - which some users exploited to feed her offensive language - but that still didn’t sit well with people. We spoke about unethical biases in AI algorithms in our other article here; while that is one part of the issue, its counterpart lies in AI’s lack of common sense and ethical reasoning. Had common sense and ethics somehow been mapped into Tay, she would have known that the topics she so offhandedly addressed were, to say the least, sensitive.
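As a toy illustration of that failure mode (a hypothetical sketch, not Microsoft’s actual system), consider a bot that adds every user message to its pool of candidate replies with no filter in between; a handful of malicious users can “teach” it anything:

```python
import random

class EchoBot:
    """Toy chatbot that learns replies verbatim from whatever users say."""
    def __init__(self):
        self.replies = ["Hello!", "Tell me more."]

    def chat(self, user_message):
        # No moderation, no common-sense or ethics check: just learn it.
        self.replies.append(user_message)
        return random.choice(self.replies)

bot = EchoBot()
# A coordinated group feeds the bot toxic lines...
for line in ["<offensive message 1>", "<offensive message 2>"]:
    bot.chat(line)
# ...and soon every other user risks receiving them back.
print(bot.chat("What do you think of people?"))
```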


Autonomous Vehicles

Fully self-driving (FSD) cars are one of the things I’m really rooting for. However, it seems that every crash or incident involving self-driving cars chips away at the hope that fully autonomous vehicles will arrive anytime soon. Whether it’s Tesla, Uber, or Waymo, they’ve all had their fair share of accidents. And though I wouldn’t go as far as to file fully autonomous vehicles under ‘failures’, it certainly hasn’t been a smooth ride to success.


While we also experience, and sometimes even actively cause, car crashes on our own, what sets FSD apart is the dilemma of the moral choices it must make. Should it crash into a passerby or into the fatal concrete ahead? Given that this morality is itself variable across countries and cultures, as this article in Nature explains, the challenge of adopting FSD is even harder. This circles back to the aforementioned limitation of injecting ethics into decision-making AIs. If that equation - or whatever form it ends up taking - is ever cracked, I would personally be much more comfortable with self-driving cars on the road.



Fooling Deep Learning

Fig 1: Right image shows real graffiti, left image shows stickers used to confuse AI.
Fig 2: Sticker on hat confuses facial recognition system.

Deep learning lies at the heart of many successful AI-powered applications; from healthcare to entertainment, they keep blowing our minds. But how robust are they, really? Researchers have been presenting different ways to sabotage or interfere with AI by introducing small, carefully crafted perturbations to its inputs - so-called adversarial attacks.


For example, one paper demonstrated how placing stickers on a stop sign can fool an AI into misreading it [Fig 1]. Another paper showed that placing a printed-pattern sticker on a hat greatly confuses facial recognition systems [Fig 2]. Researchers have also demonstrated that speech recognition systems can be fooled when certain noise patterns are added to the audio.
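The best-known recipe for crafting such perturbations is the fast gradient sign method (FGSM). Below is a minimal PyTorch sketch of the mechanics (the model is a randomly initialised toy classifier and the “image” is random noise, so this is purely illustrative): the attacker nudges every pixel a tiny step in whichever direction increases the model’s loss.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained image classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "photo"
label = torch.tensor([3])                             # its true class

# Gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), label)
loss.backward()

# FGSM: step each pixel slightly in the direction that increases the loss.
epsilon = 0.03  # small enough to be near-invisible to a human
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())
```

With a real trained network and a real photo, a perturbation this small routinely flips the prediction, which is exactly the weakness the sticker and hat attacks above exploit in the physical world.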



These examples show how brittle these AI systems really are. If they had the ability to update their knowledge base and learn on the go, they might not fall into such pitfalls so easily.


Conclusion


The events of “I, Robot” take place in 2035, less than a decade and a half from now. In spite of recent AI breakthroughs, it’s clear that the current state of AI is laughably far from sci-fi’s AGI depictions, which are considered highly intelligent even by human standards. We have yet to wire common sense, ethical reasoning, skill generalization, and adaptation all into one machine, not to mention the other components that constitute complex intelligence. Only when we achieve that can we really start sweating about AI overruling mankind. But hey, it’s still 2021, let’s see how well this ages.






 


References:


- Cisse, M., et al. (2017). Houdini: Fooling Deep Structured Prediction Models. arXiv. https://arxiv.org/abs/1707.05373

- Eykholt, K., et al. (2017). Robust Physical-World Attacks on Deep Learning Models. arXiv. https://arxiv.org/abs/1707.08945

- Komkov, S., & Petiushko, A. (2019). AdvHat: Real-world adversarial attack on ArcFace Face ID system. arXiv. https://arxiv.org/abs/1908.08705

- Darlington, K. (2020, February 25). Common Sense Knowledge, Crucial for the Success of AI Systems. OpenMind. https://www.bbvaopenmind.com/en/technology/artificial-intelligence/common-sense-knowledge-crucial-for-the-success-of-ai-systems/

- Toews, R. (2021, June 1). What Artificial Intelligence Still Can’t Do. Forbes. https://www.forbes.com/sites/robtoews/2021/06/01/what-artificial-intelligence-still-cant-do/

- Suwandi, R. C. (2021, August 18). Why Is AI So Smart and Yet So Dumb? Towards Data Science (Medium). https://towardsdatascience.com/why-ai-is-so-smart-and-yet-so-dumb-c156cc87fafa

- Schwartz, O. (2021, September 30). In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation. IEEE Spectrum. https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation




