History of AI: A Perspective
Last September, Egypt climbed 55 places in the Government Artificial Intelligence (AI) Readiness Index 2020, an index that measures how ready governments are to implement AI in the delivery of public services to their citizens, such as healthcare, education, and transportation. Shortly thereafter, on July 4th, 2021, the Egyptian Ministry of Communications and Information Technology (MCIT) announced the launch of a dedicated platform for artificial intelligence at www.ai.gov.eg, the first official governmental portal for all things AI.
As the country takes bold steps towards creating an AI Industry in Egypt that includes “the development of skills, technology, ecosystem, infrastructure, and governance mechanisms to ensure its sustainability and competitiveness,” here at techQualia we are committed to facilitating that by helping you understand the state of AI in the world and, more specifically, in Egypt.
For that reason, we have decided to take a look back at the history of AI and offer an informative perspective on where we are now.
Today, AI and machine learning are often perceived by the general public as relatively new technologies that have only recently moved from the conceptual stage to actual application. This belief is both right and wrong: it is true that the creation of stand-alone AI products, especially at a commercial scale, only happened in the last decade or two. However, the increasing prevalence of AI in our everyday life is based not on advances in AI techniques themselves, but rather on advances in the technologies that surround them, namely data generation and processing power.
In reality, when it comes to the guiding conceptual assumptions and the approaches used to realize them, the AI of today is only a continuation of advances achieved over the past two (in some cases, even three) decades, and of conceptual approaches that date back to the 1950s. In this article, I will develop that claim by offering an overview of the history of AI.
AI, BACK THEN AND TODAY:
Currently, AI’s biggest success is “Deep Learning”. Deep Learning refers to neural networks with many stacked layers (hence, “deep”) that are trained using backpropagation, a supervised learning mechanism. The idea of the deep neural network (DNN) is to mimic and model the structure of the brain in electronic form, “whereby artificial neurons draw their own connections during a self-learning process.” In backpropagation, human trainers supply labeled examples, such as a photo or a voice recording tagged with its correct category, and the algorithm then continually adjusts and readjusts the network’s internal connection strengths, or “weights”, layer by layer, until the network can perform the required intelligent task or function successfully.
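To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python (the network size, learning rate, and XOR task are my own choices for demonstration, not drawn from any real system): a tiny one-hidden-layer network is trained by backpropagation on labeled examples, and its error shrinks as the weights are adjusted layer by layer.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy labeled data: the XOR function, a classic task that a single-layer
# network cannot learn but a multi-layer network can.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 3  # hidden units; "deep" networks simply stack many such layers
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5  # learning rate (illustrative value)

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def mean_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial_loss = mean_loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: push the output error backward through the
        # layers, adjusting each weight in proportion to its contribution.
        d_out = (y - t) * y * (1 - y)
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])  # use pre-update w2
            w2[j] -= lr * d_out * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_out
final_loss = mean_loss()
```

After training, the average error is lower than at initialization; real DNNs apply the same idea to millions of weights and examples.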
However, that conceptual approach is far from novel. The neural net method was first introduced in the 1950s and popularized in the 1990s. The idea of creating AI by modeling the brain's learning power rather than the mind's symbolic representation of the world drew its inspiration from neuroscience in general, and from the work of D.O. Hebb in 1949 in particular. Against the view that intelligent behavior could be formalized symbolically, Hebb suggested that a mass of neurons could learn if the “simultaneous excitation of neuron A and neuron B increased the strength of the connection between them” (Dreyfus, 1992).
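Hebb's suggestion translates into a strikingly simple update rule, often summarized as “neurons that fire together wire together.” A minimal sketch (the learning rate and the activity pattern below are illustrative values of my own):

```python
# Hebb's 1949 rule: the connection between neurons A and B is strengthened
# in proportion to their simultaneous activity.

eta = 0.1   # learning rate (illustrative value)
w = 0.0     # strength of the A-B connection

# Each pair is (activity of A, activity of B) at one moment in time.
activity = [(1, 1), (1, 0), (0, 1), (1, 1)]

for a, b in activity:
    w += eta * a * b  # grows only when both neurons fire together
```

Here only the two moments of simultaneous firing strengthen the connection, leaving a final weight of 0.2.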
Later, Frank Rosenblatt, creator of the perceptron (1958), argued that instead of symbolic AI, AI should be attempting to “automate the procedures by which a network of neurons learns to discriminate patterns and respond appropriately.” This approach departs from the classic, symbolic AI insofar as it does not depend on a rationalistic representation of a world that is structured according to fixed, rational rules; rather, it views the mind as an enormous network, composed of myriads of parallel processing “mental agents” which allow the network to react to arbitrarily complex problems (Minsky, 1985).
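Rosenblatt's procedure for learning to discriminate patterns can likewise be sketched in a few lines. The toy task and parameter values below are my own illustrative choices: a perceptron learns a linearly separable pattern (logical AND) purely by adjusting its connection weights after each mistake, with no symbolic rules anywhere.

```python
# Illustrative sketch of Rosenblatt's perceptron learning rule (1958):
# weights change only when the network misclassifies an example.

def step(z):
    return 1 if z >= 0 else 0  # threshold activation

# Toy pattern-discrimination task: logical AND (linearly separable).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w = [0.0, 0.0]  # connection weights
b = 0.0         # bias (threshold)
lr = 0.1        # learning rate (illustrative value)

for _ in range(20):  # a few passes over the data suffice here
    for x, t in data:
        y = step(w[0] * x[0] + w[1] * x[1] + b)
        err = t - y  # nonzero only on a misclassification
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err
```

After a handful of passes the perceptron classifies all four inputs correctly; Minsky and Papert later showed that such single-layer networks cannot learn patterns like XOR, which is part of why the connectionist program stalled until multi-layer training became practical.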
The symbolic AI approach that artificial neural networks emerged in opposition to is traditionally referred to as GOFAI: Good Old-Fashioned Artificial Intelligence. This type of AI development was conceptually based on the idea that the human mind is similar to an information-processing unit that operates on explicit rules. For quite some time, GOFAI dominated the AI world, until its many failures prompted researchers to dismiss it and reconsider neural-network or “connectionist” approaches, which had been too difficult to implement successfully when they were first introduced.
Thus, starting in the 1990s, the neural network approach of the 1950s was revived, and it is still in use today. The difference is one of scale. Whereas in the 1990s it took days to train a neural net to recognize handwritten digits from tens of thousands of examples, today, thanks to far more advanced data generation and processing power, DNNs can be trained on millions of subjects and objects, and backpropagation is used to train the net to recognize patterns and automate tasks. These enabling technologies, that is, the “exponential growth of computational power combined with the massive accumulation of data,” are what revived the “connectionist idea” in AI: an approach that had been present in AI research since the 1950s.
What caused the conceptual shift, and is the new approach invulnerable to critique?
If you type “History of Artificial Intelligence” into Google, you will find that the first automatically generated answer is an excerpt from ‘A Brief History of AI’ that reads:
“The beginnings of modern AI can be traced to classical philosophers' attempts to describe human thinking as a symbolic system. But the field of AI wasn't formally founded until 1956, at a conference […] where the term ‘artificial intelligence’ was coined.”
Ordinarily I would take this as an opportunity to prove (at length) that philosophy and AI have always been intertwined, but I will spare you this time. In this article, my interest resides only in offering a perspective on the history of AI that also sheds light on the inextricable link between theories of mind and AI development.
As mentioned earlier, the beginning phases of AI were marked by the attempt to replicate human cognition as though it were an information-processing unit that could be formalized. This symbolic approach of GOFAI was considered a failure by 1992, when the “people in AI research,” as philosopher Hubert Dreyfus calls them, “gradually, grudgingly, very slowly…came to realize that the situation was hopeless” when it came to programming what they called ‘common sense knowledge’.
Dreyfus, author of the famous books “What Computers Can’t Do” (1972) and “What Computers Still Can’t Do” (1992), thought that any GOFAI claims and hopes of success for “progress in models for making computers intelligent are like the belief that someone climbing a tree is making progress toward reaching the moon” (Dreyfus, 1989).
He was also an early, and for a long time lone, AI critic to argue for feed-forward neural networks, well before they came to light in the AI world. It is for this reason that many consider his slashing critiques the catalyst for the conceptual shift in AI. That being said, many in the field still argue that DNNs remain vulnerable to some of the critiques to which GOFAI fell prey.
What were those exact conceptual and technical obstacles faced by GOFAI and in what way are they still prevalent in today’s AI development? Are DNNs a mere GOFAI redux? Do you know why some are saying AI today is heading into a new ‘winter’?
Studying the history of AI can help us answer those questions. It is customary to seek refuge in studying history whenever we are grappling with today’s complex questions and dilemmas. AI is no exception; studying its history allows us to develop better understandings of where we are and how to move forward—to examine the ways in which the past has shaped (and continues to shape) our present moment.
In this article, we have only briefly surveyed the three main, overlapping phases of AI that the backward glance of a historian can (somewhat arbitrarily) pinpoint: the Symbolic Approach or GOFAI, the arrival of the Neural Net revolutionaries, and our present moment as marked by DNNs.
In our next article, we will take you through the hardware revolution behind AI evolution. But don’t worry, we will soon get to the critique of AI today and the question of whether we’re going through a third AI winter.
A full, exhaustive recapitulation of the history of AI warrants a study of its own that draws from insights in the field of computer science and mathematics but also contributions from neuroscience and philosophy of mind. Since that goes well beyond the scope of this article, please consider this a mere perspective on it rather than a comprehensive account of the history.
Daniel Crevier said that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier." In Crevier 1993, p. 125.
Some researchers attribute the shift away from symbolic to “sub-symbolic” methods in connectionism (which attempt to approach intelligence without specific representations of knowledge) directly to Dreyfus. See Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann. ISBN 978-1-55860-467-4.
Apprich, Clemens. 2018. “Secret Agents: A Psychoanalytic Critique of Artificial Intelligence and Machine Learning,” Digital Culture & Society 4(1): 29–44.
Dreyfus, Hubert L. 1972. What computers can't do: a critique of artificial reason. New York: Harper & Row.
Dreyfus, Hubert L. 1992. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, Mass: The MIT Press.
Dreyfus, Hubert L., 2002. “Intelligence without representation – Merleau-Ponty's critique of mental representation. The relevance of phenomenology to scientific explanation.” Phenomenology and the Cognitive Sciences 1, 367–383.
Gonzalez, Wenceslao J., 2017, “From Intelligence to Rationality of Minds and Machines in Contemporary Society: The Sciences of Design and the Role of Information,” Minds & Machines, 27:397–424.
Heaven, Douglas. 2019. “Deep Trouble for Deep Learning,” Nature 574, 163-166 doi: https://doi.org/10.1038/d41586-019-03013-5
Minsky, Marvin. 1985. The Society of Mind. New York: Simon & Schuster.
Mitchell, Tom. 1997. Machine Learning. New York: McGraw Hill.