How Siri Works: Your Guide to Interactive Cognitive Computing
Have you ever wondered exactly how your virtual assistants (VAs) work? How do they respond to your queries, set your reminders, and play your favourite music? When we asked Siri, she was just as elusive as you’d expect: she really couldn’t say. But I can try.
So, how does she work? Does she have ‘a brain’? Will machines like Siri ever rise to the level of human intelligence? These pertinent questions will guide the majority of this article. I set out to explain the mechanisms that underpin and enable the functionality of cognitive systems like Siri, and to outline what contemporary cognitive computing research has accomplished (spoiler: machines are not as fully cognitive as we are just yet).
Reaching fully fledged machine consciousness, many researchers argue, requires at least loosely modelling the machine on the architecture of the human brain and its neural networks. Whether this is entirely possible has drawn strong critiques and even stronger hopes for success.
Developing such a cognitive system requires various artificial intelligence (AI) technologies that adapt to and make sense of information rapidly. These technologies include machine learning (ML), data mining, pattern recognition, and natural language processing (NLP), and together they are used to create an artificial neural network (ANN).
The brain of a cognitive system is the neural network. The artificial neural network mimics the central nervous system of humans, allowing the computer to learn and make decisions in a humanlike manner. ANNs behave like interconnected brain cells, constantly communicating with one another and informing outputs accordingly. Generally, cognitive computing is used to assist humans in decision-making processes.
Data is the most important factor that enables a neural network to mimic a human brain. To make use of the data they are fed, ANNs employ many layers of mathematical computation (Fig. 1). The input layer receives the information and data through which the neural network learns about the outside world. This data then passes through multiple hidden layers, where it is transformed into actionable information that the output layer can utilize.
In most cases, neural networks are fully connected from one layer to the next, and these connections are all weighted: the higher the weight, the greater the influence one unit has on another. The neural network continuously learns and relearns as information passes from one layer to the next, much as the human brain constantly transmits and processes information through its neurons.
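To make the layered, weighted flow described above concrete, here is a minimal sketch of a forward pass in plain Python. The network sizes, weight values, and inputs are arbitrary, chosen only to show the mechanics; real systems use far larger networks and learned weights.

```python
import math

def forward(inputs, weights_hidden, weights_output):
    """One forward pass through a tiny fully connected network.

    Each hidden unit sums its weighted inputs and applies a sigmoid
    activation; the output layer does the same with the hidden values.
    A higher weight means one unit influences the next more strongly.
    """
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

    # Hidden layer: every input connects to every hidden unit.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in weights_hidden]

    # Output layer: every hidden unit connects to every output unit.
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden)))
            for ws in weights_output]

# Two inputs -> three hidden units -> one output (values are illustrative).
w_hidden = [[0.5, -0.2], [0.8, 0.1], [-0.3, 0.9]]
w_output = [[0.4, -0.6, 0.7]]
result = forward([1.0, 0.5], w_hidden, w_output)
```

Training a network amounts to repeatedly adjusting those weights so the outputs move closer to the desired answers, which is the "learning and relearning" the paragraph above refers to.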
Key features of a cognitive system
In line with the human-brain analogy, developing a cognitive computing solution that is accurate and efficient requires meeting the most basic standards of human cognition. While human cognition is varied and comprises a wide set of principles and characteristics, there are key tenets that make up the core of these standards. The following are agreed-upon characteristics of cognition that are built into cognitive computing systems:
1- Adaptive: Systems must learn to adapt as information and data change and evolve. They are programmed to feed on dynamic data in real time, and to adjust their computation accordingly.
2- Iterative and Stateful: Systems must be able to define a problem by finding new sources of data if a challenge is ambiguous or unclear. This problem-solving feature is programmed to be iterative, with the system remembering the solution in order to apply it to future challenges.
3- Interactive: Elements in the system must interact bi-directionally. The system should be able to understand human input, and provide output using NLP and deep learning algorithms.
4- Contextual: Systems must be able to identify, contextualize, and extract important elements when analysing a challenge, such as meaning, syntax, applicable regulations, and user profiles.
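The adaptive and iterative/stateful traits above can be sketched in a few lines of Python. This is a toy illustration under invented names, not a real cognitive architecture: the class nudges an internal parameter toward new evidence (adaptive) and remembers past solutions so repeated problems are solved only once (iterative and stateful).

```python
class CognitiveSketch:
    """Toy model of two traits of a cognitive system:
    adapting to new data, and remembering past solutions."""

    def __init__(self):
        self.solutions = {}   # stateful: remembered answers to past problems
        self.weight = 0.5     # adaptive: an internal parameter we tune

    def adapt(self, observed, predicted, rate=0.1):
        # Adaptive: nudge the parameter toward newly observed evidence.
        self.weight += rate * (observed - predicted)

    def solve(self, problem, compute):
        # Iterative and stateful: reuse a remembered solution when the
        # same problem reappears; otherwise compute and remember it.
        if problem not in self.solutions:
            self.solutions[problem] = compute(problem)
        return self.solutions[problem]
```

For example, calling `solve("abc", len)` twice computes the answer only once; the second call is served from memory, mirroring how a cognitive system applies a remembered solution to a recurring challenge.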
Developing systems that utilize the features above alongside the correct data brings cognitive computing systems closer to mimicking the cognitive functions of the human brain. Although cognitive systems are constantly evolving, they already play a crucial role in our everyday life; they are used in customer targeting, VAs, autonomous vehicles, and fraud detection, to name a few.
Examples of Current Cognitive Systems
Presently, the skills achieved by cognitive systems (such as voice, text, and visual recognition; clustering and classification; and similarity matching) make for highly capable VAs. VAs like Siri and Alexa are thus excellent examples of the application of cognitive computing. They can guide, learn, and hold human-like conversations. They work by gathering data whenever we speak; the data is then passed to neural networks for the system to learn and develop.
This allows VAs to mimic and facilitate ‘basic’ human behaviour like telling jokes, recommending music, setting your alarm, or controlling your smart home. VAs have, for the most part, been wildly successful; so much so that the global market for digital assistants amounts to $3.6 billion, and it is expected to grow to $73.2 billion by 2030.
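Once speech has been transcribed to text, a VA must decide what the user wants (the "intent") and act on it. Real assistants use trained NLP models for this step; the keyword rules, intent names, and responses below are invented here purely to show the shape of the pipeline.

```python
# Hypothetical intents and keyword rules; production assistants use
# trained classifiers, not keyword matching.
INTENTS = {
    "set_alarm": ["alarm", "wake"],
    "play_music": ["play", "music", "song"],
    "tell_joke": ["joke", "funny"],
}

def classify(utterance):
    """Return the first intent whose keyword appears in the utterance."""
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return "unknown"

def handle(utterance):
    """Map a classified intent to a canned response."""
    responses = {
        "set_alarm": "Alarm set.",
        "play_music": "Playing music.",
        "tell_joke": "Why did the neuron cross the road?",
        "unknown": "Sorry, I didn't catch that.",
    }
    return responses[classify(utterance)]
```

In a full system, the `handle` step would dispatch to real services (a music player, an alarm API), and the classifier's mistakes would feed back into the neural network as new training data, which is the learning loop the paragraph above describes.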
Spatial Cognitive Evaluation and Training
An important field of research in cognitive computing and AI is Spatial Cognitive Evaluation and Training (SCET). SCET is the process through which a computer system is able to understand its spatial surroundings and its position relative to other objects (Zhou et al.); it is mainly used as a medical diagnostic and rehabilitation tool for patients with mild cognitive impairments. This process involves an understanding of the concepts of direction, distance, and location, thereby allowing a computer system to be cognizant of its surroundings in much the same way human beings are.
A system such as this is implemented through the deployment of spatial agents. Used to simulate human behaviour in open environments, spatial agents are operationalized through relevant neural network architectures. The agent, itself a system, perceives visual and auditory cues within virtual spatial environments.
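The core spatial concepts involved, direction, distance, and location, can be illustrated with a minimal 2-D sketch. This is only the geometric kernel of spatial awareness; real SCET systems combine it with rich visual and auditory perception.

```python
import math

def relative_position(agent, obj):
    """Distance and bearing from an agent at (x, y) to an object at (x, y).

    Bearing is in degrees, measured counter-clockwise from the
    positive x-axis, so an agent can reason about both how far
    away an object is and in which direction it lies.
    """
    dx, dy = obj[0] - agent[0], obj[1] - agent[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))
    return distance, bearing
```

An agent repeatedly evaluating this for every object around it maintains exactly the awareness of direction, distance, and location described above, updated as the agent or the objects move.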
The advancement of these types of SCET systems - and their application within the field of cognitive computing - has the potential to bring us a step closer to full cognitive computing awareness.
Widespread adoption (and limitations) of Cognitive Computing
The adoption of cognitive computing has many advantages, such as improving customer engagement, service, and experience. Depending on the type of AI adopted in the workplace, it can also benefit employees and co-workers: cognitive machines and systems can aid employees, enhancing productivity and the quality of work and delivering better, faster results. Beyond the augmented workforce, cognitive systems can also help with predictive asset maintenance, automated replenishment, and accurate data analysis.
However, the usage of cognitive computing is not without its disadvantages and limitations. For example, because a great deal of data is needed to build a cognitive system, that data can be exposed if a leak occurs. This was brought to light earlier this year when Facebook was accused of a scandalous data breach. Change management is also a challenge in organizations undergoing tech-driven transformation when cognitive computing is at play, because employees are often concerned about being replaced by AI.
This article covered only the basics of cognitive computing: a broad definition of cognitive systems and how businesses can use them to their advantage. The cognitive computing market is expected to reach $77.5 billion by 2025, reflecting its growth and rapid development. It is predicted that cognitive computing will transform businesses, overhauling their operations and enhancing their overall performance.