Aliah Yacoub

Consciousness 102: Can Science Explain Everything?

Can science explain consciousness? If it can, then super-intelligent, conscious machines are just one clever hack away.



For a while now, we’ve been trying to find out whether machines can be like us.

To assess the likelihood of AGI (artificial general intelligence, i.e., machines that think and act like humans), we first had to answer how and why humans think and act the way they do.


To do that, in our last article, we introduced you to the idea that our consciousness is made up of qualia that are completely subjective. As discussed, qualia are the properties of subjective experience: the what-it-is-like to be the subject of some kind of mental state. For example, when you’re in love and feel like “el hayah ba2a lonha bamby” (life has turned pink).


With that overview out of the way, we can finally sink our teeth into the real deal and ask this: Can the wholly subjective aspect of the mind ever be sufficiently accounted for by the objective methods of science, and if so, how?


There are many questions that science has yet to fully answer about consciousness. For example:

1. What are the critical brain regions for consciousness?
2. What are the mechanisms of general anesthesia?
3. What is the self?
4. What determines experiences of volition and ‘will’?
5. What is the function of consciousness? What are experiences for?
6. How rich is consciousness?
7. Are other animals conscious?
8. Are vegetative patients conscious?


In this article, I will recap two contemporary theories of consciousness that try to answer some of those questions.


Even though the question of consciousness remains hotly debated, with no universal consensus and no complete theory in sight, the following two theories have attracted more support than most.


1) Chalmers’ Hard Problem of Consciousness


In his 1995 paper ‘The Puzzle of Conscious Experience’, David Chalmers distinguishes two kinds of problems concerning consciousness: the easy and the hard. By the ‘easy’ problems, which are by no means trivial, Chalmers refers to questions like how we discriminate between different visual stimuli, or how we integrate several sensory inputs into a coherent train of thought. These questions are challenging right now, but progress in neuroscience might eventually yield answers.


According to Chalmers, those ‘easy’ questions are related to consciousness but do not get at the heart of the ‘hard’ problem: why are these processes accompanied by subjective experience at all? On this view, consciousness is an irreducible phenomenon that cannot be functionally analyzed, and that is precisely what makes the problem ‘hard’.



Using the laws of physics, we might eventually be able to explain the objective neural correlates of consciousness (NCC) and solve the easy problems. But what about the hard one?


In line with the notion that physics provides a complete catalogue of the universe’s laws and fundamental features, Chalmers proposes treating consciousness as a fundamental feature: one that, just like time, mass, and electromagnetic charge, cannot be further explained but simply exists and obeys certain laws. Starting from this fundamental feature, we could begin to formulate laws of consciousness and bridge the gap between ‘how’ our consciousness works and ‘why’ it works the way it does.


This way, one could distinguish between properties of consciousness at the macro- and microphysical levels. Chalmers proposes the idea of ‘constitutive panpsychism’. According to this theory, our level of consciousness (i.e., macro-experience) is made up, partly or wholly, of a sum of micro-experiences. Macro-physical events, such as the utterances of our consciousness, are governed by the fundamental micro-physical properties that underlie them. These micro-physical properties would comply with the laws of consciousness, fitting the framework Chalmers painted in his 1995 paper.


Albeit highly speculative, if successful, this theory could provide a paradigm in which the materialistic point of view on consciousness can prevail.


He says, “[…] there is no reason they [laws of consciousness] should not be strongly constrained to account accurately for our own first-person experiences, as well as the evidence from subjects’ reports.”


2) Tononi and Koch’s Integrated Information Theory

While Chalmers constructed his panpsychism starting from the lowest, fundamental level, Giulio Tononi and Christof Koch started theirs by looking at their own subjective experiences. In their Integrated Information Theory (IIT), they begin by postulating axioms that every conscious experience satisfies. The five axioms they came up with are intrinsic existence, composition, information, integration, and exclusion.

Consciousness has intrinsic existence because we have a first-person experience of it, hence it must exist. It also has a composition: we experience a structured world out there. We can distinguish a book from an apple and blue from red within the same experience. This experience is specific enough to provide us with information, the third axiom. Our conscious experience integrates the information provided into a coherent irreducible scene. When we see a blue book, we cannot reduce this to seeing a colorless book and the color blue. Lastly, our conscious experience is definite, meaning there is an exclusion of information based on the content and spatio-temporal grain of our experience. In other words, our experiences have a starting and ending point.


What does that mean?


Essentially, IIT holds that the elements of a complex in a given state compose higher-order mechanisms that specify concepts. That is, when seeing the color red, certain elements in the brain (the complex in the state of seeing ‘red’) are active in a particular interconnected pattern (the higher-order mechanisms) specific to the sensory input (the concept). Together, these concepts form a conceptual structure, otherwise known as a quale (the singular of qualia!): a specific mental state connected to the sensory input from ‘out there’. This quale or conceptual structure is maximally irreducible intrinsically, meaning that no more and no fewer parts of the conscious structure than necessary are used to create it. The size of the maximally irreducible conceptual structure that yields the quale is denoted Φmax. A system with a high Φmax can generate more information, thanks to its high level of information integration, and vice versa.


According to the IIT, consciousness emerges when the integration of informative elements in the system achieves a state that expresses more information than those elements did independently.
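In IIT proper, Φ is defined over a system’s cause-effect structure and is notoriously expensive to compute. Purely as an illustration of the underlying intuition (an integrated whole carries information that its parts, taken independently, do not), here is a far simpler quantity, multi-information, computed for a toy two-unit system. To be clear: this is not IIT’s actual Φ, and the data are invented for illustration.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (in bits) of an empirical distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def multi_information(samples):
    """Sum of marginal entropies minus the joint entropy: how much
    more the whole system 'says' than its units taken independently."""
    joint = Counter(samples)
    n_units = len(samples[0])
    marginals = [Counter(s[i] for s in samples) for i in range(n_units)]
    return sum(entropy(m) for m in marginals) - entropy(joint)

# Two coupled units: unit B always copies unit A (highly integrated).
coupled = [(0, 0), (1, 1)] * 50

# Two independent units: every combination is equally likely.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25

print(multi_information(coupled))      # → 1.0 bit
print(multi_information(independent))  # → 0.0 bits
```

The coupled system scores 1.0 bit because knowing either unit tells you the state of the other; the independent system scores zero. Real Φ measures something subtler (the irreducibility of a system’s cause-effect power), but it shares this flavor of whole-versus-parts comparison.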





Using Φmax as a rule of thumb, one can make several predictions and explanations about consciousness. On this view, one would expect the fading of consciousness during sleep, intoxication, or anesthesia to be the result of a decreasing Φmax. This prediction has been confirmed by research combining electroencephalography with transcranial magnetic stimulation during deep sleep, general anesthesia, and other states of low consciousness. Additionally, IIT offers an explanation of why our cortex gives rise to consciousness while our cerebellum does not. The cerebellum contains more neurons than the cortex but is organized into smaller modules, which ultimately yields a smaller Φmax than that of the cortex. Looking at these examples, IIT can provide a clear account of the organization of the neural correlates of consciousness.


In doing so, IIT provides a framework for answering the easy problems, but does it fully answer the hard problem of subjective experience?


Why is this important for AI? A reminder:


AI development rests on reproducing human behavior and imitating human intelligence (and in some cases, surpassing it!). As you may recall, this is why neural networks are used in AI to mimic brain architecture.
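To make that “mimic brain architecture” point concrete, here is a single artificial neuron sketched in Python: a weighted sum of inputs squashed through a sigmoid activation, a crude nod to how a biological neuron fires more strongly as its inputs accumulate. The weights and inputs below are invented for illustration; in a real network they would be learned from data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation that maps to (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Hypothetical inputs and weights, chosen only to show the call.
activation = neuron([1.0, 0.5], [0.8, -0.4], 0.1)
print(activation)  # ≈ 0.668
```

A network stacks many of these neurons into layers, and “learning” adjusts the weights so the whole network classifies or predicts well; that loose analogy to interconnected neurons is what the brain-inspired label refers to.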

Understanding how humans are capable of, for instance, classifying and integrating data or making decisions based on subjective qualia is extremely important. If we were to finally find out all the ways in which different parts of the brain, and possibly non-brainy parts, interact to give rise to consciousness, then we could come up with a complete theory of consciousness that could be carried over to the AI world.


In other words, if we find out how our consciousness works, we can make conscious machines.

Do you think it is possible? Will the singularity happen?

Let us know what you think! You can reach us on FB, LI, and IG.





References:

  • Chalmers, D. J. The puzzle of conscious experience. Scientific American (1995).

  • Chalmers, D. J. Panpsychism and Panprotopsychism. The Amherst Lecture in Philosophy, Lecture 8 (2013). http://www.amherstlecture.org/

  • Rees, G., Kreiman, G. & Koch, C. Neural correlates of consciousness in humans. Nat. Rev. Neurosci. 3, 261–270 (2002). https://doi.org/10.1038/nrn783

  • Seth, A. Consciousness: Eight questions science must answer. The Guardian (2012). https://www.theguardian.com/science/2012/mar/01/consciousness-eight-questions-science

