Consciousness. What is it? Where does it come from? Why do we have it? Is there even such a thing as an immaterial "consciousness" separate from our bodies and brains?

These questions aren't new; Descartes and his contemporaries were neither the first to ask them nor the last. "I think, therefore I am" has been upheld, challenged, dismissed, and debated from every angle of discourse, from philosophy to psychology to religious studies. With no definitive answer in sight, the search to understand consciousness has largely dwindled in favor of more pressing, society-benefiting problems.

What I find so interesting about consciousness (its source, primarily, but also its mechanisms) is that we've been arguing about it for so long, yet our understanding of it lags so far behind our understanding of nearly every other phenomenon. One could argue this shouldn't be a surprise: the problem of consciousness sits in the most obscure corner of psychology, the most subjective of the sciences.

But I still think we should study it. And a small minority of scientists have been, and are, continuing to do so. 

The notion of consciousness not only showing properties but also being a property is advanced by David Chalmers, who argues for the existence of conscious experience as its own fundamental entity. According to Chalmers (1995), everything has consciousness, just at different levels: humans have a higher level of consciousness than ants, which have a higher level than bacteria, which have a higher level than molecules, and so on. If this is true, then some might say that the property argument discredits the Hard Problem’s existence, because if, say, protons—structurally and functionally identical to one another in every way—display a certain “level” of consciousness, then how do they differ in their subjective gateways to experience, the very essence of consciousness? The heart of the Hard Problem lies in the subjective nature of consciousness itself. Any attempt to objectify consciousness therefore strips away its idiosyncrasy, reducing it to a physical “easy problem.”

First, it must be established that the Hard Problem is separate from “easy problems.” An opponent of this viewpoint, Daniel Dennett, cites the optical illusion of change blindness as empirical evidence that experience is simply a compilation of all that our senses are communicating (Dennett, 2007). At a TED conference in 2007, Dennett showed a video of a surgery behind color-changing squares; the audience did not notice the colors changing despite being told to “watch carefully” (Dennett, 2007). Dennett suggests that our failure to detect background changes within a scene shows that experience is shaped by what we are or are not attending to, an explicable, “easy problem” mental process. However, Dennett fails to explain why the audience chose to focus on the scene rather than the many colorful squares, both quite salient stimuli. Change blindness therefore only reinforces the Hard Problem, because the members of the audience clearly felt experiential connections to the surgery video, leading them to watch it. Moreover, Dennett’s use of change blindness to discredit the Hard Problem reflects a misinterpretation of the Hard Problem itself, making his explanation inapplicable. Dennett claims that we overcomplicate consciousness in a way analogous to our overestimation of visual processes (Dennett, 2007). But failing to detect visual change is not the Hard Problem; it is merely a failure of the “easy problems,” explicable through ordinary steps of visual processing.

Dennett’s example does, however, raise further questions about the contradictory nature of the Hard Problem debate. Despite individual differences in experiential reactions (various audience members may have felt indifferent, perturbed, or curious about the surgery), those reactions were all based on the same stimulus and led to the same displayed behavior: watching the surgery instead of the squares. Whatever the vehicle behind conscious experiential processes between individuals happens to be, it appears to operate fluidly across all of the mind’s hierarchical modules outlined by Jerry Fodor (1985): reflexes, perception, and cognition. Is it possible, then, for consciousness to have both inferential and non-inferential, and both encapsulated and non-encapsulated, properties? The audience members reflexively followed the schema of “going to a presentation” whether or not they were aware of this conformity. Still, their non-encapsulated beliefs and unique experiences of the world governed their interpretation of the presentation as a whole. This possibility of consciousness showing modular properties, while doing nothing to solve the Hard Problem, could at least partially explain why a solution to this endlessly contradictory debate has not been found.

An answer could lie in Gestalt theory: the composition of consciousness as a property, in a sentient being or system, must be more than the sum of its individual conscious particles (a gap-based phenomenon eerily similar to the structure of the Hard Problem itself). Instead of attempting to identify the physically immaterial (at least for scientific purposes), empirical methods can indirectly study the observed correlates of consciousness. Giulio Tononi bases his Integrated Information Theory (IIT) on the premise that the amount of consciousness in a being is correlated with the amount of integrated information generated by the system (Tononi, 2008). IIT allows this end product of all our sensory inputs—the integrated information, phi—to be measured empirically via an EEG-style electrode cap (Lang, 2013).
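The intuition behind phi—that an integrated whole carries information beyond what its parts carry independently—can be illustrated with a toy calculation. The sketch below is emphatically *not* Tononi's actual formalism (which partitions cause-effect structures over a system's dynamics); it merely uses total correlation between two binary units as a crude stand-in for "information generated above and beyond the parts." All names here are illustrative.

```python
import math

def integration(joint):
    """Total correlation (in bits) of a 2x2 joint distribution over two
    binary units: how much the whole system's state says beyond what its
    two parts say independently. A toy stand-in for integration, not phi."""
    # Marginal distributions of each unit.
    px = [joint[0][0] + joint[0][1], joint[1][0] + joint[1][1]]
    py = [joint[0][0] + joint[1][0], joint[0][1] + joint[1][1]]
    total = 0.0
    for i in range(2):
        for j in range(2):
            p = joint[i][j]
            if p > 0:
                # Compare the whole against the product of the parts.
                total += p * math.log2(p / (px[i] * py[j]))
    return total

# Two perfectly coupled units: the whole carries 1 extra bit.
coupled = [[0.5, 0.0], [0.0, 0.5]]
# Two independent units: nothing is integrated.
independent = [[0.25, 0.25], [0.25, 0.25]]

print(integration(coupled))      # → 1.0
print(integration(independent))  # → 0.0
```

On this crude measure, the coupled pair "generates" a bit of information that neither unit holds alone, while the independent pair generates none—mirroring, in miniature, IIT's claim that integration, not mere activity, is what matters.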

Of course, one must question the accuracy of Tononi’s projected phi value—is it really possible to take into account every single sensory input to consciousness? How encompassing (and accurate) are the TMS-EEG technologies that supposedly approximate the amount of integrated information? Once again, the question returns to the possibility that some information cannot be captured physically. Additionally, the notion that a conscious system must have a certain level of integrated information to count as conscious is at odds with Chalmers’s panpsychism. If Chalmers is correct and bricks and pine needles possess consciousness, which Tononi essentially defines as integrated information, then what about the whole building and the whole pine tree? Where does one integrated system end and the other begin? Yet some, such as Christof Koch, chief scientific officer at the Allen Institute for Brain Science in Seattle, are fans of both IIT and panpsychism who believe the two theories can be bridged. Koch cites the postulation that integrated information exists only at “local maxima” of the physical world, and that as long as the “causal relations among the circuit elements . . . give rise to integrated information, the system will feel like something” (Koch, 2014). Thus, only some things—those integrated according to a mysterious natural algorithm—have a phi value greater than zero. The real hard problem is: what, then, defines these causal relations?

Tononi’s method does not explain why or how conscious experience occurs, yet its proposed application—a way to detect and prevent anesthesia awareness—shows huge promise for medical advancement (Lang, 2013). IIT demonstrates that, even if it is never possible to explain the origin of the Hard Problem explicitly, studying consciousness can lead to practical, beneficial applications for society.

Although the arrows fired at the Hard Problem have missed their target, it is clear that the gaps between where they land and the true solution, whatever it might be, provide insight. As theorists and scientists continue to follow the trail left by these gaps, the overall outlook on consciousness has shifted from “Where does subjective experience come from?” to “What are the laws that make experience different?” It may not matter where qualia come from; if scientists continue to refine the laws governing integrated information, then consciousness will come closer to being a recognized property of our collective reality.


Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.

Dennett, D. (1991). Consciousness Explained. New York: Little, Brown and Company.

Dennett, D. (2007, May 3). The illusion of consciousness [Video]. TED Talks.

Fodor, J. A. (1985). Précis of The Modularity of Mind. The Behavioral and Brain Sciences, 8(1), 1-42.

Koch, C. (2014, January). Ubiquitous minds. Scientific American.

Lang, J. (2013, January). Awakening. The Atlantic.

Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto. Biological Bulletin, 215(3), 216-242.