on social media, the last week has been an interesting one for consciousness. Anil Seth's pushback against panpsychism has spawned some interesting discussion re: the legitimacy of consciousness research. and independently there's been some relevant discussion by serious AI researchers too.
to recap, a certain pop media article claimed that panpsychism, i.e. roughly the idea that simple creatures / plants may be conscious to some degree, is gaining academic credibility. i thought my response was a bit harsh, but one notable 'tweet' was Adrian Owen's, which openly called panpsychism 'nonsense'. hurray, Adrian~
in truth, much as i agree, i do worry a bit that this may become a war between the disciplines. whereas in neuroscience panpsychism is generally written off, in philosophy some serious people do take it seriously. some have now expressed the worry that they may get caught in the crossfire.
that's a point that i think some scientists without my unhealthy level of philosophical bent may not appreciate at first. why would anyone be so crazy as to think consciousness is everywhere? in a way, it all goes back to the hard problem. when it comes to qualia, i.e. the subjective, ineffable, qualitative, phenomenological aspects of conscious experiences, e.g. the redness of red when you see red.... that sort of thing is just not easy to model with the usual reverse-engineering approach. when we write programs to do things like humans do, we can look through the lines of code: where does it ever say that red has to look a certain qualitative way? why does it have to feel like anything at all? it's not clear if there is something it is like to be the program. why isn't color just a wavelength that happens to differ from the others? why does it have to feel this very specific way? if there is such a thing as subjective experience for a program, represented by some numbers, the program will work just fine if we swapped these numbers for red vs green. so long as such 'labels' are consistent, the program will work just fine. but our subjective experiences don't seem to work that way. and because it is so hard to pin down what the mechanisms / basis may be, some radical solutions like panpsychism are considered live possibilities by serious philosophers - maybe qualia are a fundamental property of physical stuff, so we can't explain them in simpler mechanistic terms.
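to make the label-swapping point concrete, here is a minimal toy sketch in python (entirely illustrative; the 600nm cutoff and the numeric codes are arbitrary stand-ins i made up, not anything from a real model):

```python
# toy illustration: a program whose colour 'experience' is just an internal code.
# swapping the codes consistently everywhere leaves its behaviour unchanged.

RED, GREEN = 0, 1  # arbitrary internal labels

def classify(wavelength_nm, red_code=RED, green_code=GREEN):
    # map a wavelength to the program's internal colour code
    # (the 600nm cutoff is a made-up simplification)
    return red_code if wavelength_nm > 600 else green_code

def respond(code, red_code=RED, green_code=GREEN):
    # map the internal code back to an outward response
    return "stop" if code == red_code else "go"

# original labelling
assert respond(classify(650)) == "stop"

# consistently swapped labelling: red <-> green everywhere
assert respond(classify(650, red_code=GREEN, green_code=RED),
               red_code=GREEN, green_code=RED) == "stop"

# the input-output behaviour is identical under the swap; nothing in the code
# says what the labels 'feel like', or that they feel like anything at all.
```

the swap is undetectable from the program's behaviour, which is exactly why qualia resist the usual reverse-engineering story.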
to some people, this problem about subjective qualia is a nonsensical problem. it's not even the kind of problem that scientists should be concerned with. to a certain extent, i sympathize. but at the same time, i think it is a legitimate thing - and maybe even an important thing - for philosophers to ponder. to some extent, their jobs are different in nature from ours. it's just good to keep the two businesses separate (in terms of evaluating what's right within each field).
but when philosophers working on these issues start to pretend that certain scientific theories support their worldview (e.g. panpsychism), then we get into trouble. as it turns out, the science itself doesn't support their views; it's just that some scientists endorse their views. and there's a world of difference between an empirically supported scientific theory and a theory endorsed by empirical scientists - the latter need not be a scientific theory at all. when philosophers cite such poor evidence as supporting their view, i fear it cheapens their philosophy, and they are asking for it to backfire.
so, all good. no need to worry about zombies for now (i.e. roughly, creatures who functionally behave like us but have no subjective qualitative experiences). let's assume they don't exist - which is, by the way, my tentative stance in our recent Science paper: qualia empirically correlate with certain neural computations in humans, so for now we should assume they do so in general.
but there's another worry, from the opposite end. when we try to do this as a Hard Science, do we end up studying consciousness at all? or, are we just studying good old perception or attention, but we call it conscious perception just to sound sexy and cool?
this is the question brought up in a great post by @neurograce, which i find really thoughtful and fair. in truth, that's something i worry about a lot too. i thought i would reply more directly on @neurograce's blog, but i think the discussion on twitter more or less took care of it, with some useful input from Ken Miller too, and @neurograce kindly reflected it all on her blog - which i highly recommend.
in essence, the answer is: yes, there is meaningful work to be done, even if we aren't concerned with qualia and zombies and all that. it is just a basic neurobiological question why some processes in the brain are conscious, in the sense that we can talk/think about them, and why some processes are not. a science of the mind is incomplete if we can't say what makes the difference. however, the danger is that we need to make sure that when we talk about unconscious processes, we aren't just talking about feeble, weak processes. otherwise, we would just be equating consciousness with the strength of perception, and in that case we could just talk about perception and do without the loaded c-word. but there is more to it than just strong perception: there are very powerful forms of unconscious perception, as in the neurological phenomenon of blindsight. conscious perception just seems to be a different sort of process. mapping out the difference is meaningful work. we don't have a perfect solution as to how to do this yet. there's not yet a consensus; it's ongoing, so @neurograce's skepticism & critiques are very much welcome.
we can likewise frame this as a challenge to AI researchers: can we characterize different forms of processing, each similarly powerful, but some of which allow the system to reflect upon and report them, while others are more opaque to such introspection? like Yoshua Bengio (see this), i do think we may be getting there.... if we are careful not to confound it with other psychological phenomena such as attention, language, depth of processing, etc. that is, we really need to make sure we are homing in on the critical mechanisms truly necessary and sufficient to make the difference between the conscious vs the unconscious.
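to caricature what that challenge might look like in code, here is a toy sketch (my own made-up framing, not Bengio's proposal or anyone's actual model): two systems that do the same first-order discrimination about equally well, but only one exposes a second-order readout of its own processing.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_order(stimulus, noise=0.5):
    # both systems share the same first-order discrimination process
    evidence = stimulus + rng.normal(0.0, noise)
    return evidence, evidence > 0.0

def opaque_system(stimulus):
    # acts on the evidence but exposes nothing about its own processing
    _, choice = first_order(stimulus)
    return choice

def reportable_system(stimulus):
    # same first-order process, plus a (crude) second-order readout
    evidence, choice = first_order(stimulus)
    confidence = abs(evidence)  # toy stand-in for an introspective report
    return choice, confidence

stimuli = rng.choice([-1.0, 1.0], size=1000)

# both systems discriminate about equally well...
acc_opaque = np.mean([opaque_system(s) == (s > 0) for s in stimuli])
acc_report = np.mean([reportable_system(s)[0] == (s > 0) for s in stimuli])
print(acc_opaque, acc_report)  # similar first-order performance

# ...but only the second can 'say' anything about its own states.
```

of course, a confidence readout alone is nowhere near sufficient for consciousness; the point is only that 'performance-matched but introspectively different' is a well-posed computational contrast, which is what keeps the question from collapsing into mere strength of perception.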
as the work becomes more rigorous and the concepts become better defined in cognitive/computational terms, can we just bypass the historical baggage and avoid the c-word altogether? i think we shouldn't, because there are already theories that are explicitly theories of consciousness, and some of them can be meaningfully arbitrated between. it would be odd for those doing this work to pretend we are not studying consciousness per se.
but above all, i also feel we can't sidestep it & pretend there isn't such a problem in the first place. we owe it to the rest of the field to fix this mess. people are going to talk about consciousness and related issues. as we see in this recent debate between experts on the fear circuit like Michael Fanselow and Joe LeDoux (click the links to see their respective arguments), these are genuine problems, with real clinical and practical implications. between worrying about lofty metaphysical hard problems and going vanilla to avoid being too controversial, i fear we have not really done our jobs. amid all the pop media noise, we have made it look like there are no serious scientific answers to these basic questions. it is time to do our part.
Thank you for posting this, and for your blog in general. I’ve found it very interesting and helpful for getting a better understanding of your field.
I thought that this criticism from @neurograce was very compelling:
“My issue with this distinction is that I believe it implicitly places verbal self-report on a pedestal, as the one true readout of perceptual experience. In my opinion, the verbal self-report… is merely a subset of the ways in which one would measure perceptual awareness.”
If this is right, then priming effects, forced-choice questions, and verbal reports like "I see an apple" would be different ways of measuring perceptual awareness. The verbal report is not a marker of subjective experience; rather, any action (including report) that is influenced by the presentation of the stimulus is a marker of some kind of perceptual awareness. If this is so, it isn't clear what would be gained by using the term "consciousness" to describe perceptual awareness as "subjective awareness" in the case of report, but not in the case of other markers of perceptual awareness.
Here, you reply: “It is just a basic neurobiological question why some processes in the brain are conscious, in the sense that we can talk/think about them, and why some processes are not.”
I would argue that verbal report such as "I see an apple" (or a yes or no response to the question "did you see an apple?", or a report like "I saw an apple with 4/5 confidence") is not a privileged sign of being able to talk/think about certain brain processes (or perceptual events, or first-order perceptions). In a case of blindsight, in which a subject confronted with a moving apple moves to catch it despite reporting that "I did not see the apple," the subject's experience might still be distinguishable (for them) from a similar experience in which there was no moving apple. The subject may have perceived a motion, or felt a solicitation to move; their subjective perception was different than it would have been if there were no apple. It is different from a healthy individual's subjective perception of an apple in a similar circumstance—the healthy individual might see the color of the apple, and see the apple as an object—but the individual who "sees the apple" with blindsight still undergoes a change in subjective experience, the experimental marker of which is their movement to catch the apple. Rather than a "powerful form of unconscious perception," a person seeing an apple with blindsight might experience a different form of subjective perceptual awareness. (Is this consistent with first-person accounts of blindsight that you are familiar with?) Similarly, a healthy individual who has an experience of seeing an apple with blindsight in experimental conditions (say, they are presented the apple after a mask, in an experiment like the one in your 2006 paper) might report not having seen an apple, despite guessing that an apple was presented, or guessing some feature of the apple, at above chance. When the subject's perception of the apple is very similar to their perception of no apple, and they can only make discriminations about their perception at slightly above chance, it might be difficult for the subject to discern that there was a change in their subjective experience when they are asked to report. Following your "criterion problem," the subject might have experienced a slight difference in subjective experience, but chose to describe it as "not seeing an apple" or "seeing an apple with x/y confidence," where another subject might have reported differently. I would think that they might have experienced a very slight difference in subjective perception that is difficult to put into words, but that they might be able to describe if pressed. (Although, I haven't asked them, or done this experiment myself. Does this seem like a real possibility to you?)
...
If I am asked to say whether I saw an apple, and feel inclined to say "I guess so," I would take this as a sign or result of undergoing a subjective experience that is slightly different from the subjective experience that would lead to feeling inclined to say "I definitely saw an apple" or "I definitely saw no apple." After reflecting carefully, I could think and talk about this indeterminate experience. If I see a hazy object that might be a person or a coat on a hanger down the hallway, I can describe the experience as a subjective perception of a maybe-person, maybe-coat, which is experienced with a positive indeterminacy. (This is Husserl's example.) I can think and talk about this perceptual experience, despite it not being a perception of a person, or a perception of a coat. Similarly, I can talk and think about the experience of seeing a maybe-apple, maybe-blackness. (This maybe-apple might amount to a brain process that represents a determinate apple and several other possibilities with certain weights, or whatever the correlate of seeing a maybe-apple would be.)
Another reply to @neurograce's criticism might be that there is a real distinction between perception that is unconscious and perception that is conscious, in the sense that conscious perception amounts to subjective, qualitative experience, which there is something it is like to undergo. Putting verbal report on a pedestal would be justified if it were a marker of conscious perception in this sense. (Is your term "subjective perception" intended to capture this distinction, between states that are non-subjective or non-phenomenal, and states that are?)
But, I think an account in which verbal report and forced-choice behavior are two of many ways to express subjective experience does better justice to phenomenology than does an account in which there should only be subjective experience if there is (or could be) verbal report. As I described above, in subjective experience which has positive indeterminacy, such as seeing a maybe-apple/maybe-blackness, a subject might not go on to report their experience as such (they might say "I saw no apple," or "I don't think I saw an apple"), despite having undergone a change in subjective experience. Following your example from the binocular rivalry paper, the subject who sees the apple with blindsight might say "I guess… I see an apple." This indefinite report is a sign of an indeterminate but distinct subjective perceptual experience. One subjective perceptual experience might induce me to report that "I see an apple," another might induce me to say "I guess… I see an apple." My perception of a priming stimulus might be characterized by a subtle inclination towards acting in a certain way, which might be difficult to describe, and which more often I would not pick up on at all. To give a fantastical example, if I had developed to be incapable of describing my perceptions in words or in thought, I could still undergo subjective perceptual experiences of apples, and express these experiences by reaching out to eat them. On this account, however a perception leads me to respond, my responding is a sign that there was a change to my subjective world. Does this account sound plausible?