Thursday, May 28, 2020

Open letter to NIH on Neuroethics Roadmap (BRAIN Initiative) 2019

a while back we sent the below letter. some people asked if they could refer to it. so i thought i would paste it here, so they can link to it. it was an open letter anyway

***

May 14, 2019

Dear colleagues,

Re: Neuroethics needs a balance between theory development and rigorous experimental research on consciousness

We write in response to the call for comments on the Neuroethics Roadmap, which is part of the NIH BRAIN Initiative. We are pleased to see that research on consciousness is receiving recognition. As a group of active researchers in the relevant fields, we hope to point out some potential caveats.

The Roadmap emphasizes the need for theoretical and mathematical models of consciousness. However, current theories are tentative and limited. To make progress, we need experiments designed to identify the neural mechanisms distinguishing conscious from unconscious processes in humans, in whom consciousness can be assessed via subjective reports.

We believe that active research into the neural signatures of consciousness in these relatively clear cases is crucial for building and testing models of awareness in non-human primates and simpler animals, or other more controversial cases. For this reason, for example, it is too early to view the Cambridge Declaration on Consciousness, which states that some non-human animals without a neocortex are conscious, as reflecting a scientific consensus.

Relatedly, the Integrated Information Theory (IIT) of consciousness features prominently in the references cited in the Neuroethics Moonshot section of the Roadmap. While there is support for the association between consciousness and the complexity of active brain networks in humans, some have taken this relationship to generalize lawfully to all physical systems. We do not consider this interpretation to be scientifically established or testable at the moment. We note that alternative theoretical approaches are already referenced indirectly in the Moonshot section (Reference 8); other useful reviews are also available.

Unfortunately, despite their clear relevance to various areas of brain and mental health research, empirical projects focusing on the brain mechanisms distinguishing between conscious and unconscious processes in awake individuals currently do not receive adequate funding. To test theories, we need the relevant data. As many as 58 authors in the field have recently expressed related concerns in a peer-reviewed statement.

Finally, we emphasize that many potential stakeholders could contribute to the Subgroup’s discussions. For example, the Association for the Scientific Study of Consciousness (ASSC) is an open academic society specifically dedicated to scientific research on consciousness. It could have an important dialogue with the Subgroup, and with others interested in the topic.

Thank you for your attention. We have elected to make this letter open, as others may benefit from these clarifications on the current state of research in this area.

Best regards,

(in alphabetical order)

Michele A. Basso, UCLA
Diane M. Beck, University of Illinois
James Bisley, UCLA
Ned Block, NYU
Richard Brown, LaGuardia Community College, New York
Denise Cai, Mount Sinai Icahn School of Medicine
David Carmel, Victoria University of Wellington
Axel Cleeremans, Université Libre de Bruxelles
Stanislas Dehaene, Collège de France
Stephen Fleming, University College London
Chris Frith, University College London
Simon van Gaal, University of Amsterdam
Michael E. Goldberg, Columbia University
Mel Goodale, Western University
Patrick Haggard, University College London
Biyu He, NYU
Sid Kouider, École Normale Supérieure, Paris
Robert T. Knight, UC Berkeley
Konrad Kording, UPenn
Hakwan Lau, UCLA
Dominique Lamy, Tel Aviv University
Joseph LeDoux, NYU
Stephen Macknik, SUNY Downstate Medical Center
Susana Martinez-Conde, SUNY Downstate Medical Center
Matthias Michel, Sorbonne University
Lisa Miracchi, University of Pennsylvania
Earl K. Miller, MIT
Lionel Naccache, Sorbonne University, ICM
Adrian M. Owen, Western University
Richard E. Passingham, University of Oxford
Elizabeth Phelps, Harvard University
Megan A. K. Peters, UC Riverside
Dario Ringach, UCLA
Tony Ro, Graduate Center, City University of New York
David Rosenthal, City University of New York
Jérôme Sackur, École des Hautes Études en Sciences Sociales
Yuka Sasaki, Brown University
Claire Sergent, Université de Paris
Anil Seth, University of Sussex
Michael Shadlen, Columbia University
Jacobo Diego Sitt, INSERM-ICM
Catherine Tallon-Baudry, INSERM, PSL Research University
Frank Tong, Vanderbilt University
Peter Ulric Tse, Dartmouth
Takeo Watanabe, Brown University
Thalia Wheatley, Dartmouth

Thursday, May 14, 2020

why i am not a biopsychist

my last blogpost was really meant to be just for fun. but i do mean some of the ideas defended there, specifically that there are only qualia as relationally understood, with respect to other experiences a person can have. this is to say, there are really no qualia as intrinsic properties. essentially, this means i’ll take something close to what is called the Frege-Schlick view, but will extend and defend it to make it less restrictive. so under some circumstances we can compare experiences across people after all, with the caveat that we can never be completely certain about such comparisons. this will allow us to resist a whole array of challenges to functionalism, e.g. inverted spectrum, zombies, etc. expect a more serious post / draft-y paper in about a month….

***

meanwhile, let’s talk about a different issue, re: how an important debate within the field is taking shape. it was one of those rare moments when something philosophically useful came out of twitter, of all places…

in a tweet i pointed out that panpsychism isn’t really taken seriously by scientists, even though one may get that impression from reading online stuff these days. Victor Lamme challenged me to do a poll. that led to some discussions. turns out, Victor isn’t a panpsychist. surprise, surprise. and in fact, i’m not sure who really is, within the scientific community…. apart from a few *really* far out folks.

but Victor isn’t a functionalist like i am either. by functionalist i don’t restrict myself to specific versions of it; i definitely do NOT think consciousness supervenes on ‘long-arm’ functional properties only. basically, i allow that anything that can be done using software / algorithms could matter for consciousness. so any computational, representational properties may be relevant. i just don’t think the specific substrate for implementing the software is ultimately that crucial / irreplaceable.

so Victor decided to call himself a biopsychist, i.e. he believes that for a creature to be conscious, the relevant substrate needs to be biological. in fact, he thinks that most if not all living organisms are conscious!

i thought it was a new name, biopsychism. but very quickly i stood corrected. as Evan Thompson pointed out, the term traces back at least to Ernst Haeckel in 1892.

i really like this way of setting up the debate: biopsychism vs functionalism (broadly construed, as i described above). this really gets at the heart of the issue, a sensible divide within the field, with very legit people on both sides. none of that far out / strawman stuff.

happily Ned Block seems to approve of the term. of coz he’s a biopsychist *of some sort*. he has long argued against functionalism and representationalism; for him, something about the substrate is what really enables conscious experiences. does he think all biological organisms are conscious? probably not. so we should distinguish between some different versions of biopsychism. below i’ll do so, and also highlight some problems i see, even with rather weak versions of biopsychism. despite these misgivings, unlike my stance on panpsychism as a scientist, i think biopsychism is a position worth taking seriously.

***

the way Evan Thompson defines it, biopsychism states that all and only living organisms are conscious. that is pretty strong. we can call this strong biopsychism or something. i think it’s quite unlikely to be true. (Evan gave a paper at PSA a couple of years ago, you can email him to ask for a copy; Peter Godfrey-Smith has also written on something related here)

the reason is, i think it’s pretty clear that even simple perceptual experiences involve fairly complicated computational processes that may critically depend on areas of the brain that mature late in development and evolution, e.g. areas in the prefrontal cortex. a very simple living organism is not gonna have that.

but my take on the empirical matter aside, there is also a pretty damning conceptual problem. so let’s say you’re seeing two different images, a cat and a monkey, in binocular rivalry. when you are consciously seeing a cat, your cat-representing neurons fire. your monkey-representing neurons fire relatively little, as you are not conscious of the image of the monkey. but now, aren’t these monkey-representing neurons biological and alive?

this is related to what is called the 'combination problem', which is something that panpsychists also have to deal with. ultimately, a strong biopsychist will have to say something like, ok, so when you are not consciously seeing the monkey image, the monkey-representing neurons are not signaling consciousness for you. but they are themselves conscious. you just don’t feel their consciousness even though they are in your head.

this leads to a rather hilarious way to make Ian Phillips happy, i guess: there really is no unconscious perception, as Ian has argued. unconscious perceptual processes are conscious after all, but only becoz everything in the brain is conscious, with or without you~ this is a scenario rather different from the one Ian has argued for, of coz. he’s a reasonable guy. this, on the other hand, seems….. rather weird.

***

so some may hold a weaker version of biopsychism, and say, not all biological organisms are conscious. but if a creature is conscious, it must be biological. the relevant substrate can’t be replaced by something non-biological and yet functionally similar. if you replace it, the subjective experience will be gone, even if the subject behaves somewhat similarly. Ned is likely a biopsychist of this sort.

i am not sure about even this weaker version. because in biology, we look for mechanisms, not magical substrates. let’s say people found that consciousness requires a certain pattern of activity, involving some particular type of neuron, with certain transmitter receptors. then the scientific question to ask is what it *does*. to the extent that we figure out what it does, why can’t we write down a computational algorithm that mimics exactly what it does? then we should be able to replace it with something exactly functionally equivalent. if it does the same thing, exactly, and yet consciousness is missing... this just sounds like magic. and how are you ever gonna know?

but Bryce Huebner and Evan convinced me that there may be something to it. the idea is, yeah sure you can try to artificially mimic a biological mechanism. but the mechanism may be so inherently biological that mimicking it involves implementing it in the right bio-habitat, letting it ‘survive’ on its own, do its metabolic work, etc. in that sense, yes, you may try to mimic it, but by the time you succeed…. maybe it isn’t so crazy to say that the artificial replacement is basically just as alive and biological.

i’m still not totally sure, but it’s true that in the old days, we talked about multiple realizability as if it were commonplace: given the same function, we can implement it any way we want. but increasingly, i think people do recognize that multiple realizability is not as common as we thought. often a mechanism can really only be implemented exactly and most efficiently in just one way.
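to make the ‘same function, different implementation’ point concrete, here is a minimal toy sketch — nothing biological or empirical about it; the leaky accumulator and all the names and numbers are just made up for illustration. it’s the same input-output function written two different ways:

```python
# toy illustration of multiple realizability: one input-output function
# ("leaky evidence accumulation"), two different implementations.
# everything here is made up purely for illustration.

def leaky_accumulator_loop(inputs, leak=0.9):
    """accumulate evidence with a leak, as an explicit step-by-step loop."""
    state = 0.0
    trace = []
    for x in inputs:
        state = leak * state + x
        trace.append(state)
    return trace


def leaky_accumulator_closed_form(inputs, leak=0.9):
    """the same function, computed from the closed-form sum of leak**(t-k) * x_k."""
    return [
        sum(leak ** (t - k) * x for k, x in enumerate(inputs[: t + 1]))
        for t in range(len(inputs))
    ]


if __name__ == "__main__":
    stimuli = [1.0, 0.0, 0.5, 2.0]
    a = leaky_accumulator_loop(stimuli)
    b = leaky_accumulator_closed_form(stimuli)
    # the two implementations agree (up to floating point) on every input
    assert all(abs(x - y) < 1e-9 for x, y in zip(a, b))
    print(a)
```

the functionalist intuition is that if consciousness is a matter of what the mechanism does, swapping one of these for the other shouldn’t matter; the biopsychist bet is that for the real biological mechanism, no such swap preserves everything that matters.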

***

so i’m still not sold, but i think it would make an interesting debate. and this may help the field move away from all those distractions we’ve seen in recent years too!

thinking ahead…. i imagine what kind of biopsychist i would become, if i were to eventually come round to it. i suspect the requirement for higher ‘cognitive’ functions, e.g. those in the prefrontal cortex, is unlikely to give. if this turns out to be empirically wrong, that’s that. but i’m fairly sure for now. if anything, i may come to accept that: if we are to implement those higher functions exactly the way they work in conscious creatures like ourselves, yeah, maybe you’d end up having to do something kinda biological. perhaps we can call a person holding such a position a high-functioning biopsychist. i’m not one yet, but i get the feeling that my good friend and co-author Richard Brown may be one!