Sunday, August 9, 2020

on being taken seriously

yesterday i posted this on twitter. didn't think of doing it here coz it's a rather simple point. basically i think some panpsychists have been making claims that i find less than intellectually honest, and i have made a complaint to SEP like i said i would. SEP responded promptly and made the author(s) change the text, so that's that.

the claim concerns whether panpsychism is taken seriously by neuroscientists these days. to my mind the obvious answer is no. but there is a tricky sense in which, if you can find a couple of neuroscientists who support the view, the positive claim is logically satisfied.

this was exactly what Dave Chalmers said on FB, when i first brought up the issue in public. yesterday i referred to the exchange as 'low rhetorics'. but i was promptly told that Dave was actually also part of the editorial process that led to the change of the text. so he probably didn't intend for it to be a very serious or strong defense. i then posted a note of clarification and apology on twitter.

that said, i didn't know it back then. the point was something said in public. and others, e.g. Neil Levy, joined in to defend it. so i went ahead and shot SEP the email. just as expected, there wasn't much argument about the case.

so there's little to be surly about. but there's a small part of the argument that is perhaps interesting. so Dave's point was that if X is taken seriously by a few members of Y, it is *logically* fine to say X is taken seriously by Ys. but many things are logical to say, yet silly, e.g. 'if 2+2=9, then I am a better philosopher than Dave Chalmers'. certainly for a place like SEP, we'd want the content to be not just logical but also non-silly.

in particular i suggested that it may make sense for one to at least restrict the statement to cases when X is taken seriously by 50% of members of Y or more. Dave and others suggested maybe that's too high a cutoff. perhaps 20% would pass the mark for non-silliness (or something like that).

but i'm not sure. let's say 20% of members of Y take X seriously. so we say 'X is taken seriously by Y'. but it also means 80% of Ys do NOT take X seriously. so certainly, it is just as logical to say 'X is not taken seriously by Y'. so we should allow people to say 'X is taken seriously by Y, and X is also not taken seriously by Y'. or we can say that 'X is taken seriously by Y; the negation of this statement is also true'. or: 'it is both true and untrue that X is taken seriously by Y'.

that's just ..... silly.
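in case it helps, here's a toy sketch of that silliness in code, purely for fun. the predicate and the 20% cutoff are just my own formalization of the hypothetical numbers discussed above, nothing official.

```python
# a toy formalization, just for fun. suppose 'X is taken seriously by group Y'
# means 'at least CUTOFF of the members of Y take X seriously'.
# the 20% cutoff is the hypothetical figure from the discussion above.

CUTOFF = 0.2

def taken_seriously(fraction_taking_it_seriously, cutoff=CUTOFF):
    return fraction_taking_it_seriously >= cutoff

def not_taken_seriously(fraction_taking_it_seriously, cutoff=CUTOFF):
    # apply the very same standard to the complement of the group
    return (1 - fraction_taking_it_seriously) >= cutoff

fraction = 0.2  # say exactly 20% of neuroscientists take the view seriously

print(taken_seriously(fraction))      # True
print(not_taken_seriously(fraction))  # also True, since 80% do not
# so by this standard, 'X is taken seriously by Y' and its negation are both assertable
```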

but anyway, silliness aside, i don't really feel so strongly about it; mostly just brought up the above for fun. in large part becoz i think it is already a lost cause. some authors will find other rhetorics to promote the view, as they already do. an easy way would be to say the view is endorsed by some 'leading neuroscientist'. again, leading is a pretty subjective thing. there is a sense in which e.g. Christof Koch is a 'leading' neuroscientist, just as Christof likes to unilaterally profess that his preferred theory is 'dominant', 'leading', etc. there's not much to do about it other than to shrug. i too think those old Crick and Koch papers are important and positively influential. but i am not sure many of us think that Christof these days still represents the field we're in. so the situation may be a bit like citing Eccles to say that some of the most accomplished neuroscientists (Nobel laureate no less) were dualists.

yes, yes, you can say that. there are only so many complaint emails i would care to write, and not all editors are reasonable. but why would you want to do that? if you have to cheat this much to promote your view it probably just isn't worth promoting or having, is it?

ok enough about sociology. i may eventually write a post about other problems of panpsychism too. most of us don't like it for scientific reasons, but over the past months i did dig into the philosophy a bit. but i must say, there i'm not so impressed either.....

ps - someone suggested that there may be a difference in standards between science and philo re: what counts as being taken seriously. i think that's right; philosophers are meant to consider far-fetched ideas more carefully; that's part of their job. i ran some twitter polls which seem to confirm that too. but if that's so, they should still be careful in saying that a certain *scientific* discipline takes their ideas seriously. that's not how it works for us. if we feel misrepresented, it will just get harder for us scientists to take them seriously. and this may be hurting not just panpsychism but the discipline of philosophy as a whole.

pps - this version is updated on 2024 May 9. a previous version implied that one of the authors, Philip Goff, didn't respond to me promptly enough, which led to my writing directly to the editors. but how it went might have more to do with my own impatience. at some point he said he needed to consult with Dave Chalmers, but i felt that the main SEP editors are somewhat responsible too, so writing to them directly was hopefully not inappropriate.

Saturday, August 8, 2020

zombies

Zombies are hypothetical creatures that are functionally identical to us but lack qualia altogether. If they are possible, functionalism can’t be right. If such possibilities are conceivable, perhaps it means that functionalism is at least conceivably wrong.

There are a few senses in which zombies can be functionally identical to us. Suppose we merely mean that they can refer to and act on the same objects in the world as we do. Then sure, some robot-like creatures can interact with objects in the world without being conscious.

But if we mean that they are functionally fully identical to us, as in the whole mental algorithm is consistent with what we have, then we should recognize that they can’t be coherently conceived to be nonconscious. Such ‘zombies’ will know what it is like to have certain experiences. They can tell you scarlet is a bit like crimson, not much like pink or purple, etc., from a subjective point of view. This means, if you ask them if there is something it is like to see colors, they will truthfully say yes, for the same reasons we do. On the other hand, if you ask them what it is like to experience some subthreshold firing of neurons in the visual cortex, they will draw a blank. And this would not be merely because they don’t know enough about neuroscience. Even if you actually stimulate their visual cortices that way, they still wouldn’t know what you were talking about. In this sense, they are just like us.

Of course, the anti-functionalist can say that they only behave or talk as if they consciously see. But how else do you know if anyone is conscious, besides their expressing that they think so and your lack of reason to doubt that they are lying or deluded? 

Let’s say we have some Martians, who truthfully agree with us that there is something it is like to see red and other colors. And yet, when asked what it is like to smell, they draw a blank. If odors could nevertheless influence their behavior, we probably would think that they have olfactory processing. In that case, what else apart from consciousness can account for the difference between their vision and olfaction? According to them, in one modality, there is something it is like, and in the other, there isn’t. If they aren’t lying or crazy, what else is the difference?

This is to say, if a creature is capable of thinking meaningfully whether there is something it is like to have a certain experience, then whether the creature is inclined to truthfully think so is the truth of the matter. As such, zombies in a fully functional sense are simply not coherently conceivable. Either they truthfully think there is something it is like to have some experiences, in which case they aren’t zombies; or they do not think so, in which case they are not functionally like us at all.

[this literature is vast and mine will never be the last words, of coz. hence this is a blogpost not a paper. my goal is just to hopefully convince some of my scientist colleagues that some functionalist defense is possible, and it needn't be overly complicated.]

Monday, August 3, 2020

Cognitive Neuroscience of Consciousness Trainee Recruitment Conference

Hi everyone,

In a previous post I discussed the tricky business of finding a suitable lab for grad school in this field. After some discussion with lab members and colleagues we are now organizing the following event:

- What is it about? 

The Cognitive Neuroscience of Consciousness Trainee Recruitment Conference 2020 (link to registration here) will feature 12 PIs in the field. Most of them will have short videos uploaded in the coming few weeks, to showcase some current projects in their labs. There will then be 3 panel sessions where registered attendees can ask questions and discuss with the PIs about the presented projects, working in their labs, career advice, grad school applications, etc.

- When are the meetings? Who are the PIs?

They are tentatively scheduled as follows:

Aug 26 Wed 11am (PST) will feature Rachel Denison (Boston University), Tony Ro (City University of New York), Jason Samaha (UC Santa Cruz), & Emily Ward (U Wisconsin-Madison)

Sep 1 Tues 11am (PST) will feature Eve Isham (U Arizona), Brian Odegaard (U Florida), Giancarlo Vanini (U Michigan), Caroline Robertson (Dartmouth) 
-- Update: Profs Isham & Robertson will not be putting up videos; please check their websites for their research.

Sep 3 Thur 11am (PST) will feature Ruth Rosenholtz (MIT), Megan Peters (UC Irvine), Michael Cohen (Amherst), Phil Corlett (Yale)
-- Update: Prof Cohen won't be putting up a video; please check his research website @ http://www.michaelacohen.net/

We expect each meeting to last for about an hour.

- Where will the meetings take place?

These will be virtual meetings online. We plan to use Zoom.

- Who can attend?

The meetings are primarily for trainees, broadly defined to include prospective grad students, research assistants, and postdocs. But others are welcome too. All you need to do is to register here with a correct email address so as to receive further information for the Zoom meetings.

But please be reminded that the meetings will take a semi-synchronous format; that is, we will not show the videos at the meeting. You should watch them prior to the meeting, during which we'll go straight to Q&A.

- Where do I find the videos?

Watch this space. Links to the videos / background materials will be added gradually in the coming weeks, as hyperlinks associated with the names of the PIs listed in the schedule above. For example, you can click on Ruth Rosenholtz's name for her video.

- Do I have to pay?

No. You have to register to attend the meetings, but they are free.

- Who are the organizers?

All the work has been done by Cody Cushing and Taylor Webb from my lab, together with our frequent collaborator Matthias Michel.

- Why are all the labs featured based in the US?

Sorry, but this is the purpose and current scope of the meeting. The field faces unique challenges in the US, due to various factors including funding structure and popular media misrepresentation. Perhaps in the future others will set up similar meetings for other regions too. For now, please forgive our limited logistical bandwidth.

- I have a question not covered above ....

The ideal place may be to ask me on twitter, so others can see the question and my response to it too.

Thursday, July 23, 2020

Mary & Maru

In Jackson’s knowledge argument, Mary is a talented color scientist who has been prevented from seeing color since birth. Despite that deprivation, she has studied all there is to know about color processing. When she finally sees something red for the first time, will she learn something new? The argument goes that she probably will, which would mean that there is something to having a conscious experience that outstrips representational knowledge, and perhaps also tells against functionalism as a result.

Empirically, we know that sensory deprivation of this kind can severely limit normal brain development. After years of such deprivation, chances are her ability to see color may be permanently gone, or drastically changed somehow. To assume that she can see red exactly like we do is to force our imagination through some plausibly incoherent assumptions.

So instead, why don’t we consider a more realistic example that would serve the same purpose: Maru is a young child who has never tasted natto before. Genetically, she is a supertaster. Like many young children in her culture, she can cook simple meals for herself. She is deeply interested in food and culinary art. But her own parents dislike natto, which is not so uncommon in her culture. So she has never tried it. She heard that it has such a distinctive flavor that either she will love it or hate it. She has been told that natto is basically a kind of fermented soybean. So it is in a way like miso, although the flavor driven by the fermentedness is a lot more intense. It is not necessarily as salty, but it is even funkier than old cheese, or Chinese fermented bean curd. In fact, it is on the level of Taiwanese stinky tofu, although for natto the intensity is more on the palate than on the nose. It has a gooey texture, like it is mixed with raw egg white or something. It’s basically ineffable what it is like, she’s been told. She has to try it to find out herself.

The inquisitive and imaginative Maru-chan asks a lot of questions about this curious food. She regularly thinks about what it would taste like. One morning, after a slumber party at her best friend’s, they have natto and rice for breakfast. So she finally gets to try it for the first time. Does she learn anything new?

Obviously, that depends on how Maru-chan’s ‘research’ has gone. But one possibility is, she may say: this is exactly how I always imagined it to be! Based on the other experiences that are familiar to her, with some remarkable level of imagination, she might have actually figured out what it would taste like. On reflection, hasn’t that also happened to us sometimes, for other stimuli? Experiencing something for the first time doesn’t always feel so surprising. So it is possible that she will learn nothing fundamentally new. Tasting natto just confirms what she already knows without first-hand conscious experience. In that case, conscious experiences don’t necessarily outstrip representational knowledge.

Alternatively, it could be that Maru does learn something significantly new. Natto isn’t quite like what she’s imagined it to be. From here, perhaps she gains the ability to imagine it correctly. She also learns some new self-knowledge: this is what natto tastes like to her. At a subpersonal level, some mechanisms in her brain acquire the new information that this is what the relevant sensory vehicle is like, for natto - it is a bit like the vehicles for this other taste, that other taste, nothing like the vehicles for the tastes of ikura, karaage, etc. This is not the sort of stuff one can learn by reading books. Even if one learns the information as a person, it doesn't mean the relevant mechanisms in the brain will get it too.

So in either case there’s no challenge to functionalism. 


P.S.  This story is inspired by my experience of once dining with an ardent anti-functionalist in a Japanese restaurant in downtown Manhattan. I tried to convince the adventurous philosopher not to order the natto dish, but I failed.

Wednesday, June 24, 2020

how (not) to apply to a phD program in psychology / neuroscience (to work on consciousness)?

it’s that time of the year again. students are starting to think about applying for grad school. over the years i find myself giving some of the same advice to people, and thought i might as well write it down for open sharing. this is mostly from a US perspective, but some of it applies elsewhere too.

(why) should one go to grad school?

it shouldn’t be like this, and it is on people like me to fix the system asap … BUT the current reality is grad school pays people very little (~US$30k a year, varies from place to place), and does not offer very good career opportunities after. in the old days, if you were in a good phD program, chances were you’d become a professor one day, if you worked hard towards that goal. professors aren’t paid super well either, but pay isn’t everything. we have extremely rewarding and meaningful jobs; tenure is a nice guarantee of security & freedom, and an undeniable privilege.

but the old days are gone. you should look up the phD-to-tenure-track (faculty position) ratio yourself. it doesn’t look so good. they say with a phD there are also non-academic career opportunities. that’s correct. but many people also have doubts about whether the years of training are really worth it. again, it is on people like myself to make sure we equip trainees with more transferable skills. but one thing to consider is the sheer opportunity cost of being in grad school for a good half a decade. it can be fun, but it can also be stressful. but above all, you will not be getting the industry training and wages during those years. you get to have your phD, but in a sense, you’ll also be missing a head start, if industry is ultimately your destiny. a few years’ head start in the prime of your life is a lot.

the issue is complicated and i don’t draw any firm conclusions here. you have to figure it out yourself. i’m only suggesting you think this through carefully. ask people for advice, but understand too that there may be a bias: if they are already successful in academia, there may be a sampling bias re: how good their experiences have been. or even if they aren’t having such a good time, there’s a psychological factor: people tend to avoid being too discouraging and negative, especially if they feel it is their job to defend and to promote their discipline, rather than to badmouth it. this kind of sense of loyalty to one’s discipline is understandably common. so, one needs to give it some thought re: how to get the hard facts. ask the questions more directly, if you’re unsure. that tends to help.

despite these caveats, i think there is at least one very good reason to apply to grad school: if this is your calling, perhaps you just have to. sometimes in life we choose to do things knowing full well it is hard. maybe the odds are stacked against us. we know we may not be compensated generously, or even fairly. but we just can’t see ourselves doing anything else. or better still - we thought through the other options, and even if they pay better, are more secure, etc, they just aren’t nearly as appealing. that, to me, is one good reason to go ahead and take the plunge. of course, you may or may not agree with this. the above is more like life advice rather than professional advice.

what i said here will probably be misunderstood by some. perhaps it can’t be helped, but let me re-emphasize: i am not trying to normalize the situation, to say that people should be prepared to sacrifice. i totally believe that the system should be kinder to them. just like jazz musicians shouldn’t be expected to starve in order to play great music. i agree, and i am fully aware it is on people like myself to fix this problem for academia. but i also feel that, meanwhile, it’s my responsibility to tell people the truth re: what to expect for now.

how to choose a suitable school & PI?

the first thing is: start thinking about this early. like, ideally, a year or at least several months before your application. ask your academic mentors for advice. multiple of them. cold call people if necessary - many people are actually willing to give free advice to people in your position. they understand this can be a confusing stage to be at. they’ve been there.

the reason why you really need to absolutely think this through, re: whom to work with, is that it will matter a lot for your grad school experience and also your career prospects after. often this factor (PI, i.e. your primary advisor / head of the lab) is much more important than the general prestige of the school / program. unfortunately, some PIs aren’t as on top of things as others. some may be better at placing their students in desirable jobs than others. some may give you more freedom. some may give you so much freedom that there isn’t enough guidance for you - so it sometimes is a matter of style and match too. however, very unfortunately, unprofessional, exploitative PIs do exist, even in the best phD programs. the tenure system means it is not so easy to get rid of them….

... which leads to the interesting consideration of the trade-off between working with someone junior or senior. senior people have a proven track record, and sometimes can offer you better career and research opportunities. junior (i.e. pre-tenured) PIs may lack such experience. but they may be ‘hungrier’. they need to succeed themselves, and thereby they also need you to succeed somewhat. if things don’t go well, it hurts their careers too.

but if the above analysis is right, of course the worst scenario would be a senior PI who isn’t so great to work with. they have tenure, and little to lose. they may have decades of experience dishing out not-so-great experiences to students. and if they made it this far, they gotta be good at (hiding) it. you probably wouldn’t stand a chance against them. (sorry to have to put it in such a scary way!)

so write to the PI. meet and talk with them if they are willing. talk to others about the PI’s reputation. how is their work received by their peers (who will in turn evaluate your own work in the future)? talk to their alumni. what was it like to work with the PI? sometimes people are more willing to tell you things truthfully on the phone than through emails. this may sound surprising if not utterly messed up: but people are often worried about retaliation for whistleblowing. of coz, don’t take any one-sided gossip at face value - but when you talk to enough people you should get the overall picture. this can take some work. so do all this well before the interview. i mean, well before you even apply. start now!! just email some people already.

i cannot stress enough how important this is. getting into the wrong lab can be a really, really nasty experience. you can usually get out of the lab (with some considerable hassle), coz the program should protect you. this is one way in which it matters who else is on the faculty in the program. so you aren’t choosing just the lab, unless you can be 100% sure you will stay with that lab. (but mind you, great PIs sometimes get job offers and move to different institutions too. sometimes it can happen during your phD. moving along with them can be a slightly tricky business.) but in most cases, if you join a suitable lab in the first instance, you’d be fine. it tends to work out that way.

how to choose a good lab for consciousness?

this is the insanely tricky part. the first thing to realize is, in the US, consciousness remains a ‘taboo’ topic - in a very specific sense: mainstream federal funding for this kind of research is generally lacking. this means you will see a lot of promotion of this type of work in popular books, mass media, internet, etc., maybe even some high profile journals, some theoretical review papers. probably way too much. but actually, there are not really that many well-funded labs doing empirical work on the topic that is respected or taken so seriously by other scientists (see e.g. this, this, or this). and when you do a phD in cog neuro, typically it’s the empirical work that matters. for some solid 'good' labs, even if their research looks somewhat relevant, when you tell them you want to do a phD on consciousness with them, they may think you're being silly

private funding ensures that some of this work continues to be done somehow, but from the top-tier research universities’ administrative perspective, this kind of money is not nearly as good as federal funding (e.g. via NIH / NSF). there’s the prestige factor, but there are also economic reasons. accordingly, there are fewer jobs created in these places for consciousness.

there are two consequences. the first is that you should know what you’re getting into. in terms of intellectual stimulation and challenge, this may be the most fun topic out there. but in terms of career prospects, this may mean there are extra hurdles still. the long and short is, it can be done. many people including myself work on this and manage to not get fired somehow~ if you get in, we’ll share with you our experiences and help you out. but i feel i need to warn you about the reality early on.

another issue is how to choose a lab to start with. lack of mainstream federal funds for this kind of work also means the work becomes relatively unconstrained by the conventional peer review mechanisms. with private funding, sometimes anything goes. together with various socio-historical factors, things can go pretty wild, and at times unhealthily sectarian. so, in addition to all the caveats mentioned in the last section, there is an additional factor for consciousness: if you inadvertently join a lab where the work is not considered so favorably by other academic colleagues… it may be a situation rather difficult to get out of. depending on the PI’s style, it may not be in your interest to aggressively challenge their views too much. good PIs should value critics, but unfortunately some are better than others at this. and if you don’t confront them as much, you may end up doing what they want you to do - then others may see you as being hopelessly indoctrinated into something rather unscientific, due to your own lack of critical thinking, etc. and these people could include your potential future employers too. so this can truly be a career disaster...

so, all i can say is, when it comes to consciousness, handle it with care, seriously. doing a phD, even if it isn’t a sacrifice, is a major life decision. the advice in the last section applies. but for consciousness, don’t just read pop sci / news articles and go with what they say. this is true for most fields, but for our field specifically, the correlation between media prominence and actual scientific quality is often strangely negative. so by all means, do your own research early on. ask as many active researchers in the field as you can. get a diverse set of opinions. we are here to help.

hope you don’t get too discouraged by what i say. if done right, doing a phD in consciousness can be the most wonderful experience too. i say this from my own experience, with all my heart. best of luck!

Hakwan Lau

June 24, 2020


ps - since writing this my lab has decided to do something to help prospective grad applicants to find a suitable lab to work on consciousness in the US. follow me on twitter @hakwanlau. we’ll have something to announce in a few weeks.

Thursday, May 28, 2020

Open letter to NIH on Neuroethics Roadmap (BRAIN initiative) 2019

a while back we sent the below letter. some people have asked if they could refer to it. so i thought i could paste it here, so they can link to it. it was an open letter anyway.

***

May 14, 2019

Dear colleagues,

Re: Neuroethics needs a balance between theory development and rigorous experimental research on consciousness

We write in response to the call for comments on the Neuroethics Roadmap, which is part of the NIH BRAIN Initiative. We are pleased to see that research on consciousness is receiving recognition. As a group of active researchers in the relevant fields, we hope to point out some potential caveats.

The Roadmap emphasizes the need for theoretical and mathematical models of consciousness. However, current theories are tentative and limited. To make progress, we need experiments designed to identify the neural mechanisms distinguishing conscious from unconscious processes in humans, in whom consciousness can be assessed via subjective reports.

We believe that active research into the neural signatures of consciousness in these relatively clear cases is crucial for building and testing models of awareness in non-human primates and simpler animals, or other more controversial cases. For this reason, for example, it is too early to view the Cambridge Declaration on Consciousness, which states that some non-human animals without a neocortex are conscious, as reflecting a scientific consensus.

Relatedly, the Integrated Information Theory (IIT) of consciousness features prominently in the references cited in the Neuroethics Moonshot section of the Roadmap. While there is support for the association between consciousness and the complexity of active brain networks in humans, some have taken this relationship to generalize lawfully to all physical systems. We do not consider this interpretation to be scientifically established or testable at the moment. We note that alternative theoretical approaches are already referenced indirectly in the Moonshot section (Reference 8); other useful reviews are also available.

Unfortunately, despite their clear relevance to various areas of brain and mental health research, empirical projects focusing on the brain mechanisms distinguishing between conscious and unconscious processes in awake individuals currently do not receive adequate funding. To test theories, we need the relevant data. As many as 58 authors in the field have recently expressed related concerns in a peer-reviewed statement.

Finally, we emphasize that many potential stakeholders could contribute to the Subgroup’s discussions. For example, the Association for the Scientific Study of Consciousness (ASSC) is an open academic society specifically dedicated to scientific research on consciousness. It could have an important dialogue with the Subgroup, and with others interested in the topic.

Thank you for your attention. We have elected to make this letter open, as others may benefit from these clarifications on the current state of research in this area.

Best regards,

(in alphabetical order)

Michele A. Basso, UCLA
Diane M. Beck, University of Illinois
James Bisley, UCLA
Ned Block, NYU
Richard Brown, Laguardia College New York
Denise Cai, Mount Sinai Icahn School of Medicine
David Carmel, Victoria University of Wellington
Axel Cleeremans, Université Libre de Bruxelles
Stanislas Dehaene, College de France
Stephen Fleming, University College London
Chris Frith, University College London
Simon van Gaal, University of Amsterdam
Michael E. Goldberg, Columbia University
Mel Goodale, Western University
Patrick Haggard, University College London
Biyu He, NYU
Sid Kouider, Ecole Normale Superieure, Paris
Robert T. Knight, UC Berkeley
Konrad Kording, UPenn
Hakwan Lau, UCLA
Dominique Lamy, Tel Aviv University
Joseph LeDoux, NYU
Stephen Macknik, SUNY Downstate Medical Center
Susana Martinez-Conde, SUNY Downstate Medical Center
Matthias Michel, Sorbonne University
Lisa Miracchi, University of Pennsylvania
Earl K. Miller, MIT
Lionel Naccache, Sorbonne University, ICM
Adrian M. Owen, Western University
Richard E. Passingham, University of Oxford
Elizabeth Phelps, Harvard University
Megan A. K. Peters, UC Riverside
Dario Ringach, UCLA
Tony Ro, Graduate Center, City University of New York
David Rosenthal, City University of New York
Jérôme Sackur, École des Hautes Études en Sciences Sociales
Yuka Sasaki, Brown University
Claire Sergent, Université de Paris
Anil Seth, University of Sussex
Michael Shadlen, Columbia University
Jacobo Diego Sitt, INSERM-ICM
Catherine Tallon-Baudry, INSERM, PSL Research University
Frank Tong, Vanderbilt University
Peter Ulric Tse, Dartmouth
Takeo Watanabe, Brown University
Thalia Wheatley, Dartmouth

Thursday, May 14, 2020

why i am not a biopsychist

my last blogpost was really meant to be just for fun. but i do mean it as far as some of the ideas defended go, specifically that there are only qualia as relationally understood, wrt other experiences a person can have. this is to say, there are really no qualia as intrinsic properties. essentially, this means i’ll take something close to what is called a Frege-Schlick view, but will extend and defend it to make it less restrictive. so under some circumstances we can compare experiences across people after all, with the caveat that we can never be completely certain about such comparisons. this will allow us to resist a whole array of challenges to functionalism, e.g. inverted spectrum, zombies, etc. expect a more serious post / draft-y paper in about a month….

***

meanwhile, let’s talk about a different issue, re: how an important debate within the field is taking shape. it was one of those rare moments that something useful philosophically has come out of twitter of all places…

in a tweet i pointed out that panpsychism isn’t really taken seriously by scientists, as some may get that illusion by reading online stuff these days. Victor Lamme challenged me to do a poll. that led to some discussions. turns out, Victor isn’t a panpsychist. surprise, surprise. and in fact, i’m not sure who really is, within the scientific community…. apart from a few *really* far out folks.

but Victor isn’t a functionalist like i am either. by functionalist i don’t restrict myself to specific versions of it; i definitely do NOT think consciousness supervenes on ‘long-arm’ functional properties only. basically, i include for consciousness anything that can be done using software / algorithms. so any computational, representational properties may be relevant. i just don’t think the specific substrate for implementing the software is ultimately that crucial / irreplaceable.

so Victor decided to call himself a biopsychist, i.e. he believes that for a creature to be conscious, the relevant substrate needs to be biological. in fact, he thinks that most if not all living organisms are conscious!

i thought it was a new name, biopsychism. but i very quickly stood corrected. as Evan Thompson pointed out, the term traces back at least to Ernst Haeckel in 1892.

i really like this way of setting up the debate: biopsychism vs functionalism (broadly construed, as i described above). this really gets at the heart of the issue, a sensible divide within the field, with very legit people on both sides. none of that far out / strawman stuff.

happily Ned Block seems to approve of the term. of coz he’s a biopsychist *of some sort*. he has long argued against functionalism and representationalism. something about the substrate is what really enables conscious experiences. does he think all biological organisms are conscious? probably not. so we should distinguish between some different versions of biopsychism. below i’ll do so, and also highlight some problems i see, even with rather weak versions of biopsychism. despite these misgivings, unlike my stance on panpsychism as a scientist, i think biopsychism is a position worth taking seriously.

***

the way Evan Thompson defines it, biopsychism states that all and only living organisms are conscious. that is pretty strong. we can call this strong biopsychism or something. i think it’s quite unlikely to be true. (Evan gave a paper at PSA a couple of years ago, you can email him to ask for a copy; Peter Godfrey-Smith has also written on something related here)

the reason is, i think it’s pretty clear that even simple perceptual experiences involve fairly complicated computational processes that may critically depend on areas of the brain that mature late in development and evolution, e.g. areas in the prefrontal cortex. a very simple living organism is not gonna have that.

but my take on the empirical matter aside, there is also a pretty damning conceptual problem. so let’s say you’re seeing two different images, a cat and a monkey, in binocular rivalry. when you are consciously seeing a cat, your cat-representing neurons fire. your monkey-representing neurons fire relatively little, as you are not conscious of the image of the monkey. but now, aren’t these monkey-representing neurons biological and alive?

this is related to what is called the 'combination problem', which is something that panpsychists also have to deal with. ultimately, a strong biopsychist will have to say something like, ok, so when you are not consciously seeing the monkey image, the monkey-representing neurons are not signaling consciousness for you. but they are themselves conscious. you just don’t feel their consciousness even though they are in your head.

this leads to a rather hilarious way to make Ian Phillips happy, i guess: there really is no unconscious perception, as Ian has argued. unconscious perceptual processes are conscious after all, but only becoz everything in the brain is conscious, with or without you~ this is a scenario rather different from the one Ian has argued for, of coz. he’s a reasonable guy. this, on the other hand, seems….. rather weird.

***

so some may hold a weaker version of biopsychism, and say, not all biological organisms are conscious. but if a creature is conscious, it must be biological. the relevant substrate can’t be replaced by something non-biological and yet functionally similar. if you replace it, the subjective experience will be gone, even if the subject behaves somewhat similarly. Ned is likely a biopsychist of this sort.

i am not sure about even this weaker version. because in biology, we look for mechanisms. not magical substrates. let’s say people found that consciousness requires a certain pattern of activity, involving some particular type of neuron, with certain transmitter receptors. then the scientific question to ask is what it *does*. to the extent we figure out what it does, why can’t we write down the computational algorithm that would mimic exactly what it does? then we should be able to replace it with something exactly functionally equivalent. if it does the same thing, exactly, and yet consciousness is missing... this just sounds like magic. and how are you ever gonna know?

but Bryce Huebner and Evan convinced me that there may be something to it. the idea is, yeah sure you can try to artificially mimic a biological mechanism. but the mechanism may be so inherently biological, that it involves implementing it in the right bio-habitat, letting it ‘survive’ on its own, do its metabolic work, etc. in that sense, yes, you may try to mimic it, but by the time you succeed…. maybe it isn’t so crazy to say that the artificial replacement is basically just as alive and biological.

i’m still not totally sure, but it’s true that in the old days, we talked about multiple realizability as if it is commonplace. given the same function, we can implement it any way we want. but increasingly, i think people do recognize that multiple realizability is not as common as we thought. often a mechanism can really only be implemented exactly and most efficiently in just one way.

***

so i’m still not sold, but i think it would make an interesting debate. and this may help the field move away from all those distractions we’ve seen in recent years too!

thinking ahead…. i imagine what kind of biopsychist i would become, if i were to eventually come round to it. i suspect the requirement for higher ‘cognitive‘ functions, e.g. those in the prefrontal cortex, is unlikely to give. if this turns out to be empirically wrong, that’s that. but i’m fairly sure for now. if anything, i may come to accept that: if we are to implement those higher functions exactly the way they work in conscious creatures like ourselves, yeah, maybe you’d end up having to do something kinda biological. perhaps we can call a person holding such a position a high functioning biopsychist. i’m not one yet, but i got the feeling that my good friend and co-author Richard Brown may be one!

Friday, April 24, 2020

mental imagery & the intuitive appeal of qualia

since i expressed my sympathy for illusionism on twitter (@hakwanlau) a couple of months ago, friends have questioned my loyalty to sanity, and asked if i have become **one of those philosophically-ignorant scientists** who deny that consciousness is a real phenomenon, without ever bothering to understand what such denial even means. i have very much enjoyed the exchanges. :-)

in a previous post i've explained a bit why exactly i see some promise in illusionism. to me it's all about whether you can have a somewhat plausible positive story about how the illusion comes about. so here it is, as promised. it's a bit long (for a blogpost). i recommend you pair this with a glass of red wine. maybe Pinot Noir. or some cheap Merlot would do too.

***

if you’re lucky like me, and speak a non-Indo-European language as a first language, you might have found it hard to explain to people what we mean when we say we study consciousness. worse still are concepts like phenomenal consciousness or qualia. even for native English-speaking folks, these concepts aren’t immediately obvious and intuitive at all. so when people say certain views on consciousness are part of our unshakable intuitions, it is worth asking - whose intuitions? if it takes formal definitions to even introduce these basic concepts, perhaps the relevant ‘intuitions’ are just a direct result of our loading the dice in our definitions. so we need a better, more neutral way to get at this.

for English-speaking common folks, Nagel’s famous phrase often connects. this much i’ve learned from teaching undergrads who aren’t philosophy majors, especially those who have no inclination to ever become so. you can explain to them: you know, there is something it is like to have certain brain processes going on. like when you taste soy sauce, your brain doesn’t just  recognize what it is, there is something it is like for you to have that experience. but for some other brain processes, it just goes on without your noticing it. there is nothing it is like to have those brain processes going on in your head.

in my experience, with some patience, most English-speaking students can get this. but what exactly does the phrase really mean to them? that may be less clear. does it mean there really *is something* it is like to taste soy sauce - like, it is a *thing*? and what is a *thing* anyway? is money a thing? are Wednesdays a thing? certainly we don’t want to say there aren’t ever Wednesdays. but can we define Wednesdays in purely physical terms?

we can get ourselves into all sorts of trouble when we engage in this kind of thinking. instead, i find it more useful to think through what plausibly could actually go through their minds, when my students are asked to think about what it’s like to have certain experiences.

thinking about what it’s like

when we think about what it’s like to taste soy sauce, i take it that we just imagine having that experience. if we succeed in such imagination, we say: yeah there is something it is like to taste soy sauce. if i press you further: so what’s it like? you may say… well, it’s a bit salty, like sea water, but more viscous, with some taste of umami (if you know what it is), a bit almost like seafood, or mushrooms, but not quite. or anyway, it is tastier than sea water. it has more flavor.

that is to say, you compare it with other experiences you summon into your imagination. perhaps this makes sense, becoz, why else would you want to think about your experiences anyway. i suppose our brains aren’t designed for doing philosophy in the first place. usually, when you perceive something and you focus on it, you just end up thinking about things you perceive. you don’t really think about the experience itself. not often anyway. unless, you try to compare it with other experiences: hm… do i like the *taste* of this steamed fish with soy sauce? or would it have been better still if i added more ginger and scallion? how about cilantro? now, that is something worth thinking about. we should all have the brains to do that.

also, it would be a bit weird for people to say: yes there is something it is like to have a certain experience, and yet they can’t say anything at all about what it is like in comparison with some other experiences. that experience would have to be really unique. and by pointing out there’s nothing like it, we also get a sense of how strange it must be anyway.

perhaps this is the whole point in thinking about experiences. we put them in the space of imagination so as to compare them, concurrent experiences and summoned memories alike.

so when we try to get our undergrad students into the topic of consciousness, and ask them to think about whether there is something it is like to have a certain experience, i take it that this is all they do. they imagine having that experience, in the way they usually do when they compare these experiences in their imagination. 

knowing what it is like

in philosophy, of course, we also speak of whether one knows what it is like to have certain experiences. but again, what does it mean? is there really something to know, like a piece of knowledge, like someone’s birthday?

again, why not consider concrete everyday examples. when my senior students are asked: do you know what it is like to stay up drinking all night? the answer is either yes or no. most people who say yes know because they have done it. but some others who haven’t done it may tentatively say, i think i know. that is, maybe they have tried drinking a lot over a long time, maybe during the day. (not that i recommend any of it). and, they may also have stayed up all night before. so they think, well, it’s just both of these experiences added together, no?

but are they correct in thinking that they may know? well, the proof will be in the pudding, like they say. so they may finally try it, and say: ah, this is exactly what i always thought it would be. or they may say: i was so wrong, this is much worse than i thought.

that is, if one is imaginative enough, and if one has had some related or similar experiences, one can certainly know what it is like to go through some experience without actually ever having it; David Hume was right when he talked about that ‘missing shade of blue’. to *know* what it is like to have a certain experience is just to be able to anticipate having that experience, such that when you have it there will be no surprises. you can also compare that anticipated experience with other experiences in your imagination. at least that’s what we usually mean. and it is a very useful ability to have - for otherwise how else do we know whether we should accept invitations to stay up drinking all night? we’d better be able to anticipate what it would be like.

intrinsic qualia

if the above is more or less correct about how my undergrads think when they are introduced to these concepts…. and assuming there’s a chance some of them may become philosophy professors one day too (!) …. well, then, i get why certain technical theoretical posits related to consciousness may seem plausible to some, so much so that some people say certain things are ‘intuitive’ in the literature.

we can call one such posit intrinsic qualia, which are the private, ineffable, intrinsic properties of experience that are immediately apprehensible. 

so consciousness is the general phenomenon that there is something it is like to be in some mental states. some people call it phenomenal consciousness these days, though that term can also refer to some specific theoretical notion rather than just the general phenomenon. 

assuming we get the general notion through all of these talks of “what it is like”. why would we accept such a specific theoretical posit as intrinsic qualia? well, that’s becoz, given the way we think about what it is like, these qualia may seem harmless enough.

first, of course, some properties of our experience are private. makes total sense. if it is all about my own imagination, how are you ever gonna know mine? 

the important point here is i’m not saying your conscious experience *is* your imagination of that experience. but when you think about your conscious experience, when you think about what it is like to have that experience, perhaps all you do is to imagine having that experience. from there, one may - mistakenly or not - find it *intuitive* to think of properties of imagination as properties of the experience itself.

and then, of course, the content of one’s imagination may seem ineffable too. our language may just not have that fineness of grain. a lot can be going on. it outstrips our limited vocabulary.

and of course, one’s imagination is naturally immediately apprehensible. that’s why we imagine it in the first place. it’s all for our own appreciation. 

the trickiest - and also most critical - is whether some properties of our experience are intrinsic. and what does it even mean? not all philosophers agree on this usage, and in fact some disallow it, but some use the term intrinsic to mean that these properties are not relational, that they cannot be defined in more simple terms. they are what they are. they can’t be characterized fully in terms of something else.

perhaps the following is what people find plausible: since my imagination of experiencing soy sauce is certainly private, mine isn’t necessarily the same as yours. we can’t point to the same bottle of soy sauce and say: hey it’s the same stuff we’re tasting. my imagination of what it tastes like is up in my head, not yours. there must be something unique up there that isn’t necessarily gonna be anything like yours.

but of course, at the end of the day, you two are tasting the same soy sauce. you two are possibly just having the same experience of the same thing. that may be all there is.

but all the same, becoz we think of it in terms of our own imagination, we may well find it plausible that something intrinsic is involved. it’s not just about the soy sauce itself. it’s not the taste buds either. you can do the imagination without either of them being there at the moment. so it must be something more abstract, something special, unique to oneself, up there in the mind itself.

so from there, we may find this notion of intrinsic qualia somewhat plausible. intuitive even, perhaps.

and of course, once we accept qualia as such, all sorts of metaphysical problems arise. if qualia are intrinsic, by definition no functional analysis of them will be possible. we can’t just talk about what our perceptual system represents, coz there is something inside that ultimately matters, in ways that can’t be understood from the outside. a piece of computer software will never have such magical stuff as qualia. something will always be missing.

phenomenal concepts

but what if - just what if - our minds are actually nothing but a piece of software, running on a biologically instantiated computer? a very special piece of software, mind you. so special that we are capable of imagining having certain experiences at will. that is, we can drive our perceptual programs in a purely top-down fashion, even though they are originally meant for bottom-up sensing of external information. but still, it’s really just a piece of software, implemented somehow using some organic stuff. so, there are really no intrinsic qualia as such. but we mistakenly think there are, becoz we can’t really think deeply about our experiences, other than imagining having them. that is, when we are asked to refer to the experience in our thought, we typically just simulate the whole thing, and mistake some properties of the simulation as the properties of the experience itself.

that would be somewhat congruent with two major / popular strategies for addressing some ‘classic’ philosophical problems of consciousness, e.g. the explanatory gap, the so-called Hard Problem, the Knowledge Argument etc. the idea is there may be an epistemic gap, but it doesn’t mean there is a metaphysical one. in other words, we only *think* that there is a problem because of the way we think about consciousness. 

on one such strategy, some philosophers say this is because when we think about consciousness we deploy phenomenal concepts. these concepts refer to the perceptual states that are actually just physical representations; no magic involved. but when we use these concepts to think about the physical perceptual states, we think about them in terms of subjective experience. from there, we are somehow blocked from seeing that these perceptual states just are the same representations as they are described in physical terms.

many people aren’t sure if we really have these phenomenal concepts, or how they are supposed to work. sure one can think about the same stuff with different concepts. i can think of myself as my parents’ naughtiest child (true), or the author of these words (also true). these are different concepts, which refer to the same thing - me. and sure enough, if i think about myself in terms of my authorship here, i may not realize it is the same person as my parents’ naughtiest child; i may mistakenly think that my sister was the naughtier one. but the thing is, once it is explained to us in no uncertain terms how the two notions are related, we no longer have trouble connecting them.

but when it comes to phenomenal concepts, for them to work as they should, they need to be stubbornly opaque. even when we are told that, actually, that red experience is really just these neurons firing, we can never cross that bridge and get to genuinely appreciate that they are the same thing. that part remains somewhat magical.

so perhaps, thinking about this in terms of imagination may help to flesh out what these phenomenal concepts really are. when we think about the experience of tasting soy sauce, we simulate that experience in our imagination. that’s how the so-called phenomenal concepts are deployed. perhaps then it makes sense that simulating having the experience is gonna feel rather different from describing the experience in mechanistic terms. for a start, simulating it actually is an experience in itself. it feels like something. describing it doesn’t feel quite the same.

mental quality in imagery

this may seem to get us into a bit of a funny chicken and egg problem. so thinking about an experience is an experience in itself. but, we were hoping to say that having an experience is no magic - specifically, by magic we mean something intrinsic that can’t be explained in purely functional / software terms. but now, maybe in thinking about an experience, because it feels like something, there is magic after all? 

the answer is no. so let’s focus on vision for a moment. the story i’ve been trying to tell is: when we think about the experience of seeing a cat, we typically create the mental image of a cat. now then, of course there is something it is like to have the mental image of a cat. there need not be magic. one way to go about this may be to say: well, there is something it is like to have experience X just when we can think of what it is like. maybe all that is, like what i said earlier, is that you succeed at imagining having that experience. so i certainly can imagine having the mental image of a cat. when i do so…. i sort of just end up having that mental image again. i can refer to it, compare it with other experiences, etc., so maybe that’s all there is.

another thing is that imagery and conscious perception share common / similar mental qualities. so these qualities may be defined roughly in terms of how a certain stimulus can be distinguished perceptually by the subject against other perceptible stimuli (cf Rosenthal on ‘mental quality space’). so a cat looks the way it does, because it looks more like a small tiger than a squirrel. more like a dog than a piece of rock. more like a human than a tree, and more like a tree than a giant blue triangle, etc. etc. etc. etc. (until you exhaust the whole list of all the things you can possibly see). now then, imagery has a similar quality too. an imagined cat likewise looks more like some of these same imagined things and less like some others. so mental imagery may share a similar structure of mental qualities too. and having these qualities may be the crux of all this talk of ‘what it is like’ - it’s really about whether you can compare it with other things you see. there’s just no absolute, pure conscious sensation that can’t be compared against anything else.
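to make the relational idea a bit more concrete, here's a minimal sketch in python. the 'feature vectors' below are made-up toy numbers, not real data; the only point is that what a quality is, on this view, is exhausted by its pattern of similarities to other qualities.

```python
# a minimal sketch of the 'quality space' idea: a quality is characterized by its
# pattern of similarities to other qualities, not by any intrinsic label.
# the feature vectors are made-up numbers, purely for illustration.

import numpy as np

stimuli = {
    "cat":         np.array([0.9, 0.8, 0.1]),
    "small tiger": np.array([0.8, 0.9, 0.2]),
    "squirrel":    np.array([0.6, 0.3, 0.1]),
    "rock":        np.array([0.1, 0.0, 0.9]),
}

def similarity(a, b):
    # cosine similarity as a stand-in for perceptual similarity
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# what 'cat' looks like, on this view, is just this relational profile:
profile = {name: round(similarity(stimuli["cat"], v), 2)
           for name, v in stimuli.items() if name != "cat"}
print(profile)  # closer to 'small tiger' than to 'squirrel', and far from 'rock'
```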

so we already accepted that there is something it is like to see something, and there’s no magic. well, imagery shares some similar mental qualities. and in both cases, you can think about what it is like to have them, by imagining having them. so perhaps there’s no magic there in either case. mental imagery is likewise just generated by some software implemented in the brain.

one problem is: mental imagery and actual perception are only somewhat similar. but they are also somewhat different in terms of how they feel. they certainly aren’t exactly the same. the question is, how so? my answer has two parts. first of all, when you actually see something you see it as being out there, present in the world. when you imagine seeing it, you don’t. it lacks this sense of presence, or assertoric force, as philosophers sometimes say. i have previously argued that this is part of the phenomenology of seeing. and to account for that difference all you need is software.

in fact, i’d go so far as to say when you consciously see a cat you are basically interpreting that mental quality (i.e. the thing that looks more like a small tiger than a squirrel, etc. etc.) as reflecting the present state of the world, *meaning* that it does not just reflect your own imagination. when you enjoy the mental imagery, on the other hand, you take that very similar mental quality to reflect your own imagination, *meaning* that it does not reflect the present state of the world. it’s all part of the phenomenology itself. and it may be why we need the phenomenology at all - the two modes of representation need to feel immediately different somehow, for otherwise the confusion would be dire. the implicit linkage between these two modes of representation may be what causes much of the confusion: perception may seem magical because we tend to think of it in terms of imagery, and imagery may in turn seem magical, coz it’s almost as if you’re seeing something consciously. but really it’s just the other side of the same coin. in both cases they just represent the perceptual object. there's no internal magic going on.
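if it helps, here’s what that ‘all you need is software’ claim could look like as a toy data structure (a sketch under my own made-up names, not anybody’s actual theory or code): the same quality-space content, tagged either as reflecting the world right now or as self-generated.

    from dataclasses import dataclass

    @dataclass
    class VisualState:
        content: str          # a point in quality space, e.g. the cat-like quality
        strength: float       # signal strength, aka vividness (more on this below)
        world_present: bool   # True = 'out there now'; False = 'my own imagination'

    seeing  = VisualState("cat-like quality", strength=1.0, world_present=True)
    imaging = VisualState("cat-like quality", strength=0.2, world_present=False)

    def downstream_use(state: VisualState) -> str:
        # same content, different tag, different consequences - which is why
        # the two modes had better feel immediately different
        return "act on the world" if state.world_present else "treat as simulation"

    print(downstream_use(seeing), "|", downstream_use(imaging))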

there is, of course, another difference between imagery and actual perception. people have suggested that imagery is less vivid, somewhat faint. i think that’s right. but other people have also argued that the notion of vividness isn’t so easy to define. i don’t find it so problematic. we can perhaps define it in terms of how similar the experience is to seeing nothing at all. so a less vivid image of a cat is just more similar to nothing than a more vivid image of a cat.

this way, we account for why imagery has a lower fineness of grain too. that is, imagery seems to be less detailed. when one thinks of a cat, it doesn’t seem to be quite so specific about its color, compared to seeing a real cat. if the cat looks stripey, there’s also the famous observation that we can’t ever really count the number of stripes on the imagined cat. but if the imagined cat is less vivid in the sense that it is more similar to nothing than a more vivid image of a cat is… well, then, by rather simple psychophysical principles we can understand why it is not so distinguishable from other images of cats in imagery space. the idea is simple: take two gabor patches of slightly different orientations. as the luminance contrast decreases, they become less distinguishable. so vividness may be a bit like contrast, in that it reflects how strong the signal is, and there is a clear point of zero which refers to the absence of any stimulus. when the signal is tiny, as in imagery, you can’t distinguish things very well. but it still roughly has the same content, just less clear.
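here’s a minimal simulation of that psychophysical point, in the spirit of signal detection theory (a sketch with arbitrary numbers, not data from any actual experiment): the difference in content stays fixed, but as signal strength shrinks toward zero, the two stimuli become harder and harder to tell apart.

    import numpy as np

    rng = np.random.default_rng(0)
    noise_sd = 1.0
    content_difference = 0.5   # think: two gabors at slightly different orientations

    def discriminability(signal_strength, n_trials=100_000):
        # internal responses to the two stimuli, scaled by signal strength, plus noise
        a = rng.normal(0.0, noise_sd, n_trials)
        b = rng.normal(signal_strength * content_difference, noise_sd, n_trials)
        return (b.mean() - a.mean()) / noise_sd   # roughly d-prime

    for strength in [1.0, 0.5, 0.1, 0.0]:
        print(f"signal strength {strength:.1f} -> d' ~ {discriminability(strength):.2f}")

    # at low strength (faint imagery) the same content difference is barely
    # distinguishable; at zero it is indistinguishable from seeing nothing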

illusionism

so we addressed why there is something it is like to engage in mental imagery. there need not be any magic involved, just like there need not be magic involved in normal perception either. and there may be some shared qualities between imagery and actual perception. but does it mean that imagery (or more generally, imagination in any other sensory modality) is the correct way to think about the experience?

earlier i mentioned that there are two major / popular strategies for dealing with those ‘classic’ philosophical problems of consciousness. one is the phenomenal concept strategy. the other is illusionism. the two strategies are often seen as mutually exclusive, or in competition somehow. but i don’t really think that way. they are different, but the difference may be rather subtle.

so on illusionism, when we think about our experiences, we are actually somewhat mistaken. how bad that mistake is … depends on the version of the theory. one version is ultra-strong illusionism, which is mostly just set up as a strawman by its opponents. according to this version, we only think we are conscious, but we never really are. not in any way. but nobody really believes that version of illusionism, not really.

on weak illusionism, which even some people who openly hate illusionism endorse, we are only mistaken in thinking that there are qualia. but we aren’t wrong in thinking that we are conscious somewhat.

so, obviously, the question is how we are conscious somewhat, exactly. so we’ve agreed that there’s something it is like to be in some mental states, e.g. seeing a cat. it is so, in the sense that the question - so what is it like? - makes sense. there is something to be said about it. on the other hand, if i ask you: so what is it like to have some of your V1 neurons firing at 10hz, below perceptual threshold? it should draw a blank. there is nothing it is like. and that’s not just becoz of the way i pose the question, which assumes you know some neuroscience. if i actually stimulate your brain that way, you also wouldn’t be able to tell me what it is like. there is just nothing it is like to be stimulated that way. so, some states are conscious, some aren’t. that’s a meaningful distinction worth keeping.

but the question is, when we think about what it is like to see the cat, are we thinking about the experience itself, really? or are we just engaging in a different activity altogether, i.e. summoning a mental image, which shares some aspects but not all aspects with the experience itself? if imagery is a valid way of thinking about the experience, then the phenomenal concept strategy is ok. but if imagery is just a way to think about something close enough to the experience, thereby allowing us to do some thinking about the experience, approximately… well, then, this may be good enough for evolutionary purposes, but you aren’t really thinking about the experience itself. not exactly.

this is not at all to say you are mistaken in thinking that you’re conscious. you think you are and i respect that. who am i to say otherwise. you say there is something it is like to see a cat, and i believe you. there probably is indeed something it is like, as opposed to nothing, as in the case of subthreshold brain stimulation mentioned above. but when you think about what it is like, perhaps you aren’t really just thinking about the experience of seeing a cat - not exactly. you also invoke something else. you’re simulating it, rather than just thinking about it. maybe that’s becoz there’s just no other way for you to think about *it*, other than to simulate it. even when you are seeing the cat right now, if you focus on it you just think about the cat. to think about the experience itself is an awkward thing, and perhaps it can really only be done via something like imagery. but a mental simulation is a complex process. it may be somewhat similar to the experience itself, but also somewhat different. so you may very well be mistaken about some aspects of it. like all that qualia stuff, which ultimately leads to all of those age-old problems of consciousness.

this is a bit like saying, there’s no way for me to think about the pandemic without getting deeply emotional about it. this is especially the case when i think about the various politicians’ reactions to it, including those who exploit the ‘opportunity’ as a moral free-for-all. but my feeling emotional may have nothing to do with the pandemic itself. it’s just me. but i can’t help thinking about it this way, and this may color my thinking about the pandemic. if i ended up finding it intuitive that there’s something emotional about the virus itself though, i would be very much mistaken.

aphantasia

so we now know why we may be somewhat mistaken in thinking about consciousness - something i called ‘illusionism’ in a previous blogpost. exactly how mistaken we are is… well, it is what it is. the view does not deny that there is something it is like to have certain brain processes going on in your head. but it denies qualia.

and i have given an account of why we may be so mistaken as to think there are such things as qualia, which give us all the trouble. ah well, maybe ‘an account’ is a bit of an exaggeration. it’s more like a just-so story. but a story is better than no story.

it has not escaped my imagination that the foregoing implies some predictions about aphantasia, the condition where some people may not have any visual imagery. if a theoretical view makes no new predictions, it is somewhat worthless (to me). so take this as a ‘pre-registration’ of the most informal kind: we expect that aphantasics may not have the same intuitions about qualia, specifically in the sensory modality in which they show aphantasia. we are on it. we are the empirical folks after all.

stay tuned.

***

i will try to reply to comments and questions on twitter - https://twitter.com/hakwanlau/status/1253779189527834624?s=20