Saturday, January 27, 2018

reflections on machine consciousness (recent Science paper with Dehaene & Kouider)

when that paper came out, a good friend commented: "it didn't sound like you."

my reply was, ah well, i was just a middle co-author.... :-)

more to the point: the main idea of writing about AI & consciousness did come from Stan first, and in many ways, i think his voice is more prominent in this collaborative effort than mine. Sid has worked with Stan before, so i assume they share many common views. my views are well known to be quite different from Stan's on many issues (will explain more in the next post). arguing over all of them would have been quite some work, but more than that, the trouble is space. the nature of this sort of general review is that it is for a broad audience, so many details and intricate points are left out. of course there are many things on which i don't necessarily agree with Stan and Sid, but at times we felt it was more important to put aside our differences. on this front i think we have done quite well.

there is still a lot of stuff i hate to have left out. for instance, i say this elsewhere too: Victor Lamme's local recurrency view and Ned Block's interpretation / variant are definitely important forces to be reckoned with. but i think ultimately we left them out becoz these views have the core assumption that subjective experience is constitutively disconnected from cognition. in the context of thinking about how one may build a machine that will be conscious, they aren't very useful. still, i would have liked to have discussed this delicate point more clearly, if we had the space. just becoz a theory isn't useful for building a conscious machine doesn't mean the theory isn't true. i independently think the theory is wrong (on empirical grounds), but if it were true, the whole business of building conscious machines would get very tricky, if not downright impossible.

but anyway, i tried to cite as much of the stuff i think is worth citing as possible. in the end, space limitations meant i didn't cite much of my own work at all. in hindsight perhaps i really should have pushed to cite this piece on using decoded neurofeedback to manipulate metacognition, for instance. but anyway, i truly believe that others' work is important too. e.g. this i'm very happy to have cited. so you see, i'm torn, and i've tried. if you feel unfairly ignored, i'm sorry; i'm sure some of my own students and postdocs feel the same way too.

so overall, i wouldn't say i totally took a backseat in this exercise. there are specific ideas that i am happy to have brought to the table: the idea that perceptual reality monitoring may be a particularly important aspect of C2 processing; the relationship with Generative Adversarial Networks, in particular how they currently don't address perceptual reality monitoring, and how this may well be crucial; and the argument that empirically, losing C1 & C2 does mean you lose subjective experience, for example.
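to unpack the GAN point a little: in a GAN, a generator produces fake samples and a discriminator learns to tell them apart from real ones. but nothing in the standard setup asks the system to judge whether its *own* current signal is externally caused or self-generated - which is roughly what perceptual reality monitoring does. here's a minimal toy sketch of that discriminative judgment (my own illustration, not from the paper; the distributions and names are made up for the example):

```python
# toy sketch: a tiny logistic "discriminator" learns to judge whether a
# signal is externally caused or internally generated - a crude stand-in
# for perceptual reality monitoring. everything here is invented for
# illustration; it is not the architecture discussed in the paper.
import math
import random

random.seed(0)

def sample_external(n):
    # stand-in for externally caused sensory signals, centred near +1.0
    return [random.gauss(1.0, 0.3) for _ in range(n)]

def sample_internal(n):
    # stand-in for internally generated signals, centred near -1.0
    return [random.gauss(-1.0, 0.3) for _ in range(n)]

def train_discriminator(externals, internals, lr=0.1, epochs=200):
    # one-unit logistic discriminator: p(external | x) = sigmoid(w*x + b)
    w, b = 0.0, 0.0
    data = [(x, 1.0) for x in externals] + [(x, 0.0) for x in internals]
    for _ in range(epochs):
        for x, label in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (label - p) * x   # gradient step on log-likelihood
            b += lr * (label - p)
    return w, b

def judge(w, b, x):
    # the "reality monitoring" verdict: is this signal externally caused?
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5

w, b = train_discriminator(sample_external(100), sample_internal(100))
print(judge(w, b, 1.2), judge(w, b, -1.2))
```

the point of the toy: the judgment is *about the system's own signals*, not about someone else's samples - and that self-directed monitoring step is exactly what an off-the-shelf GAN discriminator doesn't do.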

naturally, others commented, and we had to respond. some of the ideas that i insisted on including (as listed above) came in handy - apparently they were just missed on first reading. of coz just having some C1 and some C2 doesn't mean you are conscious - you have to have the right kind, implemented the right way, and that's exactly the point of the exercise: to make a first step in mapping these out.

one anecdote: in checking the copyedited proofs, i found that the tone of the piece had been edited somewhat, such that we sounded more confident than we had intended. i asked for it to be toned down a bit, but it may still be too much....

overall, i think this may be the most interesting point to some: are we really so sure as to say we already know how to build a conscious machine? no, not really. for me personally, maybe not at all. you know i tend to emphasize a more conservative, feet-on-the-ground, empirical approach. that philosophy remains. but the issue here is that the question of machine consciousness will no doubt arise. currently, the so-called more mathematical theories are, to my mind, non-starters. it would be nice if we could agree on how to demonstrate consciousness, a Turing test of sorts for qualia. would be nice .... but good luck. so all we are suggesting is, instead of going for some abstract theoretical proof or consensus that we'll never get, why don't we start with the brain? like, a good ol' empirical science approach - as an alternative, realistic solution to this pressing issue that is for sure going to arise sooner or later? if we understand enough about brain computations, and know which kinds are conscious and which kinds are not, we can meaningfully say how much an artificial cognitive architecture mimics human consciousness. ultimately, that may be the best we can do - just as it may be the best we can do in saying whether some animals are conscious. all of this means: the NCC project is not done. more empirical work is still needed. but we would do well to focus on this first rather than something else - something wilder.

in this sense, this doesn't sound like a very strong revelation. in fact, i would say it sounds almost boring / commonsense. then why take the trouble to write this? i guess the 3 of us all feel that there is some urgency to this, becoz in recent years the field might have shown signs of slipping off in a different direction. as i have expressed here earlier, such a theoretically indulgent direction may be extremely dangerous. to me, it is a problem of some urgency. we are at such a crossroads, right here. i'd like to think this is what brought Stan and me together, which, i have to say, sappy as it may sound, warms my heart. because as some of you know, Stan and i had once disagreed rather intensely ......

how much did we disagree exactly? why is the agreement that both C1 and C2 are important, and perhaps conceptually orthogonal aspects of consciousness, such remarkable progress? well, nearly 13 years ago, a foolish young man misbehaved ..... for this, we have to return to the series On Combat....

Sunday, January 7, 2018

on combat, part 1

(to those of you who just wanted to know why i asked the anatomy question the other day re: where to place the central sulcus, feel free to skip the first part and go straight to the part below the ***)

happy 2018~

before xmas i said i would talk about what it means to argue, sometimes rather intensely and heatedly, with close colleagues & friends. so here it is, a multi-part essay on the pros and cons of this sort of academic 'lifestyle', contra being a civilian, where all this comes from, and all that.

i used not to know what to call it. aggressive? critical? contentious? argumentative? play fighting? as is often the case with finding the right words, i think Ned Block has it - 'combative' is the word i borrow from him here.

although Ned is my grand-teacher (he taught Joe Lau, who taught me most of the philosophy i know), we hardly ever see eye to eye re: our own views on consciousness. but it is exactly through arguing with him over and over that i've learned so much. he is by far my favorite debate opponent - someone you can count on to catch the kitchen sink with a smile when you throw it at him as hard as you can, and then he'll give you a reply that keeps you thinking for weeks. no offence is ever taken personally, becoz we both know this is good for you - if you care about getting things right.

in a way, i'd like to think of it as a form of humility. given how hard the problem we're dealing with is, and how little data we have (because we aren't really funded by the mainstream national agencies, at least in the US), the only way to know if our theory is any good is by testing it through harsh criticism. we have to learn not to be annoyed by our critics, however nitpicky or destructive we feel they are. the devil is often in the details. & if you can't stand this sort of argument, or are prone to take things personally, you just have no business studying consciousness.

one funny thing is, if you do this kind of arguing often and for long enough, you also develop a certain kind of taste. you'll know when some views are just not solid enough, from the sheer smell of it. you know that if you go poke at them, they would just fall apart. no amount of big equations can mask sheer nonsense, or superficial, trivial rubbish. for this reason, i do enjoy chatting with Ned a lot about other people's work. there, we usually agree, ironically. seeing him at conferences provides much moral support - "yeah that stuff is BS... such a shame that it came out at a keynote." ... but meanwhile we still won't ever agree about our own views.

besides Ned & Joe, my doctoral father (and postdoc advisor) Dick Passingham has had a great influence on me too, of coz. though he never fully approved of my philosophical bent, he himself studied philosophy as an undergrad, and i'd like to think his style of argumentation often has the same flavor. we destroyed each other's views and ideas on a regular basis. there was never a tough question too 'nasty' to be asked - one thing i learned as a young grad student at Oxford was, so long as you ask it in a posh voice, and ask it at the end of the seminar talk rather than interrupting people in the middle, the tougher the question, the better. it was a real disappointment to learn later on that such a culture was not universal across departments and countries. as an awkward foreigner, doing that was at some point somewhat easier and more enjoyable than chatting with people 'casually' in the pub, where the conversations can go in any direction, with cultural references that i didn't have a clue about.....

***

i have mentioned previously that this commentary on IIT was one of the more 'serious' (as in provocative) pieces of writing i did last year. but actually, thinking more about it, i think this paper with Brian Odegaard & Bob Knight was the real deal - maybe not just for 2017, but in recent years. funny though, the 'opponents' are the same people (who defend IIT, i.e. Koch, Tononi, Tsuchiya, Boly, etc).

we have known for some time already that if we don't ask people to press buttons to report, simple fMRI activity in the PFC decreases. that's not really new, & it doesn't mean that the area isn't still doing something meaningful re: conscious perception, as is quite easily measurable by other methods (including just slightly more modern ways of doing fMRI, such as MVPA).

unfortunately, the confusion has grown in the literature again in the past few years, in ways that i think are just getting out of hand. i have defended the role of PFC in consciousness previously, but am really not sure if it is so easy to set the record straight this time. along with catchy phrases like 'no report paradigm', several claims that are empirically misleading / plain wrong (e.g. that PFC activity does not reflect perceptual content) seem to have caught on, as they appeared in multiple high profile reviews.

i hope Brian's piece will do some good in clarifying the issues. it is by far one of my favorites of all the papers coming out of my lab recently. tightly argued, concise, fair, thorough. i certainly couldn't have done a better job myself if i had done it solo. it makes me happy & proud that Brian has taken our 'family' style of writing & argumentation and developed it into something better - something more mature. i hope Joe & Dick will both see their influences there too. (ok, enough self praise, please go read it yourself).

and of course, i've been corresponding with Melanie (Boly) about this too. i'd like to think overall we have the better arguments. but as usual, the devil is in the details. one case study comes down to where we place (i.e. label) the central sulcus......

see below: which of the two ways of labeling the sulci strikes you as more plausible?

[image: the brain in question, shown with two alternative labelings of the sulci]

... the above brain has gone through frontal lobectomy, so we can't estimate where the central sulcus is from the front end; we don't know how much was cut. but from the parietal end we should be able to tell where the central sulcus is more likely to be. i recommend you make a choice between the two first, then read Brian's paper, and focus especially on Boly et al's comments near the end of our piece.

it does get hairy, doesn't it? and this is just about this single case (reported by Brickner). such is the way of science. what worries me is: if we as a field can't resolve such relatively straightforward empirical issues, what business do we have talking about more slippery things like IIT?

the debate goes on...... & if you worry that this is getting a little intense, or that some of us may take this personally, you probably underestimate us as a field..... (to be continued)