when that paper came out, a good friend commented: "it didn't sound like you."
my reply was, ah well, i was just a middle co-author.... :-)
more to the point, the main idea of writing about AI & consciousness did come from Stan first, and in many ways i think his voice is more prominent in this collaborative effort than mine. Sid has worked with Stan before, so i assume they share many common views. my own views are well known to be quite different from Stan's on many issues (i will explain more in the next post). arguing over all of them would have been quite some work, but more than that, the trouble is space. general reviews of this sort are written for a broad audience, so many details and much of the intricate stuff are left out. of course there are many things on which i don't necessarily agree with Stan and Sid, and at times we felt it was more important to put aside our differences. on that front i think we have done quite well.
there is still a lot of stuff i hate to have left out. for instance, as i have said elsewhere too: Victor Lamme's local recurrency view and Ned Block's interpretation / variant are definitely important forces to be reckoned with. but i think ultimately we left them out because these views have the core assumption that subjective experience is constitutively disconnected from cognition. in the context of thinking about how one might build a machine that will be conscious, they aren't very useful. still, i would have liked to discuss this delicate point more clearly, had we had the space. just because a theory isn't useful for building a conscious machine doesn't mean it isn't true. i independently think the theory is wrong (for empirical reasons), but if it were true, the whole business of building conscious machines would get very tricky, if not downright impossible.
but anyway, i tried to cite as much of the stuff i think is worth citing as possible. in the end, space limitations meant i didn't cite much of my own work at all. in hindsight perhaps i really should have pushed to cite this piece on using decoded neurofeedback to manipulate metacognition, for instance. but anyway, i truly believe that others' work is important too. this, for example, i'm very happy to have cited. so you see, i'm torn, and i've tried. if you feel unfairly ignored, i'm sorry; i'm sure some of my own students and postdocs feel the same way too.
so overall, i wouldn't say i totally took a back seat in this exercise. there are specific ideas that i am happy to have brought to the table: the idea that perceptual reality monitoring may be a particularly important aspect of C2 processing; the relationship to Generative Adversarial Networks, in particular how current GANs don't address perceptual reality monitoring, which may well be crucial; and the argument that, empirically, losing C1 & C2 does mean you lose subjective experience.
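to make the GAN point a bit more concrete, here is a tiny, purely illustrative sketch - mine alone, not from the paper, and every function name and number in it is made up. a standard GAN discriminator judges whether a sample of data is real or generated, and its job is about the content of that sample; a perceptual reality monitor, as i mean it, would instead judge whether the system's own current representation is being driven by the world or was generated internally (imagery, noise, dreaming).

```python
# Purely illustrative sketch (not from the paper). The point of contrast:
# a GAN discriminator asks "is this sample real or fake?", while a
# perceptual reality monitor asks "is MY current representation externally
# caused or self-generated?" All numbers and names here are invented.

import numpy as np

rng = np.random.default_rng(0)

def externally_driven(n):
    # toy first-order representations caused by an external stimulus
    return rng.normal(loc=1.0, scale=0.5, size=(n, 8))

def self_generated(n):
    # toy first-order representations produced internally, with different statistics
    return rng.normal(loc=0.0, scale=1.0, size=(n, 8))

def reality_monitor(representation, threshold=0.5):
    # toy higher-order monitor: tags a representation as "externally caused"
    # when its mean activation exceeds an assumed threshold; real brains
    # presumably rely on much richer cues (precision, agency, predictions)
    return representation.mean(axis=1) > threshold

x_ext = externally_driven(5)
x_img = self_generated(5)
print("judged external (truly external):      ", reality_monitor(x_ext))
print("judged external (truly self-generated):", reality_monitor(x_img))
```

the design point, as i see it, is simply that in a standard GAN the discriminator's verdict is about the data and is typically discarded after training, whereas a reality monitor would be a standing part of the architecture, continuously tagging the system's own states - and that is the bit current GANs don't address.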
naturally, others commented, and we had to respond. some of the ideas that i insisted on including (as listed above) came in handy - apparently some readers just missed them on first reading. of course, just having some C1 and some C2 doesn't mean you are conscious - you have to have the right kind, implemented the right way, and that's exactly the point of the exercise: to take a first step in mapping these out.
one anecdote: in checking the copyedited proofs, i found that the tone of the piece had been edited somewhat, such that we sounded more confident than we had intended. i asked for it to be toned down a bit, but it may still be too much....
overall, i think this may be the most interesting point to some: are we really so sure as to say we already know how to build a conscious machine? no, not really. for me personally, maybe not at all. you know i tend to emphasize a more conservative, feet-on-the-ground, empirical approach. that philosophy remains. but the issue here is that the question of machine consciousness will no doubt arise. currently, the so-called more mathematical theories are, to my mind, non-starters. it would be nice if we could agree on how to demonstrate consciousness, a Turing test of sorts for qualia. it would be nice .... but good luck. so all we are suggesting is: instead of going for some abstract theoretical proof or consensus that we'll never get, why don't we start with the brain? a good ol' empirical science approach - as an alternative, realistic solution to this pressing issue that is for sure going to arise sooner or later. if we understand enough about brain computations, and know which kinds are conscious and which are not, we can meaningfully say how much an artificial cognitive architecture mimics human consciousness. ultimately, that may be the best we can do - just as it may be the best we can do for saying whether some animals are conscious. all of this means the NCC project is not done. more empirical work is still needed. but we would do well to focus on this first, rather than on something else - something wilder.
in this sense, this doesn't sound like a very strong revelation. in fact, i would say it sounds almost boring, common sense even. then why take the trouble to write this? i guess the three of us all feel there is some urgency to it, because in recent years the field may have shown signs of slipping off in a different direction. as i have expressed here earlier, such a theoretically indulgent direction may be extremely dangerous. to me, it is a problem of some urgency. we are at such a crossroads, right here. i'd like to think this is what brought Stan and me together, which, i have to say, sappy as it may sound, warms my heart. because as some of you know, Stan and i had once disagreed rather intensely ......
how much did we disagree exactly? why is agreeing that both C1 and C2 are important, and perhaps conceptually orthogonal, aspects of consciousness such remarkable progress? well, nearly 13 years ago, a foolish young man misbehaved ..... for this, we have to return to the series On Combat....