over the past week i've been rather active on FB. i used to enjoy arguing about stuff there, from the philosophical to the political, but i haven't done so for a while (pretty much since the last US election). then we posted a paper on bioRxiv recently, and it attracted some comments.
long story short: one key finding relating consciousness to the PFC is that TMS to the latter can change visual metacognition. that study was done by Rounis et al (including yours truly) a long time ago; in fact, that was when the psychophysical measure meta-d' was first introduced. since then, multiple studies have firmly established the empirical link between PFC and visual metacognition. besides correlational observations, causal manipulations of PFC activity, e.g. lesions, neurofeedback, or chemical inactivation in monkeys, all change metacognition. so we can take it to be a done deal; the details of the original study aren't so critical anymore.
but when Bor et al attempted to replicate it anyway, it didn't work. in a way, no surprise: the study was done long ago and perhaps wasn't set up in the most robust way possible (we didn't use Brainsight); we thought it was a long shot back then (Vincent Walsh, the expert on TMS and visual perception, told me it wouldn't ever work!) and didn't want to invest the time & resources. also, we all know that in cog neuro we often lack power, so when something doesn't 'replicate', it may well just be a false negative. and Bor et al specifically did it in a way that was not so powerful: they elected to use a between-subject design.
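(for the stats-minded, here's a toy simulation of why that matters. this is my own sketch with made-up numbers, not anything from either paper: stable subject-to-subject differences add noise to a between-group comparison but cancel out when each subject serves as their own control.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n, effect = 5_000, 20, 0.5      # hypothetical sample size & effect
subj_sd, noise_sd = 1.0, 0.5            # subject variability dominates noise
hits_between = hits_within = 0

for _ in range(n_sims):
    # between-subject: different subjects in each group, so
    # subject-level variability ends up in the group comparison
    g1 = rng.normal(0, subj_sd, n) + rng.normal(0, noise_sd, n)
    g2 = rng.normal(0, subj_sd, n) + effect + rng.normal(0, noise_sd, n)
    hits_between += stats.ttest_ind(g1, g2).pvalue < 0.05

    # within-subject: same subjects in both conditions, so
    # subject-level variability cancels in the paired comparison
    subj = rng.normal(0, subj_sd, n)
    c1 = subj + rng.normal(0, noise_sd, n)
    c2 = subj + effect + rng.normal(0, noise_sd, n)
    hits_within += stats.ttest_rel(c1, c2).pvalue < 0.05

print(f"power, between-subject: {hits_between / n_sims:.2f}")
print(f"power, within-subject:  {hits_within / n_sims:.2f}")
```

with these made-up parameters the within-subject test detects the effect several times more often; the exact numbers don't matter, only the gap.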
but the funny thing is that we think they actually replicated it, just interpreted it wrong (i saw someone on twitter calling it a 'replication failure failure'): they found the effect, but insisted on chucking away some subjects because their data were 'bad' - after which the effect was gone. so we ran a simulation to see whether excluding subjects really does you any good. we found that it doesn't: surprisingly, it doesn't really change the false positive rate for testing the hypothesis. i.e. the statistical test is just as valid whether you exclude those subjects or not.
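(again a minimal sketch of the point, my own toy version rather than the actual simulation in the paper: under the null of no TMS effect, the false positive rate comes out near .05 whether or not you drop the 'bad-looking' subjects; the exclusion rule here is an arbitrary stand-in, applied identically to both groups and blind to the hypothesis.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, alpha = 10_000, 20, 0.05
fp_all = fp_excl = 0

for _ in range(n_sims):
    # null world: TMS and sham groups drawn from the same distribution
    tms = rng.normal(0, 1, n)
    sham = rng.normal(0, 1, n)

    # analysis 1: keep everyone
    fp_all += stats.ttest_ind(tms, sham).pvalue < alpha

    # analysis 2: drop 'bad-looking' subjects (here: extreme scores),
    # with the rule applied blind to the hypothesis being tested
    tms_k = tms[np.abs(tms) < 1.5]
    sham_k = sham[np.abs(sham) < 1.5]
    fp_excl += stats.ttest_ind(tms_k, sham_k).pvalue < alpha

print(f"false positive rate, everyone kept:      {fp_all / n_sims:.3f}")
print(f"false positive rate, 'bad' ones dropped: {fp_excl / n_sims:.3f}")
```

both rates land near the nominal .05, which is the sense in which the stat is 'just as valid' either way; what exclusion does cost you is subjects, i.e. power.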
so a long discussion ensued, starting with Bor saying that "obviously" they disagree. it really wasn't such an obvious issue to me: do you want to chuck away data just because they 'look bad', even when you already know that keeping them won't hurt the validity of your statistical tests? and knowing that the validity wouldn't change wasn't trivial either; it took us some time to do the analysis to find that out.
anyway, go see the paper and the discussion to decide for yourself.
i'm sure my co-authors are all amused by how much time i can spend on social media arguing about stuff, especially because in this case we already know the answer re: the role of PFC in visual metacognition, thanks to the many other studies. so is all this arguing back and forth really worth it? yeah, there's something strange about that. i'll explain in my next post...