Wednesday, December 22, 2010

Annals of Neurosymbology, Volume I, Issue 1

I recommend reading Jonah Lehrer's piece in the penultimate New Yorker:
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer
I like to make fun of Lehrer for his silly, pandering books (e.g., Proust Was a Neuroscientist) and his general superficiality, but I think he does a reasonably good job here, given the constraints of his science-journalist Weltanschauung. In general, the conundrums described in the article are a salutary reminder of Latour's recurring point that the framework scientists adopt for thinking about what they do, in accordance with the modern constitution, turns certain things into problems that, from another angle, do not have to be regarded as such.
-Bremselhacker

2 comments:

Sturgeron Prandleforth, DSymb (Cantab) said...

This article is excellent. I had held Lehrer in contempt because of the title of Proust Was a Neuroscientist (if that book is any good, the title is doing a lot to disguise the fact) and his appearances on that misleading, vapid, and extremely well-produced organ of neurosymbology, obscurantism and Dawkinsian fundamentalism, Radiolab. His final paragraph is facile and trite, but we can just pretend it's not there.

"[Reforms in the reporting of study results] still wouldn't erase the decline effect. This is largely because scientific research will always be shadowed by a force that can't be curbed, only contained: sheer randomness."

This isn't the result of any investigation, but an a priori feature of the ontology of RCTs. I don't say ontology lightly, and I do not mean epistemology. Lehrer's example of Crabbe's experiments injecting mice with cocaine is an allegory for clinical medicine. The "noise" and "randomness" in the research data are actually existing lives whose concrete particulars can be expected to depart wildly from our expectations, not because people operating with RCTs don't know enough, but because the decision to construct RCTs out of samples and statistical analyses, and then to construct clinics out of RCTs and patients, means that in every case the actual mice and actual patients will differ from their model, no matter whether that model is conceived as contingent and provisional or as universal. I am, to be clear, not talking primarily about the way researchers or practitioners think about this, but about "matter" or, better, things or actants.
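To put that concretely, here is a minimal sketch in Python; the true effect, spread, and sample size are illustrative assumptions of mine, not numbers from Crabbe's experiments or Lehrer's article. The trial's summary can be estimated almost exactly and still describe almost none of the concrete cases it was built from.

# Sketch: a trial can estimate a mean effect well, yet no individual matches it.
# All parameters (true effect, spread, sample size) are illustrative assumptions.
import random

random.seed(1)
TRUE_MEAN, SPREAD, N = 5.0, 4.0, 200  # hypothetical population of treatment responses

responses = [random.gauss(TRUE_MEAN, SPREAD) for _ in range(N)]
estimate = sum(responses) / N  # the "model" the trial produces

within_one_unit = sum(abs(r - estimate) <= 1.0 for r in responses)
print(f"estimated mean effect: {estimate:.2f}")
print(f"subjects within 1 unit of that estimate: {within_one_unit}/{N}")
# The estimate can be excellent while most concrete cases sit far from it.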

It is nothing less than completely fucking startling and totally predictable that Lehrer places the randomness outside of statistics and outside of science. Scientists are "learning more about the world," and the truth they've unearthed is that the truth is more slippery than they thought. Why? Because the world is more unstable than they thought, not because their methods guarantee an instability. What's great here is that, rather unsurprisingly from a certain point of view, statistical reasoning is discovering its own ontology in the world. And that is precisely what it means for something to be a priori. This "discovery" is the felicitous pairing of statistical ontology with actually existing large data sets and the research apparatuses that build them.
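The same point can be made with a toy simulation of the decline effect itself; again, the effect size, noise level, sample size, and selection threshold below are my own assumptions, not anything reported in the article. A fixed, unchanging effect plus the rule that only striking first results get followed up is enough to make the published effect "wear off."

# Sketch: a "decline effect" produced by selection on noisy estimates alone.
# The true effect never changes; only studies whose first estimate clears a
# threshold get followed up, so replications regress toward the fixed truth.
# Effect size, noise, sample size, and threshold are illustrative assumptions.
import random
import statistics

random.seed(2)
TRUE_EFFECT, NOISE, N_PER_STUDY = 0.3, 1.0, 30

def run_study():
    sample = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_PER_STUDY)]
    return statistics.mean(sample)

initial, followups = [], []
for _ in range(5000):
    first = run_study()
    if first > 0.5:  # only "striking" first results get pursued and published
        initial.append(first)
        followups.append(run_study())

print(f"mean of published initial estimates: {statistics.mean(initial):.2f}")
print(f"mean of follow-up estimates:         {statistics.mean(followups):.2f}")
print(f"true effect all along:               {TRUE_EFFECT}")

Nothing in the simulated world shifts between the first study and its replications; the "force" producing the decline lives entirely in the apparatus of sampling and selection.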

There isn't any reason, pure and simple, to suppose that things are completely determinate and that we just don't have all the variables. In fact, you get the impression from Lehrer's final paragraph that he actually thinks this, despite the fact that randomness is incarnate in the lab and the clinic.

I'm pretty sure I should drop everything and read Ian Hacking's books on the history of statistics.

Sturgeron Prandleforth, DSymb (Cantab) said...

NB: While posted by Dr. Benway, the preceding comment is in fact by Twinglebrook-Hastings.