The trouble with brain scans
The Guardian on Facebook
I posted this for the great opening line--even before I'd read the rest of it. But the rest is worth reading too.
One reader advised everybody to read Raymond Tallis, who calls these brain scan studies "neurotrash". You can start here.
And also read Clear up this fuzzy thinking on brain scans by Oliver Oullier.
by Vaughan Bell:
Many of the methods on which brain scan studies are based have been flawed -- as one image of a dead salmon proved
Neuroscientists have long been banging their heads on their desks over exaggerated reports of brain scanning studies. Media stories illustrated with coloured scans, supposedly showing how the brain works, are now a standard part of the science pages and some people find them so convincing that they are touted as ways of designing education for our children, evaluating the effectiveness of marketing campaigns and testing potential recruits. Recently, to the chagrin of French scientists, politicians called for neuro-imaging to be used in the courts to decide on the guilt of criminals, after the technology made its dubious debut in the legal systems of India, Italy and the US.
This misplaced enthusiasm often stems from a misunderstanding about what brain scans tell us. The interpretation seems straightforward according to the popular press -- the coloured blobs represent a "pleasure centre", an "art centre" or perhaps a "love centre" -- but none of this is true.
All of our experiences and abilities rely on a distributed brain network and nothing relies on a single "centre". More than anything, the conclusions depend on the tasks volunteers undertake in the scanner and what each study tells us is limited. This small print has been repeated many times over by scientists. They bemoan how people misunderstand the subtleties and draw unwarranted conclusions. But now neuroscientists have had to come to terms with the fact that many of the methods on which brain scan studies are based have been flawed.
To understand where these flaws come from it's important to know something about how data from the most common technique, functional Magnetic Resonance Imaging or fMRI, is analysed. The scanner creates a 3D map of the brain split up into tens of thousands of tiny blocks called voxels (like pixels but for volume) and each has a value that describes blood flow -- used as a proxy for brain activity, as more active areas need more oxygen. What you want to know is which bits of the brain are more active in certain tasks. Of course, the brain is changing all the time, so scientists use statistics to check that changes in blood flow are due to the experimental tasks and not because of unrelated brain changes. The statistical problem is huge, however, as each scan has about 50,000 data points and thousands of scans are made in a single study.
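As a toy illustration (a sketch, not any real fMRI pipeline -- the grid size and units are made up), a scan can be modelled as a 3D grid of voxels, and the simplest "mass-univariate" analysis just compares each voxel's value between two conditions:

```python
import random

random.seed(1)
shape = (4, 4, 4)  # a toy 4x4x4 grid; real scans have tens of thousands of voxels

def make_scan():
    # Each voxel holds a blood-flow-like value (arbitrary units).
    return {(x, y, z): random.gauss(100, 5)
            for x in range(shape[0])
            for y in range(shape[1])
            for z in range(shape[2])}

task_scan = make_scan()  # volunteer doing the experimental task
rest_scan = make_scan()  # volunteer at rest

# Mass-univariate analysis: compare task vs rest separately at every voxel.
diff = {v: task_scan[v] - rest_scan[v] for v in task_scan}
most_changed = max(diff, key=lambda v: abs(diff[v]))
print(most_changed, round(diff[most_changed], 1))
```

Note that even here, with pure noise in both "scans", some voxel always comes out as most changed -- which is exactly why the statistics matter.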
When we're talking about millions of comparisons, a big problem is false positives. Imagine you are playing two roulette wheels. Clearly, the result of one doesn't affect the outcome of the other, but sometimes they'll both come up with the same number just due to chance. Now imagine you have a roulette wheel for every point or voxel in the brain. A comparison of any two scans could look like some areas show linked activity when really there is no relationship. Ideally, the analysis should separate these chance coincidences from genuine activity, but you may be surprised to learn that hundreds, if not thousands, of studies have been conducted without such corrections. To illustrate the problem, Craig Bennett and his colleagues at the University of California did a spoof experiment on a dead salmon. The standard techniques showed "brain activity" in the deceased fish.
Further illustrating the issue, Edward Vul and Hal Pashler from the University of California showed that some researchers were producing conclusions by first picking out the best results and then seeing if there was a relationship between them. To return to our roulette analogy, it would be like discarding any results that weren't in the range of numbers 1-5 and then using only these selected results to see if any of the same numbers came up, something that is suddenly much more likely. A recent study by Anders Eklund and colleagues from Linköping University in Sweden found that they could find spurious "brain activity" related to non-existent tasks with standard settings on the most popular fMRI analysis software.
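This circular, select-then-test analysis can also be simulated (again a toy sketch with made-up numbers, not any published method): correlate 5,000 noise "voxels" with a noise behavioural score, keep only the strongest correlations, then "confirm" the effect on the same data:

```python
import random

random.seed(0)
n_subjects, n_voxels = 20, 5000

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Pure noise: no voxel is genuinely related to the behavioural score.
behaviour = [random.gauss(0, 1) for _ in range(n_subjects)]
voxels = [[random.gauss(0, 1) for _ in range(n_subjects)]
          for _ in range(n_voxels)]

correlations = [pearson(v, behaviour) for v in voxels]

# Circular analysis: pick the "best" voxels, then report their correlation
# with behaviour using the SAME data that was used to pick them.
selected = [r for r in correlations if abs(r) > 0.5]
inflated = sum(abs(r) for r in selected) / len(selected)
print(len(selected), round(inflated, 2))  # many voxels, mean |r| above 0.5
```

Even though the true correlation is zero everywhere, the selected voxels report an impressively strong average effect. Testing the selection on independent data would make the spurious effect vanish.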
Recent advances have tried to control these problems, and researchers have become much more cautious. "Our default attitude to any new and interesting fMRI finding should be scepticism," says Tal Yarkoni, a neuroscientist at the University of Colorado. "What's particularly problematic," he says, "is the amount of flexibility researchers have when performing their analyses ... you have no idea how many things the researchers tried before they got something to work." Psychologist Russ Poldrack, from the University of Texas, who has been at the forefront of correcting these issues, also highlights cultural issues. This flexible approach "also includes methods that are known by experts to be invalid, but unfortunately these still get into top journals, which only helps perpetuate them". Yarkoni explains that "researchers have a big incentive to come up with exciting new findings", meaning scientists are motivated to "torture" the data and journals are attracted by the media-friendly results.
In light of this, stories about the discovery of "brain centres" fall flat and efforts to base public policy on brain scans become nothing short of ridiculous. But perhaps the most important problem is not that brain scans can be misleading, but that they are beautiful. Like all other neuroscientists, I find them beguiling. They have us enchanted and we are far from breaking their spell.