Lies about Brain Scans / Dead Salmon

I have posted often about the false claims of scientific reliability and experimental rigor on the part of the Psych. Industry. I’m not alone.

“The low statistical power and the imperative to publish incentivizes researchers to mine their data to try to find something meaningful,” says Chris Chambers, a professor of cognitive neuroscience at the University of Cardiff. “That’s a huge problem for the credibility and integrity of the field.”

What credibility?


BOLD Assumptions: Why Brain Scans Are Not Always What They Seem

Moheb Costandi, on DECODER

In 2009, researchers at the University of California, Santa Barbara performed a curious experiment. In many ways, it was routine – they placed a subject in the brain scanner, displayed some images, and monitored how the subject’s brain responded. The measured brain activity showed up on the scans as red hot spots, just as in many other neuroimaging studies.

Except that this time, the subject was an Atlantic salmon, and it was dead.

Dead fish do not normally exhibit any kind of brain activity, of course. The study was a tongue-in-cheek reminder of the problems with brain scanning studies. Those colorful images of the human brain found in virtually all news media may have captivated the public imagination, but they have also been the subject of controversy among scientists over the past decade or so. In fact, neuro-imagers are now debating how reliable brain scanning studies actually are, and are still mostly in the dark about exactly what it means when they see some part of the brain “light up.”

Glitches in reasoning

Functional magnetic resonance imaging (fMRI) measures brain activity indirectly, using powerful magnets to detect changes in the flow of oxygen-rich blood – the blood-oxygen-level-dependent (BOLD) signal. The assumption is that areas receiving an extra supply of blood during a task have become more active. Typically, researchers home in on one or a few “regions of interest,” using “voxels” – tiny cube-shaped chunks of brain tissue containing several million neurons – as their units of measurement.
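To make the voxel idea concrete, here is a minimal sketch in Python with NumPy. All the numbers are synthetic and chosen purely for illustration – this is not real scan data or any lab's actual pipeline. A scan is just a 4-D grid of voxel values over time, and a “region of interest” is a mask over that grid:

```python
import numpy as np

# Toy fMRI volume: 64 x 64 x 30 voxels, 100 time points (synthetic values).
rng = np.random.default_rng(0)
scan = rng.normal(loc=1000, scale=50, size=(64, 64, 30, 100))

# A "region of interest" is just a boolean mask over the voxel grid.
roi = np.zeros((64, 64, 30), dtype=bool)
roi[30:34, 30:34, 14:16] = True          # a small cube of 32 voxels

# The BOLD time course for the region: mean signal across its voxels.
roi_timecourse = scan[roi].mean(axis=0)  # one value per time point
print(roi_timecourse.shape)
```

In a real analysis the mask would come from an anatomical atlas or a prior study, and the signal would be preprocessed before averaging, but the data structure is essentially this.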

Early fMRI studies involved scanning participants’ brains while they performed some mental task, in order to identify the brain regions activated during the task. Hundreds of such studies were published in the first half of the last decade, many of them garnering attention from the mass media.

Eventually, critics pointed out a logical fallacy in how some of these studies were interpreted. For example, researchers may find that an area of the brain is activated when people perform a certain task. To explain this, they may look up previous studies on that brain area, and conclude that whatever function it is reported to have also underlies the current task.

Among many examples of such studies were those that concluded people get satisfaction from punishing rule-breaking individuals, and that for mice, pup suckling is more rewarding than cocaine. In perhaps one of the most famous examples, a researcher diagnosed himself as a psychopath by looking at his own brain scan.

These conclusions could well be true, but they could also be completely wrong, because the area observed to be active most likely has other functions, and could serve a different role than that observed in previous studies.

The brain is not composed of discrete specialized regions. Rather, it’s a complex network of interconnected nodes, which cooperate to generate behavior. Thus, critics dismissed fMRI as “neo-phrenology” – after the discredited nineteenth-century pseudoscience that purported to determine a person’s character and mental abilities from the shape of their skull – and disparagingly referred to it as “blobology.”

When results magically appear out of thin air

In 2009, a damning critique of fMRI appeared in the journal Perspectives on Psychological Science. Initially titled “Voodoo Correlations in Social Neuroscience” and later retitled “Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition,” the article questioned the statistical methods used by neuro-imagers. The authors, Ed Vul of the University of California, San Diego, and his colleagues, examined a handful of social cognitive neuroscience studies and pointed out that their statistical analyses gave impossibly high correlations between brain activity and behavior.

“It certainly created controversy,” says Tal Yarkoni, an assistant professor in the Department of Psychology at the University of Texas, Austin. “The people who felt themselves to be the target ignored the criticism and focused on the tone, but I think a large subset of the neuroimaging community paid it some lip service.”

Russ Poldrack of the Department of Psychology at Stanford University says that although the problem was more widespread than the paper suggested, many neuro-imagers were already aware of it. “They happened to pick on one part of the literature, but almost everybody was doing it,” he says.


The problem arises from the “circular” nature of the data analysis, Poldrack says. “We usually analyze a couple of hundred thousand voxels in a study,” he says. “When you do that many statistical tests, you look for the ones that are significant, and then choose those to analyze further, but they’ll have high correlations by virtue of the fact that you selected them in the first place.”
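The selection effect Poldrack describes is easy to reproduce. In the illustrative simulation below (my own sketch, with subject and voxel counts picked for convenience, not taken from any study), every “voxel” is pure noise – the true brain–behavior correlation is exactly zero – yet the voxel selected after testing all of them shows a strikingly high correlation:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_voxels = 20, 10_000

behavior = rng.normal(size=n_subjects)
# Every voxel is pure noise: no voxel is genuinely related to behavior.
voxels = rng.normal(size=(n_voxels, n_subjects))

# Correlate each voxel's "activity" with behavior across subjects,
# then select the strongest one -- the circular step.
corrs = np.array([np.corrcoef(v, behavior)[0, 1] for v in voxels])
best = np.abs(corrs).max()
print(f"selected-voxel correlation: {best:.2f}")  # large, despite zero true effect
```

The average correlation across all voxels hovers near zero, as it should; it is the act of picking the winner out of thousands of noisy tests that manufactures the impressive number.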

Not long after Vul’s paper was published, Craig Bennett and his colleagues published their dead salmon study to demonstrate that robust statistical analysis is key to interpreting fMRI data. When the statistics are not done rigorously, researchers can easily get false-positive results – that is, see an effect that isn’t actually there, such as activity in the brain of a dead fish.
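At bottom, the dead salmon is a multiple-comparisons problem, and it can be sketched in a few lines. In this toy simulation (synthetic numbers; the voxel count is merely on the order of a whole-brain scan, and Bonferroni is one standard fix, not necessarily the correction Bennett used), every voxel statistic is pure noise, yet an uncorrected threshold still “finds” scores of active voxels:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 130_000                 # roughly the scale of a whole-brain scan

# "Dead salmon" data: a z-statistic of pure noise at every voxel.
z = rng.standard_normal(n_voxels)

# Uncorrected voxel-wise threshold, one-sided p < 0.001 (z > 3.09):
# we expect about 0.001 * 130,000 = 130 false "activations".
uncorrected = int((z > 3.09).sum())

# Bonferroni correction: alpha = 0.05 divided across all tests,
# i.e. p < 0.05 / 130,000 per voxel, roughly z > 4.95 (precomputed).
corrected = int((z > 4.95).sum())

print(uncorrected, corrected)      # many "active" voxels vs. (almost always) none
```

This is exactly why a dead fish can appear to respond to photographs: run enough tests on noise and some will cross any fixed uncorrected threshold.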

The rise of virtual superlabs

The criticisms drove researchers to do better work – to think more deeply about their data, avoid logical fallacies in interpreting their results, and develop new analytical methods.

At the heart of the matter is the concept of statistical power – the probability that a study will detect a genuine effect rather than mistake noise for one. Smaller studies typically have lower power. An analysis published in 2013 showed that underpowered studies are common in almost every area of brain research. This is especially the case in neuroimaging studies, because most of them involve small numbers of participants.
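Power is easy to get a feel for by simulation. This sketch (my own illustration, not from the article) estimates how often a simple test detects a modest true effect – half a standard deviation – at different sample sizes:

```python
import numpy as np

rng = np.random.default_rng(7)

def power(n, effect=0.5, sims=2000):
    """Simulated power of a one-sample test for a true effect of `effect` SDs."""
    hits = 0
    for _ in range(sims):
        x = rng.normal(loc=effect, scale=1.0, size=n)
        t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        if t > 1.96:   # z threshold for two-sided alpha 0.05 (large-n approximation)
            hits += 1
    return hits / sims

for n in (10, 20, 50, 100):
    print(n, round(power(n), 2))
```

With these numbers, detection is unreliable at 10 subjects but near-certain at 100 – which is the intuition behind Poldrack’s move, described below, from 20 subjects to 50 or more.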

“Ten years ago I was willing to publish papers showing correlations between brain activity and behavior in just 20 people,” says Poldrack. “Now I wouldn’t publish a study that doesn’t involve at least 50 subjects, or maybe 100, depending on the effect. A lot of other labs have come around to this idea.”

Cost is one of the big barriers preventing researchers from increasing the size of their studies. “Neuroimaging is very expensive. Every lab has a budget and a researcher isn’t going to throw away his entire year’s budget on a single study. Most of the time, there’s no real incentive to do the right thing,” Yarkoni says.

Replication – or repeating experiments to see if the same results are obtained – also gives researchers more confidence in their results. But most journals are unwilling to publish replication experiments, preferring novel findings instead, and the act of repeating someone else’s experiments is seen as aggressive, as if implying they were not done properly in the first place. Confirmation by repeat experiments is vital to the scientific method! This “unwillingness” is a SOCIAL IMPOSITION on the validity of scientific inquiry. We wouldn’t want to hurt the feelings of the researchers, would we? But no one cares about the consequences to the public!

One way around these problems is for research teams to collaborate with each other and pool their results to create larger data sets. One such initiative is the IMAGEN Consortium, which brings together neuro-imaging experts from 18 European research centers, to share their results, integrate them with genetic and behavioral data, and create a publicly available database.

Five years ago, Poldrack started the OpenfMRI project, which has similar aims. “The goal was to bring together data to answer questions that couldn’t be answered with individual data sets,” he says. “We’re interested in studying the psychological functions underlying multiple cognitive tasks, and the only way of doing that is to amass lots of data from lots of different tasks. It’s way too much for just one lab.”

An innovative way of publishing scientific studies, called pre-registration, could also increase the statistical power of fMRI studies. Traditionally, studies are published in scientific journals after they have been completed and peer-reviewed. Pre-registration requires that researchers submit their proposed experimental methods and analyses early on. If these meet the reviewers’ satisfaction, they are published; the researchers can then conduct the experiment and submit the results, which are eventually published alongside the methods.

“The low statistical power and the imperative to publish incentivizes researchers to mine their data to try to find something meaningful,” says Chris Chambers, a professor of cognitive neuroscience at the University of Cardiff. “That’s a huge problem for the credibility and integrity of the field.”

Chambers is an associate editor at Cortex, one of the first scientific journals to offer pre-registration. As well as demanding larger sample sizes, the format also encourages researchers to be more transparent about their methods.

Many fMRI studies would, however, not be accepted for pre-registration – their design would not stand up to the scrutiny of the first-stage reviewers. “Neuro-imagers say pre-registration consigns their field to a ghetto,” says Chambers. “I tell them they can collaborate with others to share data and get bigger samples.”

Pushing the field forward

Even robust and apparently straightforward fMRI findings can be difficult to interpret, because unanswered questions remain about the nature of the BOLD signal. How exactly does the blood rush to a brain region? What factors affect it? What if greater activation in a brain area actually means the region is working less efficiently?

“What does it mean to say neurons are firing more in one condition than in another? We don’t really have a good handle on what to make of that,” says Yarkoni. “You end up in this uncomfortable situation where you can tell a plausible story no matter what you see.”

To some extent, the problems neuro-imagers face are part of the scientific process, which involves continuously improving one’s methods and refining ideas in light of new evidence. When done properly, the method can be extremely powerful, as the ever-growing number of so-called “mind-reading” and “decoding” studies clearly show.

_____________________________my comment:

That’s just great! In the meantime, hundreds of thousands of children and adults have been “diagnosed” as having abnormal brains and developmental disorders, as well as numerous “mental illnesses” by charlatans, in the “caring, helping, fixing” industry – people who continue to acquire obscene profits at the expense of parents and children who are the targets of borderline “eugenic” activity.


It’s likely that with incremental improvements in the technology, fMRI results will become more accurate and reliable. In addition, a number of newer projects aim to capture brain activity in other ways. For example, one group at Massachusetts General Hospital is working on using paramagnetic nanoparticles to detect changes in blood volume in the brain’s capillaries. According to the researchers, such a method would radically enhance the quality of signals and make it possible to detect brain activity in a single individual, as opposed to fMRI, which requires pooling data from a number of people. Other scientists are diving even deeper, using paramagnetic chemicals to reveal brain activity at the level of individual cells. If such methods come to fruition, we could detect the subtlest activity in the brain – though maybe just not in a dead fish.



2 thoughts on “Lies about Brain Scans / Dead Salmon”

  1. This is looking a bit too much like ‘social-dominance BS gamesmanship – or as ‘the gray fox’ puts it so well -“bogus hierarchical crap”.

    Note that the person referred to has as a title that creature’s Latin name – a name which escapes me at the moment (as to its precise spelling – it’s something like ‘eurycyon’ or similar )

    So there is little-to-none in the way of #science# being done. Normdom is just fine with that; learning was never the goal. What IS the goal is gaining minds in the larger milieu – and as one approaches godhead in Normdom’s instinctual apprehension, one is more likely to be seen as professing ‘received’ information – stuff that needs no proof whatsoever.

    It will be bought – no sales talk needed – as if it came from the head deity of Normdom’s instinct-blessed hardwired pantheon. (Philemon, perhaps – C. Jung’s red book?)

