Thyroid Gland / Important Anatomy for Everyone

I’m currently interested in phony “psychological symptoms” that actually have a specific physical origin; that is, the symptoms are not “mental or emotional” and therefore can only be treated medically – psych mumbo-jumbo has no effect; psych “drugs” may actually exacerbate symptoms and cause further damage and side effects that add to, obscure, or complicate existing conditions. (How well I know this from experience!!) For Asperger and ASD people, this is extremely important: Anxiety is a major problem, but little is known about WHY this is so: anxiety is a pan-human reaction to “outside” social stimulus, environmental triggers, internalized trauma AND physiological “errors” or disease. We must know the origin of excessive anxiety in individuals in order to effectively reduce the debilitating levels many of us live with daily or episodically. 

From: endocrineweb.com 

How Your Thyroid Works

Controlling hormones essential to your metabolism

Your thyroid gland is a small gland, normally weighing less than one ounce, located in the front of the neck. It is made up of two halves, called lobes, that lie along the windpipe (trachea) and are joined together by a narrow band of thyroid tissue, known as the isthmus.

Thyroid has two lobes and an isthmus.

The thyroid is situated just below your “Adam’s apple” or larynx. During development (inside the womb) the thyroid gland originates in the back of the tongue, but it normally migrates to the front of the neck before birth. Sometimes it fails to migrate properly and is located high in the neck or even in the back of the tongue (lingual thyroid). This is very rare. At other times it may migrate too far and end up in the chest (this is also rare).

Iodine + Tyrosine = T3 and T4.

The function of the thyroid gland is to take iodine, found in many foods, and convert it into thyroid hormones: thyroxine (T4) and triiodothyronine (T3). Thyroid cells are the only cells in the body which can absorb iodine. These cells combine iodine and the amino acid tyrosine to make T3 and T4. T3 and T4 are then released into the blood stream and are transported throughout the body where they control metabolism (conversion of oxygen and calories to energy).

Every cell in the body depends upon thyroid hormones to regulate its metabolism. The normal thyroid gland produces about 80% T4 and about 20% T3; however, T3 possesses about four times the hormone “strength” of T4.
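Taken at face value, those two figures imply that total hormonal activity is split roughly evenly between T4 and T3. Here is a back-of-the-envelope check, assuming only the ~80/20 output split and ~4x relative potency quoted above (the numbers are illustrative, not clinical values):

```python
# Rough estimate of each hormone's share of total thyroid activity,
# using the figures quoted above: output is ~80% T4 / ~20% T3,
# and T3 is ~4x as potent as T4.
t4_share, t3_share = 0.80, 0.20
t4_potency, t3_potency = 1.0, 4.0

t4_activity = t4_share * t4_potency   # 0.8
t3_activity = t3_share * t3_potency   # 0.8
total = t4_activity + t3_activity

print(f"T4 contributes {t4_activity / total:.0%} of activity")  # → 50%
print(f"T3 contributes {t3_activity / total:.0%} of activity")  # → 50%
```

So although T3 is only a fifth of the gland’s output by amount, its greater potency makes its contribution to metabolic activity roughly equal to T4’s.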

Hypothalamus secretes TRH; pituitary secretes TSH; thyroid secretes T3 and T4.

The thyroid gland is under the control of the pituitary gland, a small gland the size of a peanut at the base of the brain (shown here in orange). When the level of thyroid hormones (T3 & T4) drops too low, the pituitary gland produces Thyroid Stimulating Hormone (TSH), which stimulates the thyroid gland to produce more hormones. Under the influence of TSH, the thyroid will manufacture and secrete T3 and T4, thereby raising their blood levels.

The pituitary senses this and responds by decreasing its TSH production. One can imagine the thyroid gland as a furnace and the pituitary gland as the thermostat.

Thyroid hormones are like heat. When enough heat reaches the thermostat, the thermostat shuts the furnace off. As the room cools (the thyroid hormone levels drop), the thermostat turns back on (TSH increases) and the furnace produces more heat (thyroid hormones). The pituitary gland itself is regulated by another gland, known as the hypothalamus (shown in the picture above in light blue). The hypothalamus is part of the brain and produces TSH Releasing Hormone (TRH), which tells the pituitary gland to stimulate the thyroid gland (release TSH). One might imagine the hypothalamus as the person who regulates the thermostat, since it tells the pituitary gland at what level the thyroid should be set.
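The furnace-and-thermostat analogy can be sketched as a toy negative-feedback simulation. Everything here is illustrative — the constants, the linear responses, and the function name `simulate` are assumptions for the sketch, not a physiological model:

```python
# Toy model of the feedback loop described above: the pituitary
# ("thermostat") raises TSH when hormone falls below a set point and
# lowers it when hormone rises; the thyroid ("furnace") produces
# hormone in proportion to TSH, while the body clears some each tick.
# All constants are made up for illustration.

def simulate(steps=200, set_point=1.0, hormone=0.2):
    """Iterate the pituitary/thyroid feedback loop for `steps` ticks."""
    tsh = 0.0
    for _ in range(steps):
        # Pituitary: raise TSH when hormone is below the set point,
        # lower it when hormone is above; TSH cannot go negative.
        tsh = max(0.0, tsh + 0.5 * (set_point - hormone))
        # Thyroid: output proportional to TSH, minus constant-rate
        # clearance of circulating hormone.
        hormone += 0.1 * tsh - 0.2 * hormone
    return hormone

print(round(simulate(), 2))  # → 1.0: hormone settles at the set point
```

Starting from a low hormone level, TSH climbs, hormone output rises, and the loop damps down to the set point — the same behavior the thermostat analogy describes in words.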

The thyroid’s hormones regulate vital body functions, including:

  • Breathing
  • Heart rate
  • Central and peripheral nervous systems
  • Body weight
  • Muscle strength
  • Menstrual cycles
  • Body temperature
  • Cholesterol levels
  • Much more!

Thyroid Anatomy Tour: 2nd article follows

How the Thyroid Gland Works
The thyroid is part of the endocrine system, which is made up of glands that produce, store, and release hormones into the bloodstream so the hormones can reach the body’s cells. The thyroid gland uses iodine from the foods you eat to make two main hormones:

  • Triiodothyronine (T3)
  • Thyroxine (T4)
It is important that T3 and T4 levels are neither too high nor too low. Two glands in the brain, the hypothalamus and the pituitary, communicate to maintain T3 and T4 balance.

The hypothalamus produces TSH Releasing Hormone (TRH), which signals the pituitary to tell the thyroid gland to produce more or less T3 and T4 by increasing or decreasing the release of a hormone called thyroid stimulating hormone (TSH).

  • When T3 and T4 levels are low in the blood, the pituitary gland releases more TSH to tell the thyroid gland to produce more thyroid hormones.
  • If T3 and T4 levels are high, the pituitary gland releases less TSH to the thyroid gland to slow production of these hormones.

Why You Need a Thyroid Gland
T3 and T4 travel in your bloodstream to reach almost every cell in the body. The hormones regulate the speed at which your cells work (your metabolism). For example, T3 and T4 regulate your heart rate and how fast your intestines process food. So if T3 and T4 levels are low, your heart rate may be slower than normal, and you may have constipation and weight gain. If T3 and T4 levels are high, you may have a rapid heart rate, diarrhea, and weight loss.

Listed below are other symptoms of too much T3 and T4 in your body (hyperthyroidism):

  • Anxiety
  • Irritability or moodiness
  • Nervousness, hyperactivity
  • Sweating or sensitivity to high temperatures
  • Hand trembling (shaking)
  • Hair loss
  • Missed or light menstrual periods

The following are other symptoms of too little T3 and T4 in your body (hypothyroidism):

  • Trouble sleeping
  • Tiredness and fatigue
  • Difficulty concentrating
  • Dry skin and hair
  • Depression
  • Sensitivity to cold temperature
  • Frequent, heavy periods
  • Joint and muscle pain

Thyroid Hormones / Brain Development

Endocrine System > Thyroid and Parathyroid Glands

Thyroid Hormones: Pregnancy and Fetal Development

Thyroid hormones are critical for development of the fetal and neonatal brain, as well as for many other aspects of pregnancy and fetal growth. Hypothyroidism in either the mother or fetus frequently results in fetal disease; in humans, this includes a high incidence of mental retardation.

Maternal Thyroid Function During Pregnancy

Normal pregnancy entails substantial changes in thyroid function in all animals. These phenomena have been studied most extensively in humans, but probably are similar in all mammals. Major alterations in the thyroid system during pregnancy include:

  • Increased blood concentrations of T4-binding globulin: TBG is one of several proteins that transport thyroid hormones in blood, and has the highest affinity for T4 (thyroxine) of the group. Estrogens stimulate expression of TBG in liver, and the normal rise in estrogen during pregnancy induces roughly a doubling in serum TBG concentrations.
  • Increased levels of TBG lead to lowered free T4 concentrations, which results in elevated TSH secretion by the pituitary and, consequently, enhanced production and secretion of thyroid hormones. The net effect of elevated TBG synthesis is to force a new equilibrium between free and bound thyroid hormones and thus a significant increase in total T4 and T3 levels. The increased demand for thyroid hormones is reached by about 20 weeks of gestation and persists until term.
  • Increased demand for iodine: This results from a significant pregnancy-associated increase in iodide clearance by the kidney (due to increased glomerular filtration rate), and siphoning of maternal iodide by the fetus. The World Health Organization recommends increasing iodine intake from the standard 100 to 150 µg/day up to at least 200 µg/day during pregnancy.
  • Thyroid stimulation by chorionic gonadotropin: The placentae of humans and other primates secrete huge amounts of a hormone called chorionic gonadotropin (in the case of humans, human chorionic gonadotropin or hCG) which is very closely related to luteinizing hormone. TSH and hCG are similar enough that hCG can bind and transduce signalling from the TSH receptor on thyroid epithelial cells. Toward the end of the first trimester of pregnancy in humans, when hCG levels are highest, a significant fraction of the thyroid-stimulating activity is from hCG. During this time, blood levels of TSH often are suppressed, as depicted in the figure to the right. The thyroid-stimulating activity of hCG actually causes some women to develop transient hyperthyroidism.

The net effect of pregnancy is an increased demand on the thyroid gland. In normal individuals, this does not appear to represent much of a load on the thyroid gland, but in females with subclinical hypothyroidism, the extra demands of pregnancy can precipitate clinical disease.

Thyroid Hormones and Fetal Brain Development

In 1888 the Clinical Society of London issued a report underlining the importance of normal thyroid function on development of the brain. Since that time, numerous studies with rats, sheep and humans have reinforced this concept, usually by study of the effects of fetal and/or maternal thyroid deficiency. Thyroid hormones appear to have their most profound effects on the terminal stages of brain differentiation, including synaptogenesis, growth of dendrites and axons, myelination and neuronal migration.

Thyroid hormones act by binding to nuclear receptors and modulating transcription of responsive genes. Thyroid hormone receptors are widely distributed in the fetal brain, and present prior to the time the fetus is able to synthesize thyroid hormones. It has proven surprisingly difficult to identify the molecular targets for thyroid hormone action in the developing brain, but some progress has been made. For example, the promoter of the myelin basic protein gene is directly responsive to thyroid hormones and contains the expected hormone response element. This fits with the observation that induced hypothyroidism in rats leads to diminished synthesis of mRNAs for several myelin-associated proteins.

It seems clear that there is a great deal more to learn about the molecular mechanisms by which thyroid hormones support normal development of the brain.

Thyroid Deficiency in the Fetus and Neonate

The fetus has two potential sources of thyroid hormones – its own thyroid and the thyroid of its mother. Human fetuses acquire the ability to synthesize thyroid hormones at roughly 12 weeks of gestation, and fetuses from other species at developmentally similar times. Current evidence from several species indicates that there is substantial transfer of maternal thyroid hormones across the placenta. Additionally, the placenta contains deiodinases that can convert T4 to T3.

There are three types or combinations of thyroid deficiency states known to impact fetal development:

Isolated maternal hypothyroidism: Overt maternal hypothyroidism typically is not a significant cause of fetal disease because it usually is associated with infertility. (How does this affect infertile women who become pregnant using donor embryo implantation?) When pregnancy does occur, there is increased risk of intrauterine fetal death and gestational hypertension. Subclinical hypothyroidism is increasingly being recognized as a cause of developmental disease – this is a rather scary situation. Several investigators have found that mild maternal hypothyroidism, diagnosed only retrospectively from banked serum, may adversely affect the fetus, leading in children to such effects as slightly lower performance on IQ tests and difficulties with schoolwork. The most common cause of subclinical hypothyroidism is autoimmune disease, and it is known that anti-thyroid antibodies cross the human placenta. Thus, the cause of this disorder may be a passive immune attack on the fetal thyroid gland.

Isolated fetal hypothyroidism: This condition is also known as sporadic congenital hypothyroidism. It is due to failure of the fetal thyroid gland to produce adequate amounts of thyroid hormone. Most children with this disorder are normal at birth, because maternal thyroid hormones are transported across the placenta during gestation. What is absolutely critical is to identify and treat this condition very shortly after birth. If treatment is not instituted quickly, the child will suffer permanent mental and growth retardation – a disorder called cretinism. This problem has largely disappeared in the US and many other countries due to large-scale screening programs to detect hypothyroid infants.

Iodine deficiency – Combined maternal and fetal hypothyroidism: Iodine deficiency is, by a large margin, the most common preventable cause of mental retardation in the world. Without adequate maternal iodine intake, both the fetus and mother are hypothyroid, and if supplemental iodine is not provided, the child may well develop cretinism, with mental retardation, deaf-mutism and spasticity.

The World Health Organization estimated in 1990 that 20 million people had some degree of brain damage due to iodine deficiency experienced in fetal life. Endemic iodine deficiency remains a substantial public health problem in many parts of the world, including many areas in Europe, Asia, Africa and South America. In areas of severe deficiency, a large fraction of the adult population may show goiters. In such settings, overt cretinism may occur in 5 to 10 percent of offspring, and perhaps five times that many children will have mild mental retardation. This is a serious, tragic and, most importantly, a preventable problem.

The effects of mild maternal hypothyroidism on the cognitive function of children have been evaluated in several studies, including some in which mothers with low levels of T4 or high levels of TSH were treated prophylactically with thyroid supplementation. The results of these studies are somewhat divergent, and the benefit of routinely testing pregnant women and treating those with suspected thyroid deficiency remains unsettled.

The fetus of an iodine-deficient mother can be successfully treated if iodine supplementation is given during the first or second trimester. Treatment during the third trimester or after birth will not prevent the mental defects.

Iodine deficiency can also be a significant problem in animal populations. The most common manifestation in sheep, cattle, pigs and horses is a high incidence of stillbirths and birth of small, weak offspring.

Hyperthyroidism in Pregnancy

Gestational hyperthyroidism is associated with increased risk of several adverse outcomes, including preeclampsia, premature labor, fetal or perinatal death and low birth weight. In humans, hyperthyroidism usually is the result of Graves’ disease, which involves development of autoantibodies against the TSH receptor that stimulate the thyroid gland.

Extraverted – Introverted Thinking / Ask C.G. Jung

Hmmm.. back to the library after 3 days with no access to the Internet; interesting experience. Anyway – had to go old school – actual books, pen and paper. Very productive, if frustrating. I’ve been meaning to get back to a question on my mind: What did Jung actually mean by extraverted and introverted thinking?

My suspicion was that most of us are using these terms wrongly, and confusing related terms such as intuition, instinct, “gut feeling” “sense of” “hunch” – a quick inspection of The Portable Jung, Viking Press, 1972 (one of those reference books I keep close), confirmed that indeed, my “memory” of these ideas and others was somewhat confused.  Also, I had not reviewed the subject in light of what I now know about Asperger’s – and found that Jung’s ideas have new importance.

Remember: the following is extraversion and introversion applied to THINKING ONLY, not to the personality as a whole.

I will begin with one quote: (page 197, should you have a copy) regarding extraverted thinking:

“…but when the thinking depends primarily not on objective data but on some second- hand idea, the very poverty of this thinking is compensated by an all the more impressive accumulation of facts (or data) congregating round a narrow and sterile point of view, with the result that many valuable and meaningful aspects are completely lost sight of. Many of the allegedly scientific outpourings of our own day owe their existence to this wrong orientation.”

Pretty prescient warning for someone writing nearly a century ago, and including his own profession!

Jung is not condemning extraverted thinking here – far from it – but is warning against its mistaken or perverted use in areas that are properly the domain of introverted thinking.

A definition: The general attitude of extraverted thinking is oriented by the object and objective data.

A definition: Introverted thinking is neither directed at objective facts nor general ideas. He asks – “Is this even thinking?” This has significant application to the “Asperger” brain problem – Jung seems to have been peripherally aware of “visual thinking” in dream imagery and symbols in art and alchemy, and yet unable to “see” visual thinking as a distinct brain process, and its importance.

His admission is that both types of thinking are vital to each other, and that the failure of “our age” is that modern western culture “only acknowledges extraverted thinking” – the failure is to recognize that introverted thinking (basically, reflection on personal subjective experience) cannot be “removed” from human thought – nor should it be, because only this co-operative analysis can yield actionable meaning.

He rightly identifies the “problem” of modern “social – psychological” science as a not-really-scientific endeavor, because it does not deal with fact, but with traditional, common, banal ideas as its “outside sources” (Biblical Myth, Puritanical social order, etc.) and inevitably simply supports the status quo: it is “purely imitative”, an “afterthought”, repeating “sterile” ideas that cannot go beyond what was obvious to begin with. A “materialistic mentality stuck on the object” that produces a “mass of undigested material” that requires “some simple, general idea that gives coherence to a disordered whole.”

Is this not exactly, in post after post, what my repeated criticism of today’s “helping, caring, fixing” industry has been? YES!

Much more to come…..

How Animals Think / Review of Book by Frans de Waal

How Animals Think

A new look at what humans can learn from nonhuman minds

Alison Gopnik, The Atlantic 

Review of: Are We Smart Enough to Know How Smart Animals Are?

By Frans de Waal / Norton

For 2,000 years, there was an intuitive, elegant, compelling picture of how the world worked. It was called “the ladder of nature.” In the canonical version, God was at the top, followed by angels, who were followed by humans. Then came the animals, starting with noble wild beasts and descending to domestic animals and insects. Human animals followed the scheme, too. Women ranked lower than men, and children were beneath them. The ladder of nature was a scientific picture, but it was also a moral and political one. It was only natural that creatures higher up would have dominion over those lower down. (This view remains dominant in American thinking: “The Great Chain of Being” is still with us and underlies social reality)

Darwin’s theory of evolution by natural selection delivered a serious blow to this conception. (Unless one denies evolution)  Natural selection is a blind historical process, stripped of moral hierarchy. A cockroach is just as well adapted to its environment as I am to mine. In fact, the bug may be better adapted—cockroaches have been around a lot longer than humans have, and may well survive after we are gone. But the very word evolution can imply a progression—New Agers talk about becoming “more evolved”—and in the 19th century, it was still common to translate evolutionary ideas into ladder-of-nature terms.

MAN ILLUS

Modern biological science has in principle rejected the ladder of nature. But the intuitive picture is still powerful. In particular, the idea that children and nonhuman animals are lesser beings has been surprisingly persistent. Even scientists often act as if children and animals are defective adult humans, defined by the abilities we have and they don’t. Neuroscientists, for example, sometimes compare brain-damaged adults to children and animals.

We always should have been suspicious of this picture, but now we have no excuse for continuing with it. In the past 30 years, research has explored the distinctive ways in which children as well as animals think, and the discoveries deal the coup de grâce to the ladder of nature. (Not in psychology!) The primatologist Frans de Waal has been at the forefront of the animal research, and its most important public voice.

In Are We Smart Enough to Know How Smart Animals Are?, he makes a passionate and convincing case for the sophistication of nonhuman minds.

De Waal outlines both the exciting new results and the troubled history of the field. The study of animal minds was long divided between what are sometimes called “scoffers” and “boosters.” Scoffers refused to acknowledge that animals could think at all: Behaviorism—the idea that scientists shouldn’t talk about minds, only about stimuli and responses—stuck around in animal research long after it had been discredited in the rest of psychology. (Are you kidding? “Black Box” psychology is alive and well, especially in American education!) Boosters often relied on anecdotes and anthropomorphism instead of experiments. De Waal notes that there isn’t even a good general name for the new field of research. Animal cognition ignores the fact that humans are animals too. De Waal argues for evolutionary cognition instead.

Psychologists often assume that there is a special cognitive ability—a psychological secret sauce—that makes humans different from other animals. The list of candidates is long: tool use, cultural transmission, the ability to imagine the future or to understand other minds, and so on. But every one of these abilities shows up in at least some other species in at least some form. De Waal points out various examples, and there are many more. New Caledonian crows make elaborate tools, shaping branches into pointed, barbed termite-extraction devices. A few Japanese macaques learned to wash sweet potatoes and even to dip them in the sea to make them more salty, and passed that technique on to subsequent generations. Western scrub jays “cache”—they hide food for later use—and studies have shown that they anticipate what they will need in the future, rather than acting on what they need now.

From an evolutionary perspective, it makes sense that these human abilities also appear in other species. After all, the whole point of natural selection is that small variations among existing organisms can eventually give rise to new species. Our hands and hips and those of our primate relatives gradually diverged from the hands and hips of common ancestors. It’s not that we miraculously grew hands and hips and other animals didn’t. So why would we alone possess some distinctive cognitive skill that no other species has in any form?

De Waal explicitly rejects the idea that there is some hierarchy of cognitive abilities. (Thank-you!) Nevertheless, an implicit tension in his book shows just how seductive the ladder-of-nature view remains. Simply saying that the “lower” creatures share abilities with creatures once considered more advanced still suggests something like a ladder—it’s just that chimps or crows or children are higher up than we thought. So the summary of the research ends up being: We used to think that only adult humans could use tools/participate in culture/imagine the future/understand other minds, but actually chimpanzees/crows/toddlers can too. Much of de Waal’s book has this flavor, though I can’t really blame him, since developmental psychologists like me have been guilty of the same rhetoric.

As de Waal recognizes, a better way to think about other creatures would be to ask ourselves how different species have developed different kinds of minds to solve different adaptive problems. (And – How “different humans” have done, and continue to do, the same!) Surely the important question is not whether an octopus or a crow can do the same things a human can, but how those animals solve the cognitive problems they face, like how to imitate the sea floor or make a tool with their beak. Children and chimps and crows and octopuses are ultimately so interesting not because they are mini-mes, but because they are aliens—not because they are smart like us, but because they are smart in ways we haven’t even considered. All children, for example, pretend with a zeal that seems positively crazy; if we saw a grown-up act like every 3-year-old does, we would get him to check his meds. (WOW! Nasty comment!)

Sometimes studying those alien ways of knowing can illuminate adult-human cognition. Children’s pretend play may help us understand our adult taste for fiction. De Waal’s research provides another compelling example. We human beings tend to think that our social relationships are rooted in our perceptions, beliefs, and desires, and our understanding of the perceptions, beliefs, and desires of others—what psychologists call our “theory of mind.” (And yet horrible behavior toward other humans and animals demonstrates that AT BEST, this “mind-reading” simply makes humans better social manipulators and predators) In the ’80s and ’90s, developmental psychologists, including me, showed that preschoolers and even infants understand minds apart from their own. But it was hard to show that other animals did the same. “Theory of mind” became a candidate for the special, uniquely human trick. (A social conceit)

Yet de Waal’s studies show that chimps possess a remarkably developed political intelligence—they are profoundly interested in figuring out social relationships such as status and alliances. (A primatologist friend told me that even before they could stand, the baby chimps he studied would use dominance displays to try to intimidate one another.) It turns out, as de Waal describes, that chimps do infer something about what other chimps see. But experimental studies also suggest that this happens only in a competitive political context. The evolutionary anthropologist Brian Hare and his colleagues gave a subordinate chimp a choice between pieces of food that a dominant chimp had seen hidden and other pieces it had not seen hidden. The subordinate chimp, who watched all the hiding, stayed away from the food the dominant chimp had seen, but took the food it hadn’t seen. (A typical anecdotal factoid that proves nothing)

Anyone who has gone to an academic conference will recognize that we, too, are profoundly political creatures. We may say that we sign up because we’re eager to find out what our fellow Homo sapiens think, but we’re just as interested in who’s on top and where the alliances lie. Many of the political judgments we make there don’t have much to do with our theory of mind. We may defer to a celebrity-academic silverback even if we have no respect for his ideas. In Jane Austen, Elizabeth Bennet cares how people think, while Lady Catherine cares only about how powerful they are, but both characters are equally smart and equally human.

The challenge of studying creatures that are so different from us is to get into their heads.

Of course, we know that humans are political, but we still often assume that our political actions come from thinking about beliefs and desires. Even in election season we assume that voters figure out who will enact the policies they want, and we’re surprised when it turns out that they care more about who belongs to their group or who is the top dog. The chimps may give us an insight into a kind of sophisticated and abstract social cognition that is very different from theory of mind—an intuitive sociology rather than an intuitive psychology.

Until recently, however, there wasn’t much research into how humans develop and deploy this kind of political knowledge—a domain where other animals may be more cognitively attuned than we are. It may be that we understand the social world in terms of dominance and alliance, like chimps, but we’re just not usually as politically motivated as they are. (Obsession with social status is so pervasive, that it DISRUPTS neurotypical ability to function!) Instead of asking whether we have a better everyday theory of mind, we might wonder whether they have a better everyday theory of politics.

Thinking seriously about evolutionary cognition may also help us stop looking for a single magic ingredient that explains how human intelligence emerged. De Waal’s book inevitably raises a puzzling question. After all, I’m a modern adult human being, writing this essay surrounded by furniture, books, computers, art, and music—I really do live in a world that is profoundly different from the world of the most brilliant of bonobos. If primates have the same cognitive capacities we do, where do those differences come from?

The old evolutionary-psychology movement argued that we had very specific “modules,” special mental devices, that other primates didn’t have. But it’s far likelier that humans and other primates started out with relatively minor variations in more-general endowments and that those variations have been amplified over the millennia by feedback processes. For example, small initial differences in what biologists call “life history” can have big cumulative effects. Humans have a much longer childhood than other primates do. Young chimps gather as much food as they consume by the time they’re 5. Even in forager societies, human kids don’t do that until they’re 15. This makes being a human parent especially demanding. But it also gives human children much more time to learn—in particular, to learn from the previous generation. (If that generation is “messed up” to the point of incompetence, the advantage disappears and disaster results – which is what we see in the U.S. today). Other animals can absorb culture from their forebears too, like those macaques with their proto-Pringle salty potatoes. But they may have less opportunity and motivation to exercise these abilities than we do.

Even if the differences between us and our nearest animal relatives are quantitative rather than qualitative—a matter of dialing up some cognitive capacities and downplaying others—they can have a dramatic impact overall. A small variation in how much you rely on theory of mind to understand others as opposed to relying on a theory of status and alliances can exert a large influence in the long run of biological and cultural evolution.

Finally, de Waal’s book prompts some interesting questions about how emotion and reason mix in the scientific enterprise. The quest to understand the minds of animals and children has been a remarkable scientific success story. It inevitably has a moral, and even political, dimension as well. The challenge of studying creatures that are so different from us is to get into their heads, to imagine what it is like to be a bat or a bonobo or a baby. A tremendous amount of sheer scientific ingenuity is required to figure out how to ask animals or children what they think in their language instead of in ours.

At the same time, it also helps to have a sympathy for the creatures you study, a feeling that is not far removed from love. And this sympathy is bound to lead to indignation when those creatures are dismissed or diminished. That response certainly seems justified when you consider the havoc that the ladder-of-nature picture has wrought on the “lower” creatures. (Just ask ASD and Asperger children how devastating this lack of “empathy” on the part of the “helping, caring fixing” industry is.)

But does love lead us to the most-profound insights about another being, or the most-profound illusions? Elizabeth Bennet and Lady Catherine would have differed on that too, and despite all our theory-of-mind brilliance, (sorry – that’s ridiculous optimism) we humans have yet to figure out when love enlightens and when it leads us astray. So we keep these emotions under wraps in our scientific papers, for good reason. Still, popular books are different, and both sympathy and indignation are in abundant supply in de Waal’s.

Perhaps the combination of scientific research and moral sentiment can point us to a different metaphor for our place in nature. Instead of a ladder, we could invoke the 19th-century naturalist Alexander von Humboldt’s web of life. We humans aren’t precariously balanced on the top rung looking down at the rest. (Tell that to all those EuroAmerican males who dictate socio-economic-scientific terms of “humans who count”) It’s more scientifically accurate, and more morally appealing, to say that we are just one strand in an intricate network of living things.

About the Author

Alison Gopnik is a professor of psychology and an affiliate professor of philosophy at UC Berkeley.

Days of Relief / Ignoring the Social Condemnation of Asperger’s

The past few days I’ve been ignoring Asperger’s – the “social disease” as characterized by psychologists (and their misuse of “neuroscience” to “prove” their ugly prejudices) – because I decided to finally revamp my blog (formerly Some People are Lost, now Miss America Gone Wrong). The work has taken me back in time to a productive period, when I began to discover myself as a person that I could like.

MAGW is important to me because it was written (1991-1992) when I didn’t know that the “condition” existed. Asperger’s was “created” around that time, and until very recently, females were excluded, mainly because male psychologists (and most males) dismiss females when it comes to “brain abilities” in engineering, math and the sciences. Women can be “biology types” because – they have uteruses. Ironically, most psychologists are female today, which is not a “compliment” to the field. Whenever a job category is overtaken by women, it means that the field has lost status and that the pay scale has dropped.

In 1991 I was in graduate school, serving time in the academic Gulag run by male assholes. It’s that simple. I finally and totally rebelled over bad treatment, and frankly, the overt hatred of females that I’d “put up with” my entire life.

When I googled “recent research” in Asperger’s this morning, the same old crap appeared – an onslaught of “studies” that claim to prove that Asperger people are robotic deviants; fictitious claims that the “bounty hunters” are closing in on the brain defects and genetic mistakes that make us social outcasts.

No one seems to even raise the question as to why being “hyposocial” and intelligent is considered to be a state of pathology – literally a “social crime” being misrepresented as biological pathology.

Why must each and every Asperger-type individual begin life as a “broken” human? And, once labeled, no matter how well we manage to survive in a hostile social environment, we can never prove that we are a legitimate type of Homo sapiens. We are guilty, and remain guilty of a social crime, without the opportunity to prove our status as “part of” our species. We are literally considered to be lower than chimps, monkeys, rats and mice on the mystical supernatural and magical “empathy scale” – which somehow is granted the “new definition” of what is “required” to be considered a “real” human being.

My “escape” from social tyranny twenty-six years ago was fueled by disgust –  I had no intention other than relief for a few weeks before I again would have to take on survival in “American social reality”.

Surprise! It was the happiest time in my life. I began to uncover the “me” that was buried under a lifetime of “being told who I was” – and I liked the person who began to be revealed as I left behind the social order that classifies, defines and injures human beings. The people I met were often in the “same boat” (or RV, tent or car) as myself: refugees from a cruel and unjust economic and social system that had kicked them to the curb – and declared them to have no value.

What is disturbing, is that this system has grown in strength and callous brutality  over the past three decades.


Dept. of Defense / Autism Research Highlights

Most of these “select highlights” of funded research appear to be “serious” – but a few have “curious” titles; and what might a complete list of “studies” reveal? There have been objections to the “duplication – redundancy” of Federally awarded grants, with the same studies being funded over and over again by separate agencies. Check out “anticipated funding opportunities” of up to $1.3 million per grant.


Federal Autism Research: GAO Report / More Questions than Answers!

Federal Autism Research:

Updated Information on Funding from Fiscal Years 2008 through 2012

GAO-15-583R: Published: Jun 30, 2015. Publicly Released: Jul 30, 2015

View Report (PDF, 20 pages) 

What GAO Found

Although federal funding for autism research fluctuated from fiscal years 2008 through 2012, it increased overall during this period, from approximately $169 million in fiscal year 2008 to $245 million in fiscal year 2012—about a 45 percent increase (about a 37 percent increase when adjusted for inflation to fiscal year 2012 dollars). Over this time period, the National Institutes of Health (NIH) consistently provided the majority of autism research funding—between about 76 and 83 percent of the total funding awarded each fiscal year. The highest funding levels were in fiscal years 2009 and 2010, in part, as a result of additional funds appropriated to NIH under the American Recovery and Reinvestment Act of 2009. While overall funding increased, federal funding varied by each of the seven research areas specified in the Interagency Autism Coordinating Committee’s (IACC) strategic plan. These research areas are biology, treatments and interventions, causes, diagnosis, infrastructure and surveillance, services, and lifespan issues. The following figure shows the changes in funding by fiscal year for each of the seven research areas, as well as the overall average annual percent change in funding for each research area.

[Figure: Federal Funding and Average Annual Percent Change by Autism-Related Research Area from Fiscal Years (FY) 2008 through 2012]

The research areas noted in the figure were established by the Interagency Autism Coordinating Committee’s (IACC) strategic plan. The federal agencies that funded autism research during the time period are the Department of Defense; Department of Education; Environmental Protection Agency; National Science Foundation; and seven agencies within the Department of Health and Human Services: Administration for Children and Families, Agency for Healthcare Research and Quality, Centers for Disease Control and Prevention, Centers for Medicare & Medicaid Services, Health Resources and Services Administration, National Institutes of Health, and the Substance Abuse and Mental Health Services Administration. In this figure, all dollars are expressed in nominal terms.

___________________

Comment:

What is Infrastructure and Surveillance?

(See Fig. 1 in report: Interagency Autism Coordinating Committee…)

And why is the Department of Defense funding autism research?

Those percentages (%) written above the columns are “changes” in funding as an “annual average” – both increases and decreases – so research $$$ into “causes” went down, as well as $$$ for research into “lifespan issues”. This graph is not at all specific or informative, except that funding for “research into” actual SERVICES to ASD individuals and families is extremely low compared to a bonanza of funding for researchers, universities and the “autism industry”.

____________________________

Why GAO Did This Study

The Centers for Disease Control and Prevention (CDC) estimates that about 1 in 68 children have been identified as having autism—a developmental disorder involving communication and social impairment. According to CDC, there are likely many causes of autism and many factors, including environmental, biologic, and genetic, that may make a child more likely to have autism. There is no known cure for autism; however, research shows that early intervention can greatly improve a child’s development. From fiscal years 2008 through 2012, 11 federal agencies awarded approximately $1.2 billion to fund autism research.

GAO was asked to examine federal autism research funding. In this report, GAO describes how the amount of federal funding in each of the research areas specified in the IACC’s strategic plan changed from fiscal years 2008 through 2012. GAO analyzed data previously collected for GAO-14-16, Federal Autism Activities: Better Data and More Coordination Needed to Help Avoid the Potential for Unnecessary Duplication, including updated data, to identify changes in agency funding awarded from fiscal years 2008 through 2012. Data by strategic plan research area for fiscal years 2013 and 2014 are not currently available. To calculate the changes in federal autism funding awarded, GAO analyzed the data by IACC strategic plan research area, including any growth or decreases in each area by fiscal year and agency.

What GAO Recommends

GAO is not making any recommendations. GAO provided a draft of this report to the Department of Defense, Department of Education, the Environmental Protection Agency, Department of Health and Human Services, and the National Science Foundation. GAO received technical comments from the Department of Defense, Department of Education, Department of Health and Human Services, and the National Science Foundation, which GAO incorporated as appropriate. The Environmental Protection Agency did not provide any comments.

Meanwhile, I’m off to the Department of Defense / Military websites to see what’s going on…

Autism Expert / Isabelle Rapin Re-post

by Isabelle Rapin

published on SFARI website, 24 May 2011 
Full physical: Clinicians should test children for hearing impairments before they diagnose them with autism, cautions Isabelle Rapin.

__________________________________

Why is the clear-cut, yes-or-no diagnosis of an autism spectrum disorder so difficult?

There is much disagreement among experts about borderline cases of autism, precisely because it is behaviorally defined and lacks — and will continue to lack — a test that provides a definitive answer.

Autism is diagnosed based on the severity and variety of its symptoms — in other words, its symptoms mimic height, weight, blood pressure or blood sugar, which are dimensional. In each, the cut-off between yes and no, or affected and normal, is based on agreement among experts and not a specific test — such as an X-ray that shows a fractured bone. You or I would be diagnosed as giant, dwarfed, obese, hypertensive or diabetic depending on how far from average we are on these measures.

No one disagrees with or doubts the diagnosis if the measure is very far from average, but there is a wide gray area at the edges of both normality and disease. This is also the case for many other developmental disorders, such as language delay, attention deficit hyperactivity disorder and intellectual disability. This makes diagnosing autism very difficult and easy to confuse with these and other disorders.

Especially in the days before autism was all over the Internet and print media, parents who came for advice were most likely to report problems with language: Either their preschooler’s language was delayed, or it had ceased to progress, had deteriorated, or even disappeared. Parents told us that it was usually between the ages of 15 to 30 months — occasionally even earlier — that they had become aware of the delayed language, plateau or regression.

Awareness of this problem was rarely sudden, but had occurred over several weeks, so that it often escaped attention — sometimes for months. To make matters worse, pediatricians frequently reassured parents, telling them that development can progress in leaps and starts or that little boys are likely to speak later and not as clearly and eagerly as little girls do.

Often it was a nursery school teacher, a speech pathologist friend, a grandmother or the worried parents themselves who pushed for an investigation that brought them to my office. At the time, parents’ main worries were intellectual deficiency or a developmental disorder limited to language. I would bring up two additional concerns: hearing loss or an autism spectrum disorder (ASD).

A glaring example: Quite recently, I saw a lovely 3-year-old girl whose language was clearly deficient. Her nursery school teacher brought up the question of an ASD (this suspicion indicates how much better teachers are educated about autism than they were in the past). The child was very cute, intelligent and did not respond reliably when called by name, a frequent feature of children with autism. She related well to her parents and I was able to engage her in playing ball, neither of which is sufficient to rule out a diagnosis of ASD.

I spent a lot of time discussing diagnostic dilemmas, until I realized that I hadn’t asked whether the parents had an audiogram, or formal hearing test, to show me. The child had passed the newborn screen and the test had not yet been performed. The next phone call from this family reported that she had a significant hearing loss, especially for the high-pitched sounds critical for differentiating one consonant from another, and thus for language development.

As you can imagine, this cute little girl was promptly fitted with hearing aids, which she readily accepted, and the diagnosis of ASD flew out the window — her hearing loss is caused by an inner ear problem; it is not currently severe enough to contemplate a cochlear implant, although we do not yet know whether it is progressive or not. The lesson is that prompt and competent hearing testing must be number one on the agenda for any child in whom there is a concern about language development, and must never be neglected. It must be performed even if the child appears to be hearing and needs to be repeated if language deteriorates.

Diagnostic difficulty: These days, Internet-savvy parents worry about autism but do not always tell me their concerns when they visit my office, because they want to hear my independent diagnosis. Let me emphasize that autism is a behavioral diagnosis for which there is no biological test.

The importance of genetics has come to the forefront. Well over 100 different genetic mutations and other chromosomal abnormalities are known. But the key diagnostic dilemma is that, with some exceptions, virtually all these same gene variants or conditions can be found in healthy people.

There is also no prenatal diagnosis for behaviorally-defined autism, only for some neurological conditions that may be associated with autism — such as fragile X syndrome and Rett syndrome, and a small number of severe metabolic illnesses such as phenylketonuria, and malformations such as Joubert syndrome.

In the majority of individuals with an ASD for whom there is no obvious physical or neurological abnormality, multiple different genes probably contribute to the clinical picture. Knowing about these genes is useful to scientists trying to understand some of the biochemical or metabolic differences in the cells of people with autism, and may enable them to develop effective, scientifically-based medications.

But these differences will not determine whether the diagnosis of autism is correct. They will not be able to predict whether individuals with these genes will definitely have the group of symptoms we call autism. Differences in cellular metabolism may indicate that a person is at higher risk for autism, but not necessarily that a behavioral difference will be severe enough for a diagnosis of ASD.

What to do: Most schools cannot deal with fuzzy dimensional diagnoses.

In order to assign Paul to an enclosed classroom with only eight children, a specialized teacher and aides, he needs to be categorically diagnosed as unequivocally having autism and being severely affected — regardless of the biological cause of his ASD. Another child on the spectrum with adequate language and IQ may be assigned to a class for typical children, yet be provided an aide if his behaviors are sufficiently handicapping. But he could also ‘lose his diagnosis’ and the extra help if his social skills are adequate. Such changing diagnostic labels and services are the result of being forced to assign categorical labels for what are dimensional deficits of variable severity.

MY COMMENT: This last sentence is so important! Subjective judgments are being used to make diagnoses instead of fact-based evidence about individual children. Why? It’s easier, cheaper and – it expands the number of children who will be diagnosed by people without experience and credentials. Never forget that diagnosis and treatment produce immense profits. Below: We prefer to drug our children into submission instead of utilizing treatment. This is inhumane and morally reprehensible.

Parents and teachers — and especially financially-strapped school districts — often ask for medication to minimize the expenditures associated with coping with such children’s behaviors. A review of medications widely prescribed for children with ASD concludes that only two fulfill research criteria for efficacy. And those two help only with aggressive behaviors, and have troublesome side effects [1]. A companion review indicates that the most effective intervention is intensive individualized intervention starting at the earliest possible age [2].

These findings agree with my longstanding clinical experience. Knowing that no medical treatment is curative, as a neurologist I worry about potential long-term consequences of psychotropic medications and advocate for behavioral management, which, although more expensive, may help the child learn and permanently alter his brain development.

Lessons learned: Autism has come a long way since I entered this field half a century ago. We have done away with the theory of refrigerator mothers. We know that autism spectrum disorders are disorders of the developing brain and have learned that genetics plays a major — but by no means exclusive — role in their cause. We are aware that epilepsy is linked to autism in ways we do not yet understand fully, but that it needs to be treated vigorously, especially if it occurs in infants and toddlers.

We need much more neurological and genetic information, even though at present it rarely leads to a change in management of the child or provides firm genetic counseling for the family.

We are convinced that massive and expensive tests such as imaging, electrophysiology and genetics are almost always uninformative clinically, unless there are features that raise the suspicion of a diagnosable condition.

Families deserve the credit for major steps forward, because they banded together to insist that the results of such tests be collected and stored in large data banks accessible to researchers. Parents got the ball rolling by raising money and persuading the National Institutes of Health and the research community that autism spectrum disorders were crying out for investigation and for novel educational and medical treatments.

I salute them and anticipate that we can make much more progress if we can find the means to sustain the momentum of the past decade.

Isabelle Rapin was professor of neurology and pediatrics at the Albert Einstein College of Medicine in New York.

 

Do Statistics Lie? Yes They Do / 3 Articles Explain HOW AND WHO

This scandalous practice of deceit-for-funding-and-profit is why I persist in slamming psychology as “not science”

It’s not only that these are research scams that waste funding and devalue science; human beings are harmed as a result of this abuse of statistics. Asperger and neurodiverse types are being “defined” as “defective” human beings: there is no scientific basis for this “socially-motivated” construct. The current Autism-ASD-Asperger Industry is a FOR PROFIT INDUSTRY that exploits individuals, their families, schools, communities, tax-payers and funding for research. It also serves to enforce “the social order” dictated by elites.

The Mind-Reading Salmon: The True Meaning of Statistical Significance

By Charles Seife on August 1, 2011

If you want to convince the world that a fish can sense your emotions, only one statistical measure will suffice: the p-value.

The p-value is an all-purpose measure that scientists often use to determine whether or not an experimental result is “statistically significant.” Unfortunately, sometimes the test does not work as advertised, and researchers imbue an observation with great significance when in fact it might be a worthless fluke.

Say you’ve performed a scientific experiment testing a new heart attack drug against a placebo. At the end of the trial, you compare the two groups. Lo and behold, the patients who took the drug had fewer heart attacks than those who took the placebo. Success! The drug works!

Well, maybe not. There is a 50 percent chance that even if the drug is completely ineffective, patients taking it will do better than those taking the placebo. (After all, one group has to do better than the other; it’s a toss-up whether the drug group or placebo group will come out on top.)

The p-value puts a number on the effects of randomness. It is the probability of seeing a positive experimental outcome even if your hypothesis is wrong. A long-standing convention in many scientific fields is that any result with a p-value below 0.05 is deemed statistically significant. An arbitrary convention, it is often the wrong one. When you make a comparison of an ineffective drug to a placebo, you will typically get a statistically significant result one time out of 20. And if you make 20 such comparisons in a scientific paper, on average, you will get one significant result with a p-value less than 0.05—even when the drug does not work.
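That one-in-20 arithmetic is easy to check by simulation. The sketch below is my illustration, not something from the article: it runs repeated drug-vs-placebo comparisons in which the “drug” does nothing at all, computing each p-value with a simple permutation test (`perm_p_value` and `run_null_comparisons` are invented helper names).

```python
import random

random.seed(42)

def perm_p_value(a, b, n_perm=200):
    """Two-sided permutation p-value for the difference in group means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        # Count shuffles whose mean difference is at least as extreme
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

def run_null_comparisons(n_comparisons=20, group_size=30):
    """Both 'drug' and 'placebo' outcomes are drawn from the SAME
    distribution, so any 'significant' result is a false positive
    by construction."""
    p_values = []
    for _ in range(n_comparisons):
        drug = [random.gauss(0, 1) for _ in range(group_size)]
        placebo = [random.gauss(0, 1) for _ in range(group_size)]
        p_values.append(perm_p_value(drug, placebo))
    return p_values

p_values = run_null_comparisons()
n_sig = sum(p < 0.05 for p in p_values)
print(f"{n_sig} of {len(p_values)} do-nothing comparisons came out 'significant'")
```

Run this repeatedly and, on average, about one comparison per batch of 20 clears the 0.05 bar, despite the drug being fake every single time.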

Many scientific papers make 20 or 40 or even hundreds of comparisons. In such cases, researchers who do not adjust the standard p-value threshold of 0.05 are virtually guaranteed to find statistical significance in results that are meaningless statistical flukes. A study that ran in the February issue of the American Journal of Clinical Nutrition tested dozens of compounds and concluded that those found in blueberries lower the risk of high blood pressure, with a p-value of 0.03. But the researchers looked at so many compounds and made so many comparisons (more than 50), that it was almost a sure thing that some of the p-values in the paper would be less than 0.05 just by chance.
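The article says researchers should adjust the 0.05 threshold when making many comparisons; the simplest such adjustment is the Bonferroni correction (my naming, the article does not prescribe a specific method): divide the threshold by the number of comparisons. A minimal sketch, with `bonferroni_significant` as an invented helper and numbers only loosely modeled on the blueberry example:

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Flag each result as significant only at the family-wise
    threshold alpha / m, where m is the number of comparisons made."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# 50 comparisons with one raw p-value of 0.03, as in a blueberry-style study
raw = [0.03] + [0.5] * 49
print(any(bonferroni_significant(raw)))  # 0.03 is not below 0.05/50 = 0.001
```

Under the stricter threshold the lone p = 0.03 no longer clears the bar, which is exactly the point: one modest p-value among 50 comparisons is expected by chance alone.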

The same applies to a well-publicized study that a team of neuroscientists once conducted on a salmon. When they presented the fish with pictures of people expressing emotions, regions of the salmon’s brain lit up. The result was statistically significant with a p-value of less than 0.001; however, as the researchers argued, there are so many possible patterns that a statistically significant result was virtually guaranteed, so the result was totally worthless. p-value notwithstanding, there was no way that the fish could have reacted to human emotions. The salmon in the fMRI happened to be dead.

________________________________

Statistical Significance Abuse

A lot of research makes scientific evidence seem more “significant” than it is

updated Sep 15, 2016 (first published 2011) by Paul Ingraham, Vancouver, Canada 

I am a science writer and a former Registered Massage Therapist with a decade of experience treating tough pain cases. I was the Assistant Editor of ScienceBasedMedicine.org for several years.

SUMMARY

Many study results are called “statistically significant,” giving unwary readers the impression of good news. But it’s misleading: statistical significance means only that the measured effect of a treatment is probably real (not a fluke). It says nothing about how large the effect is. Many small effect sizes are reported only as “statistically significant” — it’s a nearly standard way for biased researchers to make it sound like they found something more important than they did.

This article is about two common problems with “statistical significance” in medical research. Both problems are particularly rampant in the study of massage therapy, chiropractic, and alternative medicine in general, and are wonderful examples of why science is hard, “why most published research findings are false” and genuine robust treatment effects are rare:

  1. mixing up statistical and clinical significance and the probability of being “right”
  2. reporting statistical significance of the wrong dang thing

Significance Problem #1: Two flavours of “significant” – statistical versus clinical

Research can be statistically significant, but otherwise unimportant. Statistical significance means that data signifies something… not that it actually matters.

Statistical significance on its own is the sound of one hand clapping. But researchers often focus on the positive: “Hey, we’ve got statistical significance! Maybe!” So they summarize their findings as “significant” without telling us the size of the effect they observed, which is a little devious or sloppy. Almost everyone is fooled by this — except 98% of statisticians — because the word “significant” carries so much weight. It really sounds like a big deal, like good news. But it’s like bragging about winning a lottery without mentioning that you only won $25.

Statistical significance without other information really doesn’t mean all that much. It is not only possible but common to have clinically trivial results that are nonetheless statistically significant. How much is that statistical significance worth? It depends … on details that are routinely omitted; which is convenient if you’re pushing a pet theory, isn’t it?

Imagine a study of a treatment for pain, which has a statistically significant effect, but it’s a tiny effect: that is, it only reduces pain slightly. You can take that result to the bank (supposedly) — it’s real! It’s statistically significant! But … no more so than a series of coin flips that yields enough heads in a row to raise your eyebrows. And the effect was still tiny. So calling these results “significant” is using math to put lipstick on a pig.
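The pain-study scenario is easy to make concrete with a simulation (my illustration; the 0.1-point effect on a 0–10 scale and the sample sizes are invented): given enough patients, even a clinically meaningless effect comes out reliably “statistically significant.”

```python
import math
import random

random.seed(0)

def z_test_p(a, b):
    """Two-sided p-value from a large-sample z-test on the
    difference in sample means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

# A real but clinically boring effect: pain drops from 5.0 to 4.9 on a
# 0-10 scale, measured in a hypothetical trial with 20,000 patients per arm.
treated = [random.gauss(4.9, 2.0) for _ in range(20000)]
control = [random.gauss(5.0, 2.0) for _ in range(20000)]

p = z_test_p(treated, control)
effect = sum(control) / len(control) - sum(treated) / len(treated)
print(f"p = {p:.2g}, pain reduced by about {effect:.2f} points out of 10")
```

The p-value here is tiny (highly “significant”), yet no patient would notice a tenth of a point less pain, which is the lipstick-on-a-pig problem in one run.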

There are a lot of decorated pigs in research: “significant” results that are possibly not even that, and clinically boring in any case.

Just because a published paper presents a statistically significant result does not mean it necessarily has a biologically meaningful effect.

++++++++++++++++++++++++++++++++

Science Left Behind: Feel-Good Fallacies and the Rise of the Anti-Scientific Left, Alex Berezow & Hank Campbell

If you torture data for long enough, it will confess to anything.

P-values, where P stands for “please stop the madness”

Small study proves showers work

Too often people smugly dismiss a study just because of small sample size, ignoring all other considerations, like effect size … a rookie move. For instance, you really do not need to test lots of showers to prove that they are an effective moistening procedure. The power of a study is a product of both sample and effect size (and more).
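That sample-size-times-effect-size point can be sketched numerically. The helper below (`z_power`, my own rough large-sample approximation, not anything from the article) estimates the power of a two-sample z-test: a huge “showers moisten” effect is detectable with three subjects per group, while a tiny effect usually goes undetected even with 100 per group.

```python
import math
from statistics import NormalDist

def z_power(effect, sd, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test: the probability
    of a 'significant' result when the true mean difference is `effect`,
    with per-group sample size n and common standard deviation sd."""
    se = sd * math.sqrt(2.0 / n)                  # SE of the mean difference
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z = effect / se
    # Phi(z - z_crit); the far tail's contribution is negligible here
    return NormalDist().cdf(z - z_crit)

# Huge effect ("showers moisten skin"): three subjects per group suffice.
print(round(z_power(effect=5.0, sd=1.0, n=3), 2))    # close to 1.0
# Tiny effect: 100 subjects per group and the study still usually misses it.
print(round(z_power(effect=0.1, sd=1.0, n=100), 2))
```

Power depends on the ratio of effect size to standard error, so a big effect buys you what a big sample otherwise would — which is why sample size alone is never grounds for dismissal.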

Statistical significance is boiled down to one convenient number: the infamous, cryptic, bizarro and highly over-rated P-value. Cue Darth Vader theme. This number is “diabolically difficult” to understand and explain, and so p-value illiteracy and bloopers are epidemic (Goodman identifies “A dirty dozen: twelve p-value misconceptions” [4]). It seems to be hated by almost everyone who actually understands it, because almost no one else does. Many believe it to be a blight on modern science [5]. Including the American Statistical Association — and if they don’t like it, should you?

The mathematical soul of the p-value is, frankly, not really worth knowing. It’s just not that fantastic an idea. The importance of scientific research results cannot be jammed into a single number (and nor was that ever the intent). And so really wrapping your head around it is no more important than learning the gritty details of the Rotten Tomatoes algorithm when you’re trying to decide whether to see that new Godzilla (2014) movie [7].

What you do need to know is the role that p-values play in research today. You need to know that “it depends” is a massive understatement, and that there are “several reasons why the p-value is an unobjective and inadequate measure of evidence” [8]. Because it is so often abused, it’s way more important to know what the p-value is NOT than what it IS. For instance, it’s particularly useless when applied to studies of really outlandish ideas. And yet it’s one of the staples of pseudoscience, because it is such an easy way to make research look better than it is.

Above all, a good p-value is not a low chance that the results were a fluke or false alarm — which is by far the most common misinterpretation (and the first of Goodman’s Dirty Dozen). The real definition is a kind of mirror image of that [11]: it’s not a low chance of a false alarm, but a low chance of an effect that actually is a false alarm. The false alarm is a given! That part of the equation is already filled in, the premise of every p-value. For better or worse, the p-value is the answer to this question: if there really is nothing going on here, what are the odds of getting these results? A low number is encouraging, but it doesn’t say the results aren’t a fluke, because it can’t — it was calculated by assuming they are.
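That “assume there is nothing going on, then ask how surprising the data would be” logic is clearest in the simplest possible case: a coin we assume is fair (a worked example of mine, not the author’s).

```python
from math import comb

def binom_two_sided_p(k, n, p0=0.5):
    """Probability of a result at least as far from the expected count
    as k heads in n flips, ASSUMING the null (heads probability p0).
    The coin's fairness is the premise of the calculation, not its
    conclusion."""
    expected = n * p0
    dev = abs(k - expected)
    return sum(comb(n, i) * p0**i * (1 - p0) ** (n - i)
               for i in range(n + 1)
               if abs(i - expected) >= dev)

# 60 heads in 100 flips: p is about 0.057 -- mildly surprising IF the coin
# is fair, but the number says nothing about whether the coin IS fair.
print(round(binom_two_sided_p(60, 100), 3))
```

A small p here only licenses a second look (flip the coin some more); it is not, by itself, a verdict that the coin is biased — which is the mirror-image point above.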

The only way to actually find out if the effect is real or a fluke is to do more experiments. If they all produce results that would be unlikely if there was no real effect, then you can say the results are probably real. The p-value alone can only be a reason to check again — not statistical congratulations on a job well done. And yet that’s exactly how most researchers use it. And most science journalists.

The problem with p-values

Academic psychology and medical testing are both dogged by unreliability. The reason is clear: we got probability wrong

The aim of science is to establish facts, as accurately as possible. It is therefore crucially important to determine whether an observed phenomenon is real, or whether it’s the result of pure chance. If you declare that you’ve discovered something when in fact it’s just random, that’s called a false discovery or a false positive. And false positives are alarmingly common in some areas of medical science. 

In 2005, the epidemiologist John Ioannidis at Stanford caused a storm when he wrote the paper ‘Why Most Published Research Findings Are False’, focusing on results in certain areas of biomedicine. He’s been vindicated by subsequent investigations.

For example, a recent article found that repeating 100 different results in experimental psychology confirmed the original conclusions in only 38 per cent of cases. It’s probably at least as bad for brain-imaging studies and cognitive neuroscience. How can this happen?

The problem of how to distinguish a genuine observation from random chance is a very old one. It’s been debated for centuries by philosophers and, more fruitfully, by statisticians. It turns on the distinction between induction and deduction. Science is an exercise in inductive reasoning: we are making observations and trying to infer general rules from them. Induction can never be certain. In contrast, deductive reasoning is easier: you deduce what you would expect to observe if some general rule were true and then compare it with what you actually see. The problem is that, for a scientist, deductive arguments don’t directly answer the question that you want to ask. 

What matters to a scientific observer is how often you’ll be wrong if you claim that an effect is real, rather than being merely random. That’s a question of induction, so it’s hard. In the early 20th century, it became the custom to avoid induction, by changing the question into one that used only deductive reasoning. In the 1920s, the statistician Ronald Fisher did this by advocating tests of statistical significance. These are wholly deductive and so sidestep the philosophical problems of induction.

Tests of statistical significance proceed by calculating the probability of making our observations (or the more extreme ones) if there were no real effect. This isn’t an assertion that there is no real effect, but rather a calculation of what would be expected if there were no real effect. The postulate that there is no real effect is called the null hypothesis, and the probability is called the p-value. Clearly the smaller the p-value, the less plausible the null hypothesis, so the more likely it is that there is, in fact, a real effect. All you have to do is to decide how small the p-value must be before you declare that you’ve made a discovery. But that turns out to be very difficult.

The problem is that the p-value gives the right answer to the wrong question. What we really want to know is not the probability of the observations given a hypothesis about the existence of a real effect, but rather the probability that there is a real effect – that the hypothesis is true – given the observations. And that is a problem of induction.

Confusion between these two quite different probabilities lies at the heart of why p-values are so often misinterpreted. It’s called the error of the transposed conditional. Even quite respectable sources will tell you that the p-value is the probability that your observations occurred by chance. And that is plain wrong.

Suppose, for example, that you give a pill to each of 10 people. You measure some response (such as their blood pressure). Each person will give a different response. And you give a different pill to 10 other people, and again get 10 different responses. How do you tell whether the two pills are really different?

The conventional procedure would be to follow Fisher and calculate the probability of making the observations (or the more extreme ones) if there were no true difference between the two pills. That’s the p-value, based on deductive reasoning. P-values of less than 5 per cent have come to be called ‘statistically significant’, a term that’s ubiquitous in the biomedical literature, and is now used to suggest that an effect is real, not just chance.
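Fisher’s procedure can be computed directly from its definition with a permutation test (the blood-pressure numbers below are invented for illustration): if the two pills were truly identical, every way of splitting the 20 responses into two groups of 10 would be equally likely, so the p-value is just the fraction of regroupings that separate the group means at least as much as the observed split does.

```python
from itertools import combinations

# hypothetical changes in blood pressure (mmHg), 10 people per pill
pill_a = [-8, -5, -7, -3, -9, -6, -4, -8, -5, -7]
pill_b = [-2, -4, -1, -3, -5, 0, -2, -3, -1, -4]

pooled = pill_a + pill_b
n = len(pill_a)
observed = abs(sum(pill_a) - sum(pill_b)) / n  # observed difference in means

total = sum(pooled)
as_extreme = splits = 0
for idx in combinations(range(len(pooled)), n):  # all 184,756 regroupings
    s = sum(pooled[i] for i in idx)
    if abs(s - (total - s)) / n >= observed - 1e-9:
        as_extreme += 1
    splits += 1

p = as_extreme / splits
print(f"p = {p:.4f}")
```

Note what this p-value is and isn’t: it was computed by assuming the pills are identical, so it says how rare data like these would be under that assumption — nothing more.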

But the dichotomy between ‘significant’ and ‘not significant’ is absurd. There’s obviously very little difference between the implication of a p-value of 4.7 per cent and of 5.3 per cent, yet the former has come to be regarded as success and the latter as failure. And ‘success’ will get your work published, even in the most prestigious journals. That’s bad enough, but the real killer is that, if you observe a ‘just significant’ result, say P = 0.047 (4.7 per cent) in a single test, and claim to have made a discovery, the chance that you are wrong is at least 26 per cent, and could easily be more than 80 per cent. How can this be so?

For one, it’s of little use to say that your observations would be rare if there were no real difference between the pills (which is what the p-value tells you), unless you can say whether or not the observations would also be rare when there is a true difference between the pills. Which brings us back to induction.

The problem of induction was solved, in principle, by the Reverend Thomas Bayes in the middle of the 18th century. He showed how to convert the probability of the observations given a hypothesis (the deductive problem) to what we actually want, the probability that the hypothesis is true given some observations (the inductive problem). But how to use his famous theorem in practice has been the subject of heated debate ever since.

Take the proposition that the Earth goes round the Sun. It either does or it doesn’t, so it’s hard to see how we could pick a probability for this statement. Furthermore, the Bayesian conversion involves assigning a value to the probability that your hypothesis is right before any observations have been made (the ‘prior probability’). Bayes’s theorem allows that prior probability to be converted to what we want, the probability that the hypothesis is true given some relevant observations, which is known as the ‘posterior probability’.

These intangible probabilities persuaded Fisher that Bayes’s approach wasn’t feasible. Instead, he proposed the wholly deductive process of null hypothesis significance testing. The realisation that this method, as it is commonly used, gives alarmingly large numbers of false positive results has spurred several recent attempts to bridge the gap.  

There is one uncontroversial application of Bayes’s theorem: diagnostic screening, the tests that doctors give healthy people to detect warning signs of disease. They’re a good way to understand the perils of the deductive approach.

In theory, picking up on the early signs of illness is obviously good. But in practice there are usually so many false positive diagnoses that it just doesn’t work very well. Take dementia. Roughly 1 per cent of the population suffer from mild cognitive impairment, which might, but doesn’t always, lead to dementia. Suppose that the test is quite a good one, in the sense that 95 per cent of the time it gives the right (negative) answer for people who are free of the condition. That means that 5 per cent of the people who don’t have cognitive impairment will test, falsely, as positive. That doesn’t sound bad. It’s directly analogous to tests of significance which will give 5 per cent of false positives when there is no real effect, if we use a p-value of less than 5 per cent to mean ‘statistically significant’.

But in fact the screening test is not good – it’s actually appallingly bad, because 86 per cent, not 5 per cent, of all positive tests are false positives. So only 14 per cent of positive tests are correct. This happens because most people don’t have the condition, and so the false positives from these people (5 per cent of 99 per cent of the people), outweigh the number of true positives that arise from the much smaller number of people who have the condition (80 per cent of 1 per cent of the people, if we assume 80 per cent of people with the disease are detected successfully). There’s a YouTube video of my attempt to explain this principle, or you can read my recent paper on the subject.
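The arithmetic behind the 86 per cent figure is short enough to write out, using exactly the numbers given above (1 per cent prevalence, 5 per cent false positives among the unaffected, 80 per cent of true cases detected):

```python
prevalence = 0.01    # 1% of people have mild cognitive impairment
specificity = 0.95   # 95% of unaffected people correctly test negative
sensitivity = 0.80   # 80% of affected people are detected

# Out of everyone screened:
false_positives = (1 - specificity) * (1 - prevalence)  # 5% of the 99%
true_positives = sensitivity * prevalence               # 80% of the 1%

share_false = false_positives / (false_positives + true_positives)
print(f"{share_false:.0%} of all positive tests are false positives")
```

The 5 per cent error rate among the unaffected is applied to a vastly larger group than the 80 per cent detection rate among the affected, which is why the false positives swamp the true ones.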

Notice, though, that it’s possible to calculate the disastrous false-positive rate for screening tests only because we have estimates for the prevalence of the condition in the whole population being tested. This is the prior probability that we need in order to use Bayes’s theorem. If we return to the problem of tests of significance, it’s not so easy. The analogue of the prevalence of disease in the population becomes, in the case of significance tests, the probability that there is a real difference between the pills before the experiment is done – the prior probability that there’s a real effect. And it’s usually impossible to make a good guess at the value of this figure.

An example should make the idea more concrete. Imagine testing 1,000 different drugs, one at a time, to sort out which works and which doesn’t. You’d be lucky if 10 per cent of them were effective, so let’s proceed by assuming a prevalence or prior probability of 10 per cent.  Say we observe a ‘just significant’ result, for example, a P = 0.047 in a single test, and declare that this is evidence that we have made a discovery. That claim will be wrong, not in 5 per cent of cases, as is commonly believed, but in 76 per cent of cases. That is disastrously high. Just as in screening tests, the reason for this large number of mistakes is that the number of false positives in the tests where there is no real effect outweighs the number of true positives that arise from the cases in which there is a real effect.

In general, though, we don’t know the real prevalence of true effects. So, although we can calculate the p-value, we can’t calculate the number of false positives. But what we can do is give a minimum value for the false positive rate. To do this, we need only assume that it’s not legitimate to say, before the observations are made, that the odds that an effect is real are any higher than 50:50. To do so would be to assume you’re more likely than not to be right before the experiment even begins.

If we repeat the drug calculations using a prevalence of 50 per cent rather than 10 per cent, we get a false positive rate of 26 per cent, still much bigger than 5 per cent. Any lower prevalence will result in an even higher false positive rate.
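These figures can be reproduced, approximately, with Bayes’ theorem. The sketch below is an approximation, not the author’s own calculation: it uses a normal (z-test) model, assumes real effects are detected 80 per cent of the time (the same power as the screening example), and conditions on the p-value coming out exactly at 0.047, as the text describes. The original calculation uses t-tests, so its 76 and 26 per cent come out slightly different from the roughly 78 and 28 per cent here.

```python
import math

def phi(x):   # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):   # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def Phi_inv(q):   # inverse CDF by bisection
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if Phi(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def false_discovery_rate(p_obs, prior, alpha=0.05, power=0.80):
    """P(no real effect | the observed p-value equals p_obs)."""
    z = Phi_inv(1 - p_obs / 2)                       # |z| matching this p
    delta = Phi_inv(1 - alpha / 2) + Phi_inv(power)  # effect size giving this power
    # Likelihood ratio: density of this p-value under a real effect versus
    # under the null hypothesis (where the p-value is uniformly distributed).
    lr = (phi(z - delta) + phi(z + delta)) / (2 * phi(z))
    posterior_odds_real = lr * prior / (1 - prior)
    return 1 / (1 + posterior_odds_real)

print(round(false_discovery_rate(0.047, 0.10), 2))  # ~0.78 with a 10% prior
print(round(false_discovery_rate(0.047, 0.50), 2))  # ~0.28 with a 50% prior
```

With a 10 per cent prior, a ‘just significant’ single result is more likely than not to be a false discovery; even the most generous defensible prior, 50:50, leaves the error rate above a quarter.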

The upshot is that, if a scientist observes a ‘just significant’ result in a single test, say P = 0.047, and declares that she’s made a discovery, that claim will be wrong at least 26 per cent of the time, and probably more.

No wonder then that there are problems with reproducibility in areas of science that rely on tests of significance.

What is to be done? For a start, it’s high time that we abandoned the well-worn term ‘statistically significant’. The cut-off of P < 0.05 that’s almost universal in biomedical sciences is entirely arbitrary – and, as we’ve seen, it’s quite inadequate as evidence for a real effect. Although it’s common to blame Fisher for the magic value of 0.05, in fact Fisher said, in 1926, that P = 0.05 was a ‘low standard of significance’ and that a scientific fact should be regarded as experimentally established only if repeating the experiment ‘rarely fails to give this level of significance’.

The ‘rarely fails’ bit, emphasised by Fisher 90 years ago, has been forgotten. A single experiment that gives P = 0.045 will get a ‘discovery’ published in the most glamorous journals. So it’s not fair to blame Fisher, but nonetheless there’s an uncomfortable amount of truth in what the physicist Robert Matthews at Aston University in Birmingham had to say in 1998:

‘The plain fact is that 70 years ago Ronald Fisher gave scientists a mathematical machine for turning baloney into breakthroughs, and flukes into funding. It is time to pull the plug.’

The underlying problem is that universities around the world press their staff to write whether or not they have anything to say. This amounts to pressure to cut corners, to value quantity rather than quality, to exaggerate the consequences of their work and, occasionally, to cheat. People are under such pressure to produce papers that they have neither the time nor the motivation to learn about statistics, or to replicate experiments. Until something is done about these perverse incentives, biomedical science will be distrusted by the public, and rightly so. Senior scientists, vice-chancellors and politicians have set a very bad example to young researchers. As the zoologist Peter Lawrence at the University of Cambridge put it in 2007:

hype your work, slice the findings up as much as possible (four papers good, two papers bad), compress the results (most top journals have little space, a typical Nature letter now has the density of a black hole), simplify your conclusions but complexify the material (more difficult for reviewers to fault it!)

But there is good news too. Most of the problems occur only in certain areas of medicine and psychology. And despite the statistical mishaps, there have been enormous advances in biomedicine. The reproducibility crisis is being tackled. All we need to do now is to stop vice-chancellors and grant-giving agencies imposing incentives for researchers to behave badly.

This last paragraph is an egregious act of “FRAMING” – that is, diluting and denying what one has just said by establishing a “positive” CONTEXT: “But there is good news too,” “advances in biomedicine,” “crisis being tackled,” “it’s the vice-chancellors’ and grant-giving agencies’ fault” (not the poor beleaguered researchers who are “forced to” be dishonest!).