Aphantasia / Unable to Produce Mental Images

Aphantasia: I noticed someone had used this search term and reached this blog. Having not heard of aphantasia, I looked it up.

Aphantasia: A life without mental images, by James Gallagher, Health editor, BBC News website, 26 August 2015

“Most people can readily conjure images inside their head – known as their mind’s eye. But this year scientists have described a condition, aphantasia, in which some people are unable to visualise mental images.”

The article offered a short test. My result: “You scored 40 out of 40.”

This score suggests that your visual imagery is more vivid than usual. Scores at the upper end of this range are suggestive of ‘hyperphantasia’: exceptionally strong powers of visualisation. About 23% of people score in this range, the highest of our five bands. If you consider your imagery to be exceptionally strong, and would like to be included in future research, you can contact the team at Exeter University through this email: a.zeman@exeter.ac.uk
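Out of curiosity, the score-to-band mapping described above can be sketched in code. Everything below is a hypothetical reconstruction: the article doesn’t give the Exeter questionnaire’s item count, rating scale or band cut-offs, so those numbers are my assumptions.

```python
# Hypothetical sketch of a vividness-questionnaire scorer.
# The item count (10), per-item scale (0-4) and the five band
# cut-offs are assumptions, NOT the actual Exeter instrument.

def vividness_band(ratings):
    """Sum per-item vividness ratings and map the total to a band name."""
    total = sum(ratings)
    bands = [  # (low, high, label) - assumed five-band scheme over 0-40
        (0, 8, "aphantasia"),
        (9, 16, "low imagery"),
        (17, 24, "moderate imagery"),
        (25, 32, "vivid imagery"),
        (33, 40, "hyperphantasia"),
    ]
    for low, high, label in bands:
        if low <= total <= high:
            return total, label
    raise ValueError(f"total {total} is outside the 0-40 range")

# A respondent who rates all 10 items at the maximum of 4:
print(vividness_band([4] * 10))  # (40, 'hyperphantasia')
```

The point of the sketch is only that a single summed score, not any one item, decides the band – which is why a uniformly vivid imager lands in the top band.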

The test, though simple, made me aware that “demanding an image,” which is what the test does, makes visualizing even easier. I can zoom in on skin texture; see just a piece of clothing or the whole person, and can animate the person’s movement. I think this is because, as a visual Asperger, I view these details all the time, so they get saved in my memory. Someone who doesn’t pay attention to the environment simply wouldn’t have a detailed memory to “visualize.” Also, since I take many photographs, and have for years, I would imagine this activity increases visual memory; on the other hand, my interest in a wide range of images, from the entire landscape to detailed “natural” processes probably reflects my visual “hyperphantasia.”

On the other, other hand, I never photograph people, but can visualize people. It’s possible that visual Aspergers don’t look people in the eye (face) because the image is just too clear, detailed and intimate. (Like Fred’s nose) Do NT’s really want to know that this is how we see people?

How could I forget this face?

 

How well does your “Mind’s Eye” work? / Aphantasia

New York Times / SCIENCE / 6/22/2015

Picture This? Some Just Can’t

By Carl Zimmer

Certain people, researchers have discovered, can’t summon up mental images — it’s as if their mind’s eye is blind. This month in the journal Cortex, the condition received a name: aphantasia, based on the Greek word phantasia, which Aristotle used to describe the power that presents visual imagery to our minds.

In 2005, a 65-year-old retired building inspector paid a visit to the neurologist Adam Zeman at the University of Exeter Medical School. After a minor surgical procedure, the man suddenly realized he could no longer conjure images in his mind. Dr. Zeman couldn’t find any description of such a condition in medical literature. For decades, scientists had debated how the mind’s eye works, and how much we rely on it to store memories and to make plans for the future.

The patient agreed to a series of examinations. He proved to have a good memory and he performed well on problem-solving tests. His only unusual mental feature was an inability to see mental images. Dr. Zeman and his colleagues scanned the man’s brain as he performed certain tasks. First he looked at faces of famous people and named them. Certain regions of his brain became active, the same ones that become active in other people who look at faces.

Then the scientists showed him only names and asked him to picture their faces. In normal brains, some of those face-recognition regions again become active. In the patient’s brain, none of them did. The patient could however answer questions that would seem to require a working mind’s eye. He could tell the scientists the color of Tony Blair’s eyes, for example, and name the letters of the alphabet that have low-hanging tails, like g and j. These tests suggested his brain used some alternate strategy to solve visual problems.

Something remarkable happened: the patient was not alone.

“I have spent my entire life explaining to people that I do not think visually,” one reader wrote to me. “I cannot conjure a mental image of a person or of a place to save my life.” 

It turned out that Dr. Zeman and his colleagues were also hearing from people who thought they had the condition. The scientists decided to make a formal study of their email correspondents. They replied to emails with a questionnaire designed to probe the mind’s eye. All told, the researchers have received 21 responses.

The scientists asked their subjects to picture things like a sunrise. Try as they might, most of the respondents couldn’t see anything. But some of them did report rare, involuntary flashes of imagery. The mention of a friend’s name, for instance, might briefly summon a face. When the scientists asked their subjects to mentally count the windows in their house or apartment, 14 succeeded. They seem to share the ability to use alternate strategies to get around the lack of a mind’s eye.

All in all, Dr. Zeman and his colleagues were struck by how similar the results of the survey were.

“These people seemed to be describing something consistent,” Dr. Zeman said. Rather than being a unique case, the original patient may belong to an unrecognized group of people.

In their new report, the scientists note that many of the survey respondents differ in an important way. While the original patient started out with a mind’s eye, the others never did. If aphantasia is real, it is possible that injury causes some cases while others begin at birth.

Thomas Ebeyer, a 25-year-old Canadian student, discovered his condition four years ago while talking with a girlfriend. He was shocked that she could remember what a friend had been wearing a year before. She replied that she could see a picture of it in her mind.

“I had no idea what she was talking about,” he said in an interview. Mr. Ebeyer was surprised to discover that everyone he knew could summon images to their minds. “I’d been searching forever on Google, but I didn’t know what to look for,” he said. “It was really empowering just to hear a story of someone else who had it.”

Mr. Ebeyer got in touch with Dr. Zeman, who sent him the questionnaire. Like many other subjects, he could count his windows without actually picturing his house. “It’s weird and hard to explain,” he said. “I know the facts. I know where the windows are.” The new study has brought Mr. Ebeyer some relief. “There’s something I can call this now,” he said.

Dr. Zeman now wonders just how common aphantasia is. “Moderately rare” is his guess, but to follow up, he has sent the questionnaire to thousands of people in Exeter.

He hopes to find enough people with the condition to begin a bigger scanning study, comparing their brains with those of people who see vivid mental images. Speaking of which — Dr. Zeman said that he was interested in meeting more people with aphantasia. He can be reached at zeman@exeter.ac.uk.

_________________________________________________________________________


My need for “wide open natural spaces” may be a way of compensating for sensory overstimulation. Crowded human environments are unbearably “war-zone” like.

ME: I’ve italicized peculiar non-scientific phrases that refer to picture memory – “having a mind’s eye” would seem as if there is an organ in the mind that functions as an eye and it either works or it doesn’t. Verbs such as “conjure” and “summon” sound as if this is a magical process that requires some “hidden” effort. I don’t think these descriptions are peculiar to the writer, but are common usage.

As a visual thinker, what do I think of this? It’s almost impossible to know whether what I experience as visual memory is the same as that of people with “normal brains” (here we go again). Is my default visual processing an extreme form of what is normal, or something else entirely? I would say that I don’t conjure, summon or make an effort to “see” picture memories; they aren’t “conscious” but belong to an unconscious system. Conscious thought is word thought; visual thinking is intuitive / unconscious.

An analogy might be that I experience something like an ATM; the card that I insert contains a request for information or an answer. (The answer would be in the form of a pattern, connections, processes or images.) The request goes into the machine, which whirrs and clunks, and at some point (there is no timetable or deadline) an answer “appears”. (Don’t ask me where!) The images at work inside the machine are invisible – I don’t need to “see” them. The “answers” that emerge are visual: my brain has processed visual memory, so what else but images would result?

The results may be geometric arrangements, categories of impressions; connections, patterns and various types of relationships; not linear, but 3-D and “moveable”. Sometimes I can verbalize (translate into words) these visual answers; many I cannot. Often I “see” word concepts and structures as “visuals” – my brain converts words / written work into graphic relationships so that the “structure” of thought “pops out”. The point is, visual thinking utilizes visual memory – it’s not a photo album that one looks through to find a picture of Uncle Albert.

Okay visual thinkers: help me out! How do you experience visual thinking?

Understanding Neurodiversity / Neuro-exceptional Reality

 

I doubt that social typical people “get it” – that is, that being Aspie is a “way of being” that unfolds-develops-becomes a person with an “exceptional” sensory and brain processing type. Exceptional, not meaning “superior” but out of the ordinary, just as an athlete or musician may demonstrate “exceptional” talents and abilities in those areas.

We are born this way; the result is necessarily our own diverse “psychology” that varies between individuals, just as it does across Homo sapiens, but we share certain characteristics that seem to be “hard-core” or “wired in”, and this determines our “different” type of brain function. One is a dominant visual orientation. This in itself accounts for our peculiar-eccentric-strange affect and behavior in social situations. As I have outlined in other posts, like many Aspies I am adept at verbal language, but it is a “second language”, subordinate to my natural, inborn “intuitive” style of thinking. The crux is that Aspies are continually translating “picture ideas” into words; pictures are dense with information; images are cross-referenced and updated automatically, which makes “translation into word language” an arduous task. The amount of information “in our memories” is vast, and our “pattern – structure” processing and identification is suited to natural environments, which are products of specific matter-energy “rules”.

Modern manmade environments are like a “war zone” for our senses; this includes the seemingly never-ending bombardment of our awareness by harsh lighting, mechanical sounds, chemical irritants, and the intense “jarring” verbal exchanges between individuals and groups, which “sound like” ceaseless battles over getting attention and status seeking – conflicts which are never resolved, but must go on and on in order to maintain a hierarchical power structure that is rigid, but open to constant skirmishes “at the borders”. No one seems satisfied with stability; challenges and ugly attacks are acceptable. So is bullying, dishonesty, and rampant insincerity; inequality is required by the “structure” of the system; a pyramid of “who counts” necessitates discrimination, injustice and suffering.

So – you can see that the social order that we are tasked with fitting into, conforming to and embracing as “reality” encompasses a wide range of experiences, from physical environments to ethical and moral considerations. We spend a lot of time listening, observing and “self-protecting” – out of necessity. And yes, we “run away” when social conditions literally harm our precious equilibrium, which is a “totality” of feeling that derives from the natural world; from physical reality: that “bedrock of all existence” which social people ignore. The logic of forms, structures and forces which create physical reality is the environment that “matches” our physiology and sets the parameters of our psychology.

What is especially disturbing is that the “helping, caring, fixing industry” (which is what I call the profit-making industry that rakes in over $9 BILLION per year in the U.S. alone in “autism” revenues) is a mega-business that literally owns ASD-Autism-Asperger’s-Mental-Illness because its “priests” control the definition of “pathologic human behavior” and present these “socio-cultural judgements” to the public and to government agencies as “universal standards” – not true! The industry receives millions in tax-payer funding; fuels Big Pharma profits, and they really don’t care whose lives they “screw up”.

It is in their primary interest ($$$) to treat “us” like a commodity (like chickens, hogs or soybeans) – and to control that commodity in the “marketplace”. “Chronic illness” of every type (such as diabetes, heart disease) is so profitable because each patient becomes a lifelong “slave” to the medical system. Ring a bell? That’s what has been done to “autism” – a manufactured epidemic sucking in billions and “trapping” hundreds of thousands of people, from birth to death, in the profit system. The last thing the HCFI wants is accurate diagnosis or effective treatment. As it stands, almost any child can be thrown into the “autistic” pot because there is no “credible” diagnosis; autism in the U.S. today is no more “real” than the hysteria provoked by an ever-expanding intrusion into the lives of children and their families by the “helping, caring, fixing” industry, which holds incredible power over the fate of children in the U.S. This “witchcraft”-type inquisition, in which children are labeled as “outcasts” from birth, merely for social reasons, is intolerable, and cheats children who do have identifiable, treatable disabilities out of receiving care; limited funding turns treatment opportunities into a political competition, fueling more fear and hysteria.

Unrealistic demands that supposedly define “normal” childhood development and performance subject children and their parents to threats of social condemnation and exile. This is an especially cruel consequence for social typical Americans whose “value” depends on their status in the social hierarchy. The Social Pyramid is always the underlying measure of “who gets what” in our “land of opportunity”. As for ASD “folks”, we aren’t even on the pyramid; our indifference to status, our “world view” and way of being really do prevent us from being accepted as “legitimate” human beings. Still – we inevitably are involved in surviving within a majority “magical-irrational-social reality” that dominates the U.S. Try to comprehend the accommodations this situation requires. It is “common sense” that we suffer “symptoms-reactions” that are not internal to our native “being” but are the result of severe challenges and “rejection” by people who cannot accept that Homo sapiens is a highly diverse species, and that modern social humans are not “the one and only” definition of what it is to be “human”.

 

Questioning Autism Research / NatGeo blog Virginia Hughes

A post published by the Nat-Geo blog, PHENOMENA, and written by Virginia Hughes, provides vindication of my ongoing opposition to misguided Autism and Asperger research. Someday we must confront the truth: that Asperger’s is a social diagnosis based on the neurotypical prejudice that there is only one human brain type. It is impossible for 7 billion Homo sapiens to be identical; to repeat a stiflingly narrow set of culturally-determined behaviors based on Euro-American models of psychological social correctness. Our strength as a species is that we do not all think and behave in lockstep; diversity of thought and perception has produced innovation – by individuals, not societies, which do their best to hamper healthy human development.

Category Fail, Virginia Hughes

(click title to access blog)

I’ve written a lot of stories about autism research, and I’d say one of the biggest scientific developments in the past few years was the creation of ‘autistic’ mice. Researchers first found many, many genes associated with (translation – no provable connection) autism in people, and then created dozens of mouse models that carry one or more of those same genetic glitches. In the fall of 2011, for example, one team debuted mice with extra copies of a gene called UBE3A. Approximately 1 to 3 percent of children with autism carry extra copies of the same gene. (Can this tiny percentage be claimed as significant? No.) These mutant mice show little interest in social interactions, compared with controls. They also emit fewer vocalizations and repetitively groom themselves. (Dear reader, you do realize that autistic people are perceived to be “big autistic rats” by people who torture animals for a living?) This was heralded as something of an autism trifecta, as the animals mimicked the three ‘core’ symptoms of people with the disorder: deficits in social behaviors and in communication, as well as repetitive behaviors. (How great and ridiculous a leap, equating mice grooming themselves to repetitive behavior in humans.)

The same goes for mouse models based on environmental, rather than genetic triggers. Mice whose mothers got an infection while pregnant end up with abnormal social interactions and vocalizations, and they repetitively bury marbles. Well – that certainly proves something! If you are an Asperger reading this, you have the right to feel insulted.

Once again, the animals show all three “core deficits,” (deficits that are invented symptoms of an invented disorder) and are thus considered to be a valid model of autism. (A truly pathetic conclusion – what’s next? They sacrifice the rats; drink the blood and chant “funding, funding, funding”) There’s a nice and tidy logic to this approach, understandably appealing to neuroscientists. If a mouse model mimics the three behaviors used to define autism, then studying the cells and circuits of those mice could lead us to a better understanding of the human disorder. (This is not logic, this is magical thinking; Save the poor rats, make autistic “voo-doo dolls” and stick pins in them)

But there’s a big hole in that logic,

according to a provocative commentary published by Eric London in this month’s issue of Trends in Neurosciences. The problem is that the symptoms of autism — like those of all psychiatric disorders — vary widely from one person to the next. So using the fuzzy diagnostic category of ‘autism’ to guide research, he writes, “is fraught with so many problems that the validity of research conclusions is suspect.”

London begins with a short history of the Diagnostic and Statistical Manual of Mental Disorders, or DSM, the book that since 1980 has dictated what collections of symptoms define one disorder or another. There’s nothing wrong with a categorical diagnosis, per se. It can have enormous explanatory power. If a doctor diagnoses you with strep throat, for example, you have a good idea of what that is (a bacterial infection) and how you might treat it (antibiotics).

“A psychiatric diagnosis, by contrast, is rarely as informative,” London writes. People diagnosed with schizophrenia, bipolar disorder, depression, or autism often don’t know what caused the trouble, and they struggle with unpredictable symptoms, ineffective treatments, and unpredictable responses to those treatments. What’s more, most people who fall into the bucket of one psychiatric disorder also meet criteria for others.

London cites some fascinating numbers: Some 90 percent of people with schizophrenia, for example, have another diagnosis as well. More than 60 percent of people with autism have another diagnosis, and one-quarter have two or more. “Autism is comorbidly present in over 50 specific diagnoses comprising other genetic and medical conditions,” London writes.

The three supposedly core behaviors of autism don’t correlate well with each other, he adds. In other words, many kids just have one or two of the three. Francesca Happé has conducted many studies suggesting that each of these symptoms is inherited independently, suggesting that each has its own, separate biological cause.

The danger of focusing on these three behaviors is that it might cause clinicians and researchers to overlook other symptoms that are common in people with autism. Many kids with autism have gastrointestinal issues, for example, and many show a range of motor problems, such as head lag, trouble sitting up, or a wobbly gait. And more than 80 percent of people with autism have anxiety, London notes. (Gee whiz; I wonder why, when social typicals are so darn nice to us?)

Mouse models of the disorder may have some of these problems, too, but researchers don’t usually test for them.

The DSM has tried to address some of these problems. Its latest version, released last year, defines autism with two criteria: social and communication deficits, and repetitive behaviors. But London doesn’t think that goes nearly far enough, for all the reasons outlined above. He proposes an even broader category of “neurodevelopmental disorder,” which would include more than 20 different DSM categories, including autism and schizophrenia. Just as they do today, clinicians could still focus on specific symptoms — whether sensory sensitivities, anxiety, psychosis, attentional problems, etc. — when deciding how to treat each person.

London’s commentary is only the latest in an old debate about diagnoses: Is it better to lump, or to split? Some scientists agree with him, others don’t, and I see merit in the scientific arguments on both sides. One point I think sometimes doesn’t get enough attention, though, is the social power of a diagnosis.

These labels carry meaning, for better or worse. For people with mysterious illness, such as chronic fatigue syndrome, a label can make them feel acknowledged and validated, or completely marginalized. Diagnoses for brain disorders, such as Asperger’s syndrome, can unite people under a common identity, or create dangerous societal stigma. Rational diagnostic categories are crucial for scientific progress, as London argues.

But scientists would do well to remember that their labels also have lasting consequences outside of the lab.

(But that would require empathy)


Biology of Emotional Behavior / Neuroscience Article

Published in Dialogues in Clinical Neuroscience 2002 Sep; 4(3): 231–249.

The biology of fear and anxiety-related behaviors

Thierry Steimer, PhD

From the Abstract:

In a book published in 1878 (Physiologie des passions), Charles Letourneau, who was contemporary with the French neuroanatomist Paul Broca, defined emotions as “passions of a short duration” and described a number of physiological signs and behavioral responses associated with strong emotions.1 Emotions are “intimately linked with organic life,” he said, and either result in an “abnormal excitation of the nervous network,” which induces changes in heart rate and secretions, or interrupt “the normal relationship between the peripheral nervous system and the brain.” Cerebral activity is focused on the source of the emotion; voluntary muscles may become paralyzed and sensory perceptions may be altered, including the feeling of physical pain. (Note that this is a description of a physiological event) This first phase of the emotional response is followed by a reactive phase, where muscles come back into action, but the attention still remains highly focused on the emotional situation.

With the knowledge of brain physiology and anatomy that was available at the end of the 19th century, hypotheses on the mechanisms possibly involved in emotions were of course limited. However, Letourneau assumed that “the strong cerebral excitation” that accompanies emotions probably only concerned “certain groups of conscious cells” in the brain and “must necessitate a considerable increase of blood flow in the cell regions involved.” (Curious – can a cell be “conscious”?)

He also mentioned that the intensity, the expression, and the pathological consequences of emotions were directly linked to “temperaments” (which he defined within the four classic Hippocratic categories). Note that hypotheses and speculation by early investigators are often grandfathered in as theories, by default – and become the guiding “concepts” of contemporary science, often without question. The reverse is also common: “good science” from the past may be dismissed, merely on the basis that “new” is better: the myth of inevitable linear progress!

The fact that emotions are “intimately linked with organic life,” his precise description of the sequence of the physiological and behavioral reactions that accompany a strong emotion, such as fear, the idea that emotions involve specific areas of the brain, and the theory (hypothesis, guess) that activation of these areas is associated with an increased blood flow have all been largely confirmed (waffling) by modern neuroscience. The suggestion (mandatory waffling since the following statement isn’t provable by scientific standards) –  that temperament or personality traits influence the “affective style” and vulnerability to psychopathology is also an important aspect of our modern approach to anxiety and mood disorders. Is this a description of a physiological phenomenon or an opinion advanced by Hippocrates?

_____________________________

See post: https://aspergerhuman.wordpress.com/brain-scans-dead-salmon  

Also search my blog: “neuroscience” “brain scans” for multiple related posts

___________________________________________________ 

For a long time, emotions were considered to be unique to human beings, and were studied mainly from a philosophical perspective.3 Evolutionary theories and progress in brain and behavioral research, physiology, and psychology have progressively introduced the study of emotions into the field of biology, and understanding the mechanisms, functions, and evolutionary significance of emotional processes is becoming a major goal of modern neuroscience. But! The takeover of human behavior – its definition as “pathology” by psychology (not a science) – is already defeating this revolutionary science-based inquiry.

Three fundamental aspects of emotions

The modern era of emotion research probably started when it became obvious that emotions are not just “feelings” or mental states, but are accompanied by physiological and behavioral changes that are an integral part of them. Technically this is backwards: the physiology of organisms’ reactions to the environment, as produced by evolutionary processes, preceded by billions of years the manmade practice of “naming” those reactions as “emotions” – and claiming that emotion is exclusive to humans. The “exclusivity” idea that “emotion” is a phenomenon that occurs only in humans is utterly preposterous. Animals (and all organisms) could not exist without reacting to and interacting with the environment; it’s logically and physically impossible.

The socio-religious belief that our species is a special creation, and that the universe is merely a stage-set for MAN’s magnificence, is obnoxious – and despite claims that this narcissistic focus on MAN has been magically removed from the “human sciences”, it obviously has not been.

The “levels” scheme below is not a reformation of prior mistakes, but functions to retain socio-religious “metaphysical” control of human behavior, disguised as the pseudoscience of modern psychology. By piggy-backing onto neuroscience, “priestly” power to define and enforce the social stratification of behavioral privilege (at the top of the hierarchy) and rampant inequality is retained by pathologizing group after group of “lesser” humans. Nice trick!!!

This has progressively led to today’s view of emotions being experienced or expressed at three different, but closely interrelated levels: Here we go: everything must be split into levels, regardless of how nature – our brain – actually works. The mental or psychological level (dominated by “approved” socio-religious prescriptions), the (neuro)physiological level (what the brain-body does), and the behavioral level (socio-religious enforcement – social control). These three complementary aspects are present in even the most basic emotions, such as fear.

more at PubMed

Joseph Campbell on the Functions of Myth

Joseph Campbell:

“Myth basically serves four functions. The first is the mystical function,… realizing what a wonder the universe is, and what a wonder you are, and experiencing awe before this mystery….The second is a cosmological dimension, the dimension with which science is concerned – showing you what shape the universe is, but showing it in such a way that the mystery again comes through…. The third function is the sociological one – supporting and validating a certain social order…. It is the sociological function of myth that has taken over in our world – and it is out of date…. But there is a fourth function of myth, and this is the one that I think everyone must try today to relate to – and that is the pedagogical function, of how to live a human lifetime under any circumstances.”

 

(Literal) Human Sacrifice and the Social Order

I would add that “literal” human sacrifice (soldiers in war) is not the only type of “sacrifice” utilized to prop up the socio-economic order that is American Capitalism. Also, that human sacrifice originated as cannibalism that was later “ritualized” to serve social purposes.  
Scientific American Arts & Culture

How Human Sacrifice Propped Up the Social Order

By Philip Ball, April 5, 2016

Understanding the role of state-sanctioned killing does more than illuminate the social evolution of “premodern” cultures

James Frazer’s classic anthropological study The Golden Bough contains a harrowing chapter on human sacrifice in rituals of crop fertility and harvest among historical cultures around the world. Frazer describes sacrificial victims being crushed under huge toppling stones, slow-roasted over fires and dismembered alive.

Frazer’s methods of analysis wouldn’t all pass muster among anthropologists today (his work was first published in 1890), but it is hard not to conclude from his descriptions that what industrialized societies today would regard as the most extreme psychopathy has in the past been seen as normal—and indeed sacred—behaviour.

In almost all societies, killing within a tribe or clan has been strongly taboo; exemption is granted only to those with great authority. Anthropologists have suspected that ritual human sacrifice serves to cement power structures—that is, it signifies who sits at the top of the social hierarchy.

Sacrifice for social order

The idea makes intuitive sense, but until now there has been no clear evidence to support it. In a study published in Nature, Joseph Watts, a specialist in cultural evolution at the University of Auckland in New Zealand, and his colleagues have analysed 93 traditional cultures in Austronesia (the region that loosely embraces the many small and island states in the Pacific and Indonesia) as they were before they were influenced by colonization and major world religions (generally in the late 19th and early 20th centuries).

By delving into ethnographic records, the researchers tried to tease out the relationship between human sacrifice and social hierarchy. They find that the prevalence of sacrifice increased with the degree of social stratification: it occurred in 25% of cultures with little or no stratification, 37% of those with moderately stratified societies, and 67% of those that had a pronounced hierarchy.
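The prevalence figures above amount to a grouped proportion: for each stratification level, the share of cultures in that group that practised sacrifice. As a rough sketch of that calculation (the records below are invented stand-ins, not Watts et al.’s actual 93-culture dataset):

```python
from collections import defaultdict

def sacrifice_prevalence(records):
    """records: iterable of (stratification_level, practised_sacrifice) pairs.
    Returns the share of cultures practising sacrifice at each level."""
    counts = defaultdict(lambda: [0, 0])  # level -> [practising, total]
    for level, practised in records:
        counts[level][1] += 1
        if practised:
            counts[level][0] += 1
    return {level: practising / total
            for level, (practising, total) in counts.items()}

# Invented stand-in records, NOT the actual ethnographic data:
sample = [
    ("egalitarian", True), ("egalitarian", False),
    ("egalitarian", False), ("egalitarian", False),
    ("moderate", True), ("moderate", False), ("moderate", False),
    ("high", True), ("high", True), ("high", False),
]
print(sacrifice_prevalence(sample))
```

This mirrors only the 25% / 37% / 67% style of summary; the study’s co-evolution claim rests on phylogenetic analysis, which goes well beyond a cross-sectional comparison like this.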

And by mapping the evolutionary relationships between cultures, the team suggests that human sacrifice and social hierarchy co-evolved. Although societies can become more or less stratified over time, societies that practised sacrifice were less apt to revert to milder degrees of stratification.

In other words, human sacrifice seems to bolster stratification: it helped to stabilize hierarchy, and conceivably, therefore, had a common role in the development of highly stratified societies that generally persist even today.

Religious undertones

Human sacrifice seems to have been largely the privilege of priests or others who claimed religious authority. Watts and colleagues say that their results therefore disclose a “dark side” to the social role of religion. (They have previously shown that belief in supernatural punishing agencies in Austronesian cultures encouraged moral observance, and thereby promoted the emergence of stratified and complex social structures.)

There’s a danger of overgeneralization from any study of this kind. Human sacrifice is no more likely than, for instance, music to have had a single role in early societies. In the third century BC, for example, the Chinese administrator Li Bing eliminated the sacrifice of young maidens to a river god during the conquest of Sichuan by the First Emperor. Some have suggested that he called the bluff of a local racket in which families rid themselves of unwanted daughters while getting rich on the compensation they received. Whether or not that is true, it’s easy to imagine how rituals could be abused for prosaic gain.

And even in Austronesia, add Watts’s team, sacrifice wasn’t always conducted for purely religious reasons. It could have other motivations, including to punish taboo violations, demoralize underclasses, mark class boundaries and instil fear of social elites, all of which aim at building and maintaining social control. For this reason, says Michael Winkelman, an anthropologist now retired from Arizona State University in Tempe, “I suspect that Watts et al. are assessing some general notion of social legitimated killing.”

Such considerations complicate any interpretation of Watts’s results, but they also give them considerably more contemporary resonance.

Death-penalty parallels

By today’s standards, human sacrifice scarcely seems to fall within the norms of good morality. But one doesn’t need to be a moral relativist to accept that the connections between human sacrifice, obedience to authority and stable governance persist. To perceive a link between ancient, “savage” human sacrifices and the death penalty in some modern societies isn’t to exaggerate or indulge in melodrama, as Winkelman’s remarks testify.

Certainly the suggestion could seem glib, and the parallels cannot be taken too far. Unlike today’s death penalties, traditional ritual sacrifice was generally for religious purposes and it tended to exhibit no bloodlust or contempt for the victims. Often they were seen as godlike, and before their sacrifice, they might be treated with reverence and affection, and perhaps fed well like the biblical fatted calf. The remains of the dead body—it’s not even clear whether the word “victim” is appropriate—were imbued with power. If the flesh was chopped up, it was to share out this potent relic among the tribe.

Yet a contemporary state’s arrogation of the right to slaughter through the death penalty—breaking an otherwise rigid prohibition—still serves as, among other things, a demonstration of authority and a ritual of appeasement, whether towards supposed religious strictures or public opinion.

To future anthropologists, whatever explanations or justifications states offer today for imposing capital punishment may seem less revealing than the broader view of how such sanctified killing reinforces the social order. We can expect time’s retrospective gaze to lay bare the real reasons why we, no less than the ancient Aztecs or Samoans, valorize murder.

This article was first published on April 5, 2016.

see also:

Washington Post / The ‘darker link’ between ancient human sacrifice and our modern world   / April 5, 2016

 

“Asperger’s” as a Distinct Diagnosis / The Social Battle

“With them it’s always the dollars. Always the fuckin’ dollars.” Nicky Santoro, Casino, 1995.

**************************************************************

The Case Against Asperger’s

It can be reasonable or disingenuous, inclusive or segregationist.

by Lucy Berrington  / Psychology Today, Oct 21, 2012

Lucy Berrington is a Massachusetts writer and the parent of a teen with Asperger syndrome. She serves on the Board of Directors of the Asperger’s Association of New England.

It seems likelier than not that the Asperger’s diagnosis will disappear from the Diagnostic and Statistical Manual of Mental Disorders (DSM). Seven months from now, the DSM-5 will be with us, and Asperger syndrome technically won’t. Actually, it’s hard to be sure about the date, since sticking to the publication timeline has not proven a strength of the American Psychiatric Association. (May we diagnose an institutionalized executive function disorder? Or “abulia”, the neurological term for loss of drive?)

Outside the APA, the case against Asperger’s as a diagnosis has intriguingly crossed party lines. Many who advocate for the acceptance of autism have reached the same conclusion as some of the pro-cure factions: Asperger’s must go! This is a startling phenomenon the like of which we might never see again. And their reasons, needless to say, are exactly opposite. I’m with the Asperger’s must stay! party, and later I’ll explain why. But here’s the case against, as I understand it — and please weigh in if I miss your particular Asperger’s peeve or if you disagree with my ratings. Inevitably, some of these arguments overlap.

The consensus: the DSM-IV approach to autism isn’t working well. Clinicians have complained (and research has shown) that differences between the autistic subtypes are largely subjective. Which means your diagnosis might be influenced by what your clinician had for breakfast or the Korean boy band track the kids insisted on in carpool. To accurately diagnose the various forms of autism would require a Sorting Hat of the Hogwarts type, with all its unfathomable wisdom, announcing of each client, “Asperger’s,” or “PDD-NOS,” or “Autistic Disorder!”, and never to be second-guessed.

Instead, the APA has applied its own special magic and conjured up the supposedly all-embracing Autism Spectrum Disorder.

Whether the new criteria improve on the old ones is beyond the scope of this blog post (though here’s an excellent evaluation). Nevertheless, the old ones were problematic enough that the APA had a strong case for making revisions. Strangely, though, it wasn’t strong enough for the APA, and several members of the autism committees threw in additional, more objectionable rationalizations, which I’ll get to later. The result? Much clashing of wands and broomsticks, and suspicions that perhaps the motives of the APA were more sinister than at first appeared. I give this argument 3/5. Would have been 4/5 if the APA had handled it more persuasively.

AGAINST: Asperger’s is separatist and elitist, the label of choice for those distancing themselves from classic autism. In the same vein, mainstream media and pop culture have bestowed on Asperger’s syndrome benign, even flattering stereotypes (quirky geniuses and the like), promoting the acceptance of a small percentage of autistic people while most are still largely excluded.

Like when the coolest kid pals up with a token nerd (the cute one) in a smug display of reasonableness, but continues to persecute the others. This is the argument of many self-advocates associated with the neurodiversity movement. It’s a real issue, but does it justify abolishing Asperger’s?

Maybe the recent cultural prominence of Asperger’s promotes a more mature lay understanding of autism and of developmental conditions, helping to shatter the myth that autism is disastrous. I accept that it’s a fine line, and fine lines are not easily drawn in our impulsive, melodramatic 24/7 news cycle. But still. Roy Richard Grinker, anthropologist and autism researcher, has said the Asperger’s diagnosis “broadened the public understanding of autism as a spectrum,” and “helped previously undiagnosed adults to understand their years of feeling unconnected to others, but without bestowing what was considered the stigma of autism.” He concluded, though, that Asperger’s has done its work, and the stigma of autism is no longer severe enough to justify retaining the Asperger’s diagnosis. I know self-advocates who agree with his conclusion but nevertheless protest his idea that the stigma around autism has seriously diminished. Implicitly, his initial argument remains valid. I appreciate the inclusionary instinct, though, so give this 3.5/5.

AGAINST: The separate Asperger’s category suggests it is a “mild” form of autism, and its challenges mild also.

This “mildness” is a false Asperger stereotype. It minimizes the challenges faced by Aspergerians and their need for support and accommodations. And it reinforces the false dichotomy of “high/low functioning” autism. Autism is a complex, multi-dimensional condition. Its highly variable manifestations don’t necessarily correspond in a codifiable way with degrees of severity or need. Many Aspergerians face profound struggles in multiple areas of life, including relationships, education, employment, housing and health.

Again, a real issue – but the abolition of the label doesn’t necessarily resolve it, and could introduce equivalent problems. 2.5/5.

AGAINST: Asperger’s isn’t autism anyway.

This tends to come from pro-cure advocates representing classically autistic people. Their argument: anyone who can effectively self-advocate is not disabled enough to be autistic. Here’s autism campaigner Lenny Schafer leading the charge against Aspergerians: “let us hope that the upcoming DSM-V gets clearer about defining autism only as a disability — and kicks the high functioning ND [neurodiversity] autism squatters onto the personality disorder spectrum where they belong.” WOW! Nasty! Asperger individuals just can’t “get no respect.” Everyone hates us…

My Aspergerian friend Phil, a well-connected advocate, describes the attempt to disenfranchise those with “lesser” forms of autism as “a sort of perversion of Groucho Marx’s famous quip that he wouldn’t ever belong to a country club that would have him as a member. The perversion is that ‘anybody who can effectively self-advocate is ipso facto not disabled enough to be considered truly autistic’ and should be voted off the island, through reclassification if by no other means.”

The case that Asperger’s isn’t autism seems to be based on misunderstandings of autism, the perception that neurodiversity-based arguments do not address some of the challenges associated with “classic” autism, and (well-grounded) fears around insufficient resources. I score this at 0.5/5. That half point represents my sympathy around the general struggle for services.

Politics aside (or, at least, not exactly central), there’s a case being made that Asperger syndrome sometimes isn’t autism, or autism as we must come to understand it. A study analyzing the impact of the new criteria suggests that 10% of the children who’d receive a diagnosis under DSM-IV requirements would not qualify under DSM-5. The new criteria seem likely to exclude those whose language struggles are largely pragmatic.

Which brings us to….

AGAINST: The Asperger’s diagnosis has been too liberally applied, pathologizing social awkwardness and straining educational budgets.

Note that the “discussions” about “who has what” are substantially socio-economic-political arguments about the HIERARCHY of status among “defective people”.

We may have no place on the “Neurotypical Social Pyramid,” but we get our own little pyramid based on “classes of defective people” – kind of like being seated at the Children’s Table at Thanksgiving dinner; only we never get to “grow up” and move to the Big People’s Table.

____________________________________________________________________________________

The Diagnosis Formerly Known as Asperger’s

Protesting and honoring Asperger’s in the troubled end of the DSM-IV era
 
By Lucy Berrington, Psychology Today, Sep 29, 2012

In the months after my young son was identified as having Asperger syndrome, I wondered whether he would outgrow his diagnosis. I never imagined that his diagnosis would outgrow him. Still, that’s happening with changes to the diagnostic criteria and terminology relating to autism. This is far more than a medical technicality. (It isn’t medical at all: it’s social.) For autistic people and their allies, the issue is packed with implications for identity, community, and access to education, health services and legal protections. Raging opinion has long since burst the banks of the autism blogosphere and spilled into the mainstream media.

The revisions to the Diagnostic and Statistical Manual of Mental Disorders by the American Psychiatric Association are a worthy attempt to simplify and clarify the diagnostic criteria and terminology for autism. Forgive me if I don’t lovingly embrace them. The proposed changes are radical, and clumsy handling by the APA has not helped them go over. The fifth edition of the DSM is due out in May next year. Among its various revisions, it will eliminate Asperger’s as a formal diagnosis and fold Aspergerians into the broad Autism Spectrum Disorder category. (And, in theory, people diagnosed with Pervasive Developmental Disorder Not Otherwise Specified — although some seem destined for the new Social Communication Disorder diagnosis, which the APA insists is not a form of autism.)
____________________________________________________________________________________________
WHAT A MESS! This is not science or medicine. This is a “power” struggle over who gets to “define” the status (value) of human beings – and it’s all about money.  Psychology, as a tool of socio-economic control, serves to “normalize” abuse of individuals labeled as lesser beings; American Capitalists have built a highly profitable industry of “exploitation of defectives for profit.”
____________________________________________________________________________________________

When my son was born in 1996, the Asperger’s diagnosis was only two years old. Asperger’s Disorder, as it is known in the inherently pathologizing terminology of the APA, had been formally introduced to researchers and clinicians in DSM-IV, published in 1994. This followed years of demands from autistic people and their allies for due recognition and support. Nevertheless, the principle of the new umbrella autism diagnosis makes some sense. Plenty of Aspergerians have already put themselves into the autism category, resisting the Asperger’s label for admirable reasons that I’ll look at in my next post. But at the Asperger’s Association of New England (AANE), based in Watertown MA, where I serve on the Board of Directors, we witness the value of the Asperger’s diagnosis to many — and I’ll get to their reasons too.

The arguments over the Asperger’s label don’t align neatly with the usual autism factions. Some who reject it are motivated by inclusion and solidarity. Others appear to do so for reasons of exclusion and hierarchy. Needless to say, these groups aren’t snuggly bedfellows. Autism politics has been a treacherous business since long before the APA launched its latest debacle. Planning this blog, I’m thinking about the importance of not unfairly judging people whose experiences and perspectives I don’t share — and I’m uncomfortably aware that this decision could itself look like an unfair judgment on others.

What am I doing here, anyway? I’m the non-autistic parent of an Aspergerian-slash-autistic teen son and a typically developing tween. My friends include autistic adults who provide generous consultation services on all matters autistic (actually, on all matters) and straightforward (read: polite yet merciless) feedback on my rough drafts. They’ll make frequent appearances here. I’m a consumer of online coverage by self-advocates who engage with and tolerate me in varying degrees. I chair a fledgling Advocacy Committee at the AANE and edit its recently re-launched blog, AspBlogosphere. My involvement with the AANE inevitably influences my perspective, although my posts here do not represent its positions. And I’m part of a committee responsible for a pioneering Standardized Patient program at Tufts University School of Medicine, Boston, in which autistic people educate medical students about autism and the barriers to health care they encounter. My professional background is in journalism; I’m a candidate for a Master of Science at Tufts, and I welcome good-faith feedback and criticism. I’ll be reporting and remarking on issues, events and research affecting the Asperger’s community — families, friends, clinicians, and (primarily) people with Asperger’s and related profiles, regardless of which diagnostic label works for them. That which we call a rose by any other name would smell as sweet.

I’m reminded of the traditional chant on the death of a monarch, which acknowledges the immediate transfer of sovereignty to the heir. “The king is dead! Long live the king!”

And so:

“Asperger’s is dead! Long live Asperger’s!”

The fine Aspergerian mind will always be with us. As for the label, I’ve a feeling it won’t easily be wrested from those who find it helpful. I envision my son as an old man in 2070, croakily confiding that he has Asperger’s — or autism. And I’ll guess that whichever terminology he favors (and if we’ve done our jobs right) whoever’s listening will have a pretty decent sense of what he means.