How “moral” can a culture be, when the basis of morality is obedience to immoral beliefs?
We hear it over and over again: Racism is “alive and well” in American culture. Child abuse is epidemic. Domestic violence needs to stop. Drugs… “We” have to “have a discussion”… blah, blah, blah. But the discussion that we never have, and never will have, is about the ORIGIN of this vicious theme in our culture: the Judeo-Christian legacy of Biblical hatred for “lesser beings” and, in fact, hatred of all nature. A fundamental worldview dominated by rage against women, children, and “living things” – the objects of “Top Male” rage.
Prostitution of boys and girls, human trafficking, and sexual abuse are standard “religious” practice in patriarchies. These activities “glorify God” and “coincidentally” empower the predators who created “god” in their own image. And it’s always about the “dollars,” isn’t it? Note that “unequal value” is still the justification for unequal pay for women. This is the definition of pornography: humans treated as objects.
Gee Whiz! Not only is a woman supposed to be a “doormat,” she’s supposed to be ecstatic about it. That’s the definition of sadism… Any question as to why sex = violence in American life?
“Love” is supposed to exist in the “social typical” blind spot that demands that we “adore and obey” those who hurt us.
Child sexual abuse: it’s a family tradition.
This is not “old preaching” – contemporary Christianity thrives on obedience to thugs and con men.
Christian neoteny: The Jesus Cult. It’s me, me, me. There’s nothing “moral” about believing that your trivial nonsense is the object of an imaginary supernatural power’s all-consuming interest – that you are so important (indeed, the center of attention of the universe) that “god” must obey your every infantile command. Screw everyone else!
Despite decades of striving toward equality, gender biases appear prevalent among researchers in psychology. In a recently published study, the investigators found that psychology researchers most often compare females against an implicit male norm, rather than examining each group on its own terms or comparing males against a female norm.
When research finds that men and women differ psychologically, which group seems to be more responsible for the difference? Where gender differences were observed in the research examined, they were more often described as being about females than about males. Males are treated as the standard for the typical human subject. The research was published in the Review of General Psychology by University of Surrey (UK) psychologists Peter Hegarty and Carmen Buechel. The authors systematically surveyed forty years of gender-difference research in four journals published by the American Psychological Association.
“Even the graphs and tables show evidence of the male-norm effect,” said Hegarty. “About three-quarters of these positioned men’s data first, and made women the second sex. But this effect was reversed when psychologists depicted data about parents.” The conclusion? Men may be the prototype for modern psychology’s picture of the typical person, but mothers remain the most typical kind of parent.
The data are all the more striking as between 1965 and 2004 the journals studied ceased to be male-dominated. Roughly equal numbers of the study authors and roughly equal numbers of the participants in the studies now published in these journals are male and female. Hegarty doesn’t find it surprising that this shift in the body politic of psychology didn’t undo the male-norm effect. “In laboratory experiments, both women and men tend to spontaneously explain gender differences using a male norm and to attribute differences to females to the same degree. In our study, male and female authors of psychology articles focused their explanations on women to the same degree. Psychologists are not always aware of their implicit decisions about who to explain.”
Is the focus on women and girls a problem? Probably. (Probably???)
Hegarty has shown in other research on sexual orientation differences that stereotype-relevant results are explained in ways that perpetuate stereotypes about the group that is not taken as the norm; lesbians and gay men, in that case. Hegarty and Buechel also found that psychologists vastly preferred the phrase ‘more than’ over ‘less than’ when explaining gender differences. Put this together with the male-norm effect and you could reach the absurd conclusion that women and girls have more psychology than men and boys do.
Hegarty, a social psychologist himself, is optimistic about what the findings imply for the status of psychology. “They clearly show an area where more critical thinking is needed about gender, but on the other hand psychological methods allowed us to bring this issue to light and to describe it. Our conclusion is not that psychologists should not study group differences, but that we serve the public better when we think deeply about the ways that we implicitly frame questions about whose behavior is the default standard norm and whose is made the subject of psychological scrutiny.”
0 to 2 “NO! IT GET ME!”
A young baby’s world revolves around her own experiences. Those experiences are dominated by physical sensations, such as a gas bubble or a soft blanket, with blurred distinctions between herself and the rest of the world. She lives in the moment. For example, 4-month-old Jessica is fascinated by a toy her teacher is holding. She stares at it intently. Yet, when the toy is dropped out of view, Jessica doesn’t look down to find it. She simply looks at another object that is in her direct line of sight. Her behavior implies, “I see the toy, therefore it exists. I don’t see the toy, and it doesn’t.” Her worldview is a series of images based on her own experiences rather than a sequence of logical events. (We may see this persist as “inattention” and novelty-seeking in older children and neotenic adults.)
Moments of Magical Thinking
By 12 months, an infant’s thinking becomes more rooted in the reality that objects and people remain the same even when out of sight. This concept of object permanence, along with an expanding memory, makes the baby’s life a bit more predictable. But, she still often misinterprets reality. For instance, 1-year-old Jemima voices displeasure and is frightened when a toy unexpectedly rolls just a few inches toward her. The world is a mystical place, and babies have a fragile understanding of the difference between animate and inanimate objects. (American culture promotes this confusion: “entertainment” aimed at children and adults is saturated with just this infantile perception and presentation of reality.)
Seeing is Believing
When working with toddlers, it’s important to remember that they will make connections that are illogical and frustrating. (Neotenic social typical adults continue to produce “magical” connections as explanations for any and all phenomena. This literally is what drives Asperger types “crazy” when interacting socially.)
No amount of reassurance (or factual information) is going to immediately convince 16-month-old Ashley (or neotenic adults) that she can’t slip down the bathtub drain like the sliver of soap just did. In cases such as this, you can recommend to parents that they temporarily let the toddler bathe standing up, supporting her while she stands on a safety mat fastened to the tub’s surface. They can reassure her that she is too big to go down the drain and that they will keep her safe. It’s important to respect toddlers’ fears and to understand that, for them, it is often the case that seeing is believing: “The soap slipped down the drain, so I can, too.”
Moving Toward Abstract Thinking
At around 18 months, emerging language and long-term memory pull toddlers out of the purely sensory world into more complex, abstract thinking. They begin to grasp concepts such as cause and effect. (Cause and effect is almost impossible for many adults to comprehend; their “development” is stuck at this stage – “culture” and social pressure either “affirm” faulty designation of cause as “magical-supernatural” and/or fail to “teach” and develop reasoning skills.) Difficulties begin because their reasoning, which seems quite logical to them, has little connection with reality. For example, 20-month-old Jason spills a small amount of juice on the table just before a baby in the room lets out a piercing cry. Jason’s expression becomes very sad and serious. We can’t know for sure – that’s the challenge of caring for preverbal children – but Jason may think his accidental action caused the baby to cry.
A thriving 2-year-old is a busy scientist actively exploring and creating his own theories about how things work. Julian loves to turn lights on and off. Does he think it is his fingertip that magically creates light and dark? Or, is it the blinking of his eyes that he does each time he flicks the switch? Two-year-olds do not have enough information about the world yet to draw reasonable conclusions. (American education fails to supply information about reality – math, science, nature – it confirms the infantile belief that reality is created by “emotional demand” and by spells, chants, rituals – consumerism, “brand” shopping, “free” money-credit – that will “magically” fulfill narcissistic focus. Narcissism is necessary to infants; in adults it is destructive.)
Comment: Emotions ARE SOCIALLY CONSTRUCTED when learning verbal language. See: https://aspergerhuman.wordpress.com/2015/12/03/empathy-and-emotion-are-words-that-describe-pain/
The Chronicle Review, The Chronicle of Higher Education
(Bold highlights are mine)
By Lisa Feldman Barrett March 05, 2017
On a brisk fall day in 2006, I was sitting on the floor of my former office in the Boston College psychology department, weeding through boxes of old journal articles on the science of emotion. As I perched in the center of a pile, I came across a tattered paper by a psychologist named Elizabeth Duffy, dated 1957, titled “The Psychological Significance of the Concept of Arousal or Activation.” I vaguely remembered reading it in graduate school, but the details were foggy. Probably worth rereading, I thought, and spared it from the recycling bin.
I had no idea that this action would lead me to unearth two major errors in psychology and a half-century of lost research.
Before I can tell you that story, you’ll need to understand how the science of emotion came to be. Most scientists who study it would relate a history roughly like this:
Once upon a time, people believed that the human mind was bestowed by gods or God. Emotions, in contrast, were said to live within the body, like an inner beast that needed to be controlled by divine, rational thought. In the 19th century, Charles Darwin replaced God with natural selection, and shortly thereafter, psychology was born. A golden age of emotion research began, as neurologists and physiologists searched for the physical basis of emotions. They discovered that emotions live in ancient parts of the brain that control the body: the mythical “inner beast” made real. These scientists’ triumph was short-lived, however, as the science of emotion soon plunged into a “dark ages.” Psychology fell prey to a scourge known as behaviorism, the study of pure behavior, in which intangibles like thoughts and feelings were deemed unmeasurable and therefore irrelevant to science. Nothing worthwhile was published on emotions for half a century. Then the cognitive revolution arrived, in the 1960s, rescuing psychology from the darkness, and the science of emotion experienced a renaissance. Emotions were discovered once and for all to have distinct and universal facial expressions, bodily patterns, and brain circuitry, and we all lived happily ever after.
Pick up any psychology textbook or read Wikipedia, and you’ll see some variation of that story: that emotions are inherited through natural selection and located in specific parts of the brain that trigger distinct reactions — the “fingerprints” of emotion — in the face and body. See a snake slither across your path, for example, and a “fear circuit” is said to cause your heart to race, your eyes to widen, your voice to shriek. If you’ve ever heard that emotions live in a “limbic system” in the brain, that you have a “lizard brain” that triggers your emotions, or that fear lives in a region called the amygdala, those ideas are rooted in the same story. So is the movie Inside Out, a children’s fantasy about emotions as individual characters in the brain, which was described by National Public Radio as “remarkably true to what scientists have learned about the mind, emotion, and memory.”
The story of how we came to the classical view of emotion has influenced generations of scientists, educated millions of students, and set the course of psychological research for decades. But it’s a fiction. The details about Darwin, the dark ages of behaviorism, and the subsequent rescue and renaissance bear only a passing resemblance to the facts. That’s what Elizabeth Duffy’s paper was about to teach me.
An extensive body of research points to a wholly different view of what emotions are. They are not caused by dedicated brain circuits that, in certain circumstances, flip on and make you feel and move a particular way. Rather, emotions are whole-brain affairs. Happiness, surprise, anger, and the rest are constructed in the moment by general-purpose systems throughout the brain, the same systems that create thoughts, memories, sights, sounds, smells, and other mental phenomena. The name for this alternative view is “construction,” and my particular approach is called the theory of constructed emotion.
Construction eschews “fingerprints” and points out the variety of emotion in real life. In anger, your heart rate might go up, go down, or stay the same. Your eyes might widen, narrow, or close. The so-called fingerprints of emotion, like a grimace and elevated blood pressure for anger, are merely cultural stereotypes. They are reinforced by popular TV shows like Daredevil and Lie to Me, in which people’s innermost thoughts and feelings are revealed by facial movements and heartbeats. My lab has copious data showing that emotions have no consistent patterns in the face, body, and brain, however, including a meta-analysis of 22,000 test subjects across more than 220 studies of peripheral physiological changes during emotion, and another meta-analysis of every published neuroimaging study of emotion.
I had begun graduate school believing in the classical view of emotion and its dignified history. By the time I encountered Elizabeth Duffy’s paper, I’d been publishing about construction for several years. However, I still believed the part about behaviorism, when nothing much happened in emotion research from about 1910 to 1960. Behaviorism redefined emotions as observable behaviors: Fear was defined as freezing in place; happiness as a tasty treat at the end of a maze. Many psychologists today consider the period of behaviorism to be scientifically bankrupt, producing little knowledge of any value about the human mind.
Reading Duffy’s paper, what caught my eye was the list of references at the end. Two of them, from the 1930s and ’40s, also written by Duffy, were unknown to me, which was odd because their titles sounded remarkably relevant to my research. When I tracked them down, I was dumbfounded. Duffy was making exactly the same points that I had made in a recent paper, questioning whether the scientific evidence on emotion really supports the classical view. But she’d done it 70 years earlier, when supposedly nobody was studying such things.
Her two papers were clearly crucial to the field. Why hadn’t I heard of them? Back in my office, I searched and located a few authors who had cited Duffy here and there over the past 60 years, but for the most part, the field had overlooked her.
I had stumbled onto a mystery. But I didn’t know how big it was going to get.
Duffy’s references led me to several other unfamiliar papers that tried in vain to locate emotion fingerprints. Unlike behaviorists, these researchers weren’t saying that emotions don’t exist. They were running experiments to find physical markers of distinct emotions, failing to do so, concluding that the classical view was unjustified, and speculating about what would later be called construction.
The list of references kept growing, and soon I had more than a dozen of these mystery papers, enough to make me wonder what the hell was going on. Together with one of my sharpest graduate students, I hunted for more papers in earnest and started buying rare, used psychology texts online. My husband was bemused by the steady stream of small packages from Amazon and the timeworn books inside them. We bought another bookcase. Then another.
Little by little, I headed backward in time. From Duffy and her peers in the 1930s and ’40s, to a trove of obscure work dating back to the turn of the century, and then to textbooks on emotion written in the mid to late 1800s. My new bookshelves creaked. I was looking at a mountain of research that was critical of the classical view: more than 100 little-known works spanning at least five decades.
Once I’d reached back into the 1800s, I turned to the work of luminaries in the field of emotion, including Charles Darwin and William James, that I’d last encountered in graduate school. This time around, rather than read bits and pieces or interpretations by other scholars, I pored over the original books in their entirety. They were eye-opening in ways I had not expected.
First up was Darwin’s The Expression of the Emotions in Man and Animals, which has been lauded for more than a century for demonstrating that facial expressions are useful and functional products of natural selection. I was stunned to discover that the book says nothing of the sort. Natural selection is barely mentioned, and Darwin never claims that facial expressions are functional. Quite the opposite: He repeatedly calls them vestigial and “purposeless”! Virtually everyone in my field, for reasons unknown, was citing Darwin’s ideas on emotional expressions inaccurately.
After Darwin, I reread William James, considered a father of modern psychology. James is widely known for saying that every type of emotion has a distinct fingerprint in the body. You can find this claim about James in undergraduate textbooks, in scholarly papers, and in best sellers. And yet, the more James I read in the original, the less plausible the claim became. A whole section in his classic Principles of Psychology, Volume 2, is titled “No Special Brain-Centres for Emotion.” And I kept encountering criticisms of the idea of emotion fingerprints, such as “ ‘Fear’ of getting wet is not the same fear as fear of a bear” (in “The Physical Basis of Emotion“). Ultimately, I discovered that James had been wildly misinterpreted. He never said that every type of emotion has a distinct bodily state. He said every instance of emotion may have a distinct bodily state — in other words, variety is the norm. That is the opposite of a fingerprint.
After some research, I uncovered how Darwin’s and James’s words had become twisted into these alternative meanings. In both cases, other scientists had reinterpreted the original text, and their modifications were wrongly attributed back to Darwin and James. Each mistake has endured for a century, becoming a firm yet false basis of the classical view of emotion, misleading generations of students, and wasting billions of dollars of research money in search of emotion fingerprints.
My findings implied an entirely different history of emotion research, one that is not kind to the classical view. Darwin and James could no longer be seen as the foundation of this view, and the so-called dark ages had actually been a period of tremendous innovation and evidence against the view.
So, how did these errors and oversights happen? Were 50 years’ worth of research papers accidentally overlooked, actively ignored, or intentionally suppressed? As with most historical events, there’s probably more than one cause.
A first possibility is that the “dark ages” of emotion never existed. What people call “history” is just a representation of the past that helps make sense of the present. People are creative historians who craft a story somewhere between fact and fiction. (Therapists know this, as does anyone who has tried online dating.) The history of scientific ideas is no exception.
One example is the “flat earth” myth. Students today learn that people of the Middle Ages thought the world was flat, and that Columbus set sail to prove it round. But that history is not true. The myth was propagated in the early 19th century to embellish a story about how the Age of Reason (science) triumphed over the ignorance of faith (religion).
Scientific progress sounds more impressive when it’s portrayed as a beacon of light suddenly appearing after decades or centuries of darkness, when in actuality those ideas have been around for ages. It’s possible that in a similar manner, the so-called dark ages of emotion research were manufactured to make the “renaissance” of the classical view viable.
A more mundane possibility is that the ideas of Duffy and her colleagues never took root because they did not offer a fully formed alternative model to compete with the classical view. They had a critique of the dominant scientific view, but dissent alone was not enough to remain relevant. As the philosopher Thomas Kuhn wrote about the structure of scientific revolutions: “Because there is no such thing as research in the absence of a paradigm, to reject one paradigm without simultaneously substituting another is to reject science itself.”
But the most likely reason that the classical view persisted, I believe, is that it’s not just a view of emotion. It also represents a compelling story of what it means to be a human being. It says that you are an animal at the core, at the mercy of automatic emotions that you regulate by that most human of abilities, rational thought. This view of human nature is deeply embedded in society. It’s in the legal system, which distinguishes between calculated crimes, such as first-degree murder, and crimes of passion, in which your emotions “take you over” and you are partially absolved of responsibility. It’s in economics, forming the foundation of theories about rational and irrational investors. It’s in health care, as autistic children are taught stereotypical facial poses ostensibly to help them recognize emotions in others. It’s in stereotypes of men versus women, in which women are believed to be innately more emotional than men.
Construction theories of emotion are an ambassador for an entirely different view of human nature. Your mind cannot be a battleground between animalistic emotions and rational thoughts, because the brain has no separate systems for emotion and cognition. Instances of both are constructed by the same set of brainwide networks working collaboratively. Scientists didn’t know this in Elizabeth Duffy’s time, but modern neuroscience has confirmed it.
In addition, the classical view of human nature, with its tale of ancient emotion circuits robed in rationality, depicts humankind as the pinnacle of evolution. Construction uncomfortably dislodges us from this honored position. Yes, we’re the only animal that can design nuclear reactors, but other creatures eat our lunch when it comes to other abilities, like remembering fine details (a strength of the chimpanzee brain) or even adapting to new situations (where bacteria reign supreme). Natural selection did not aim itself toward us — we’re just an interesting sort of animal with particular adaptations that helped us survive and reproduce. Construction teaches us that our brain is not more highly evolved, just differently evolved. That’s a humbling message to swallow in Duffy’s time and in ours.
We might never know why 50 years of research fell off the map. What is most important is to rediscover what was lost. Today we can peer harmlessly into a living human brain, and we have computers to gather and process data. It’s pretty clear that emotions are constructed, not lurking in dedicated brain circuits. At long last, we are on a scientific path marked by the data, rather than ideology, to understand emotion and ourselves.
Lisa Feldman Barrett is a professor of psychology at Northeastern University and the author of How Emotions Are Made: The Secret Life of the Brain (Houghton Mifflin Harcourt) published this month.
American Academy of Pediatrics, Pediatrics, November 1998, Volume 102, Issue Supplement E1
Excerpt: The relation of language and emotion in development is most often thought about in terms of how language describes emotional experiences with words that name different feelings. Not surprisingly, therefore, developmental studies of emotion and language typically have described how children acquire emotion labels, such as “mad,” “happy,” “scared.”1–3 However, children typically do not begin to use these words until language development is well underway, at approximately 2 years of age. Other studies have described how caregivers use emotion words when talking to their infants in the first year. Caregivers are very good, almost from the beginning, at attributing particular emotions to a young infant’s cries, whines, whimpers, smiles, and laughs, for example, “what a happy baby,” “don’t be so sad,” “are you angry?”4,5 However, once infants begin to learn language, mothers are far less likely to name a child’s emotion than to talk about the situations and reasons for the child’s feelings and what might be done about them.6,7
This research emphasis on the words that name emotions has at least these two limitations. First, the number of emotion words in the dictionary is small—at most, a few dozen terms for emotions and feeling states—compared with the enormous number of names in a dictionary for objects and actions. Second, the emotional expressions of infants and young children generally are transparent in their emotional meaning. Thus, the label for an emotion is very often redundant with its expression and adds no new information. Given the relatively small number of words for naming feelings and emotions, and the redundancy between emotion words and the expressions they name, understanding how emotion and language are related in early development requires looking beyond just acquisition of specific emotion words.
The core of development that brings an infant to the threshold of language in the second year of life is the convergence of emotion, cognition, and social connectedness to other persons.8,9 Children learn language initially because they strive to connect with other persons to share what they are feeling and thinking. When language begins toward the end of the first year, infants have had a year of learning about the world. The results of their cognitive developments have given children contents of mind—beliefs, desires, and feelings—that have to be expressed because they are increasingly elaborated and discrepant from what other persons can see and hear in the context. Language expresses and articulates the elements, roles, and relationships in mental meanings in a way that a child’s smiles, cries, frowns, and whines cannot. Language, then, emerges in the second year out of a nexus of developments in emotion, social connectedness, and cognition.
For the past 10 years, I have been studying how language comes together with the cognitive, emotional, and social developments of the first 3 years of life,8 with the basic assumption that language acquisition is tied to other developments in a child’s life. The knowledge we set out to explain was language: how children learn words in the second year and then learn to combine words for phrases and simple sentences in the beginning of the third year. Early words are fragile, imprecise, and emerge tentatively at the same time that emotional expressions are robust, frequent, and fully functional. We asked, therefore, how these two systems of expression—emotion and language—come together in the second year of a child’s development. We looked at both the content of developments in emotional expression and language as well as at the process of their interaction.
The model of development that guided our research (Fig 1) built on the link between two well-known concepts in psychology: engagement and effort. Knowledge of language is represented here by the tripartite model of language that Peg Lahey and I introduced 20 years ago. Linguistic form—sounds, words, and syntax—is only part of language, albeit the part that attracts the most attention. Form necessarily interacts with content, or meaning, because language is always about something. And form and content interact with the pragmatics of language use: language is used in different situations, for different purposes and functions. No single component, notably form alone, can by itself be a language. Rather, language is, necessarily, the convergence of content, form, and use.10
Many questions about the complex developmental relationship between language and emotion remain for additional research, but our findings provide some insight into the effort and engagement required by both language learning and emotional expression. We propose that the heart of language acquisition is in the dialectic tension between the two psychological components of effort and engagement (Fig 1).
To begin with, a language will never be acquired without engagement in a world of persons, objects, and events—the world that language is about and in which language is used. The concept of engagement embraces the social, affective, and emotional factors that figure into language learning. Other persons and the social context are required, because the motivation for learning a language is to express and interpret contents of mind so that child and others can share what each is thinking and feeling (the principle of discrepancy).
Affect and emotional expression are required for establishing inter-subjectivity and sharing between child and caregiver before language and also for motivating a child’s attention and involvement with people, objects, and events for learning language. The relevance of adult behavior is ensured when adults tune into what a child is feeling and thinking.
Asperger comment: If the caregiver is ONLY INTERESTED in his or her own expectations of what “ought to be” going on in the child’s mind, and rejects or ignores what the child is feeling and thinking, then this “motivation” for learning and using language may be blunted or severely damaged.
Language is learned when the words a child hears are about the objects of engagement, interest, and feelings—about what the child has in mind (the principle of relevance). In turn, children use the language they are learning for talking about the things they care about—the objects of their engagement.
Asperger comment: Ridiculing an ASD or Asperger child’s interests, which is what happens consistently (the train-schedule cliché), cutting the child off in conversation, and responding angrily to “stupid topics that no one wants to hear about” guarantee feelings of shame, rejection, and withdrawal from social interaction.
Acquiring language requires effort, first, for setting up the meanings in consciousness that language expresses or that result from interpreting the expressions of others. Second, additional effort is required for learning the increasingly complex language needed to express and articulate the increasingly elaborated mental meanings that are made possible by developments in cognition (the principle of elaboration). And third, effort also is required for coordinating different kinds of behaviors—such as talking, expressing emotion, and playing with objects (as described by Bloom and associates11)—that make up the ordinary activities of a young child’s life. Neither speech nor emotional expression occurs in isolation; they are always and necessarily embedded in complex events.
In summary, language and emotion are related in complex ways in the process of development. Language is created by a child in the dynamic contexts and circumstances that make up the child’s world, and acquiring a language requires both engagement and effort. A child’s feelings and emotions are central to engagement with the personal and physical world and determine the relevance of language for learning. And the effect of the effort needed to coordinate cognitive, emotional, and linguistic resources for learning language is to recruit states of neutral affect for attention and processing. Children who began to learn words early spent more time in neutral affect (the Asperger “Little Professor” label?), whereas children who learned words somewhat later expressed more emotion instead. Effort also was apparent in the timing relation of speech and emotional expression at the transition to sentences, especially for the later language learners.
By the time language begins, toward the end of the first year, emotional expression already is well established, and children do not need to learn the names of the emotions to tell other people what they are feeling. But they do need to learn the language to tell other people what their feelings are about.
Asperger comment: However, the constant “indoctrination” as to which feelings are socially approved, and which are socially forbidden, denies the child expression of “negative” emotion – expression that is necessary if children are to learn how to “deal with” inevitable feelings of anger, frustration and discord between people. This is especially true for male children and developmentally diverse children, who are literally “shut down” by adult disapproval of their interests and feelings.
Language does not replace emotional expression. Rather, children learn language for expressing and articulating the objects and circumstances of their emotional experiences while they continue to express emotion with displays of positive and negative affective tone.
American “emotional intelligence” never gets past this judgmental social view of emotion! Americans are so consumed by the “power” of words, that we honestly believe that banning the use of “bad words” magically turns anger into love and racism into equality. This is the root of politically correct policing of language. All it does is prevent any serious discussion about conditions that very much need serious discussion.
If only we taught children that emotions are fleeting physical reactions – and that rather than banning socially proscribed emotions, a learnable “stop and think” step is a path to emotional maturity. Children need to understand our ability to choose how to handle all types of emotion. But! Americans are addicted to anger, rage and violence…
Autism affects multiple aspects of the cerebral anatomy, which makes its neuroanatomical correlates inherently difficult to describe. Here, we used a multiparameter classification approach to characterize the complex and subtle gray matter differences in adults with ASD. A support vector machine (SVM) achieved good separation between groups, and revealed spatially distributed and largely non-overlapping patterns of regions with highest classification weights for each of five morphological features. Our results confirm that the neuroanatomy of ASD is truly multidimensional, affecting multiple neural systems. The discriminating patterns detected using SVM may help further exploration of the genetic and neuropathological underpinnings of ASD.
There is good evidence to suggest that several aspects of cerebral morphology are implicated in ASD—including both volumetric and geometric features (Levitt et al., 2003; Nordahl et al., 2007). However, these are normally explored in isolation. Here, we aimed to establish a framework for multiparameter image classification to describe differences in gray matter neuroanatomy in autism in multiple dimensions, and to explore the predictive power of individual parameters for group membership. This was achieved using a multiparameter classifier incorporating volumetric and geometric features at each cerebral vertex. In the left hemisphere, SVM correctly classified 85.0% of all cases overall, at a sensitivity and specificity as high as 90.0% and 80.0%, respectively, using all five morphological features. This level of sensitivity compares well with behaviorally guided diagnostic tools, whose accuracies are on average ∼80%. Naturally, one would expect lower sensitivity values than the test used for defining the “autistic prototype” itself (i.e., ADI-R). Thus, if a classifier is trained on the basis of true positives identified by diagnostic tools, the maximal classification accuracy that could be reached is only as good as the measurements used to identify true positives.
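To make the classification setup concrete: the study trains a linear SVM on per-vertex morphological features and evaluates it with leave-one-out cross-validation, reporting sensitivity (ASD cases correctly flagged) and specificity (controls correctly cleared). The sketch below illustrates that pipeline in miniature using scikit-learn; the feature matrix is a synthetic stand-in, not the authors' FreeSurfer-derived data, and the group shift is invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-in: 40 subjects (20 controls, 20 "ASD"), each described by
# a vector of vertex-wise morphological features (thickness, convexity, ...).
n_per_group, n_features = 20, 500
X = np.vstack([
    rng.normal(0.0, 1.0, (n_per_group, n_features)),  # controls
    rng.normal(0.3, 1.0, (n_per_group, n_features)),  # "ASD" (invented shift)
])
y = np.array([0] * n_per_group + [1] * n_per_group)   # 1 = ASD

# Linear SVM evaluated with leave-one-out cross-validation,
# mirroring the small-sample design typical of such studies.
clf = SVC(kernel="linear")
pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())

sensitivity = (pred[y == 1] == 1).mean()  # true-positive rate for ASD
specificity = (pred[y == 0] == 0).mean()  # true-negative rate for controls
accuracy = (pred == y).mean()
print(f"accuracy={accuracy:.2f}  sens={sensitivity:.2f}  spec={specificity:.2f}")
```

The point of the leave-one-out loop is that each subject is classified by a model that never saw that subject during training, which is what makes the reported sensitivity/specificity honest estimates rather than training-set fits.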
The significant predictive value of pattern classification approaches may have potential clinical applications. Currently, ASD is diagnosed solely on the basis of behavioral criteria. The behavioral diagnosis is, however, often time consuming and can be problematic, particularly in adults. Also, different biological etiologies might result in the same behavioral phenotype [the “autisms” (Geschwind, 2007)], which is undetectable using behavioral measures alone. Thus, the existence of an ASD biomarker such as brain anatomy might be useful to facilitate and guide the behavioral diagnosis. This would, however, require further extensive exploration in the clinical setting, particularly with regard to classifier specificity to ASD rather than neurodevelopmental conditions in general.
To address the issue of clinical specificity, the established ASD classifier was used to classify individuals with ADHD—a neurodevelopmental control group. Bilaterally, the ASD classifier did not allocate the majority of ADHD subjects to the ASD category. This indicates that it does not perform equally well for other neurodevelopmental conditions, and is more specific to ASD. To further demonstrate that the classification is driven by autistic symptoms, the test margins of individuals with ASD were correlated with measures of symptom severity (Ecker et al., 2010). We found that larger margins were associated with more severe impairments in the social and communication domain of the ADI-R. The classifier therefore seems to use neuroanatomical information specifically related to ASD rather than simply reflecting nonspecific effects introduced by any kind of pathology. However, due to a recent scanner upgrade, ADHD scans were acquired with different acquisition parameters, while manufacturer, field strength, and pulse sequence remained the same. FreeSurfer has been demonstrated to show good test–retest reliability particularly within scanner-manufacturer and field strength (Han et al., 2006), but we cannot exclude the possibility that systematic differences in regional contrast may have affected the ADHD classification. Future research is thus needed to validate the ADHD findings on an independent sample.
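The “test margin” analysis described above has a simple geometric meaning: a linear SVM assigns each subject a signed distance from the separating hyperplane, and the authors correlated those distances with ADI-R symptom scores. A minimal sketch of that idea follows; the data, the ADI-R score range, and the variable names are all invented stand-ins, and for brevity the margins here come from a model fit on the full sample, whereas the study used margins of held-out test subjects.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in: features for 20 controls and 20 "ASD" subjects,
# plus a hypothetical ADI-R social-domain score for each ASD subject.
n, d = 20, 100
X = np.vstack([rng.normal(0.0, 1.0, (n, d)), rng.normal(0.5, 1.0, (n, d))])
y = np.array([0] * n + [1] * n)
adi_social = rng.integers(10, 30, size=n).astype(float)  # invented scores

clf = SVC(kernel="linear").fit(X, y)

# Signed distance of each ASD subject from the separating hyperplane
# (the "test margin"): larger positive values = deeper in ASD territory.
margins = clf.decision_function(X[y == 1])

r, p = pearsonr(margins, adi_social)
print(f"margin vs ADI-R social: r={r:.2f}, p={p:.3f}")
```

A positive correlation between margin and symptom severity is what supports the claim that the classifier is tracking ASD-related anatomy rather than generic pathology; with the random scores used here, no real correlation is expected.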
The overall classification accuracy varied across hemispheres (79.0% left vs 65.0% right) in the absence of interhemispheric differences in parameter variability. Hemisphere laterality is an area that remains relatively unexplored in autism. While our data suggest that the left hemisphere is better at discriminating between groups (i.e., is more “abnormal”), it is unclear whether this discrepancy is due to quantitative differences in parameters or to qualitative aspects of the discriminating patterns (i.e., additional regions). Furthermore, it is not possible to identify whether individuals with ASD display a higher (or lower) degree of cortical asymmetry relative to controls. There is some evidence to suggest that individuals with ASD show a lower degree of “leftward” (i.e., left > right) cortical symmetry than controls (Herbert et al., 2005), which may explain differences in classification accuracy. There is also evidence to suggest that the left hemisphere is under tighter genetic control than the right hemisphere (Thompson et al., 2001), which may be of relevance to a highly heritable condition such as ASD. However, a direct numerical comparison between hemispheres is needed to address this issue.
The classification accuracy not only varied across hemispheres but also across morphometric parameters. Bilaterally, cortical thickness provided the best classification accuracy and highest regional weights. Differences in cortical thickness have been reported previously in ASD for both increases (Chung et al., 2005; Hardan et al., 2006) as well as decreases (Chung et al., 2005; Hadjikhani et al., 2006), and in similar regions as reported here (i.e., parietal, temporal, and frontal areas). The overlap with previous studies indicates that these regions display high classification weights due to a quantitative (i.e., “true”) difference rather than high intercorrelations with thickness measures in other brain regions.
Certain geometric features such as average convexity and metric distortion provided above chance classifications as well, particularly in parietal, temporal, and frontal regions, and in areas of the cingulum. Average convexity and metric distortion measure different aspects of cortical geometry (see Materials and Methods) and have previously been linked to ASD, as has sulcal depth (Nordahl et al., 2007). Such geometric features were suggested to reflect abnormal patterns of cortical connectivity. There have also been reports of abnormal patterns of gyrification (Piven et al., 1990; Hardan et al., 2004) and large-scale displacements of the major sulci (Levitt et al., 2003). Thus, our study provides further evidence to support the hypothesis that the “autistic brain” is not just bigger or smaller but is also abnormally shaped.
While we demonstrated that the neuroanatomy of ASD is multidimensional, the etiology of such multivariate differences remains unclear. Here, little or no spatial overlap was observed between the discriminating patterns for individual parameters. Such region dependency was also observed in the regional morphometric profiles displaying the distribution of weights across multiple cortical features in a region of interest. If one assumes that different cortical features reflect different neuropathological processes, such region- and parameter-dependent variations may reflect the multifactorial etiology of ASD. For example, evidence suggests that cortical thickness and surface area reflect different neurobiological processes and are associated with different genetic mechanisms (Panizzon et al., 2009). Cortical thickness is likely to reflect dendritic arborization (Huttenlocher, 1990) or changing myelination at the gray/white matter interface (Sowell et al., 2004). In contrast, surface area is influenced by the division of progenitor cells in the embryological periventricular area, and is associated with the number of minicolumns (Rakic, 1988). Instead, geometric differences are predominantly linked with the development of neuronal connections and cortical pattern of connectivity, and are thus a marker for cerebral development (Armstrong et al., 1995; Van Essen, 1997). It is therefore likely that the reported maps reflect multiple genetic and/or neurobiological etiologies, which need further investigation. Thus, our findings should be interpreted in the context of a number of methodological limitations.
First, the classification algorithm is highly specific to the particular sample used for “training” the classifier, namely high-functioning adults with ASD. The advantage of this approach is that the classifier offers high specificity with regard to this particular subject group, but it is less specific to other cohorts on the spectrum. Due to the small sample size, it was also not possible to reliably investigate differences between high-functioning autism and Asperger’s syndrome. Evidence (Howlin, 2003) suggests that by adulthood these groups are largely indistinguishable at the phenotypic level. However, the extent to which these groups differ at the level of brain anatomy is unknown, and may be investigated using SVM in the future. Second, 85% of ASD participants in our sample were diagnosed using the ADI-R, and 15% were diagnosed using the ADOS. As both diagnostic tools measure autistic symptoms at different developmental stages, the classifier may be biased toward individuals with an early diagnosis of ASD. Although classifier performance on the basis of ADOS and ADI is not expected to differ drastically, diagnostic heterogeneity may be a potential limitation. Last, SVM is a multivariate technique and hence offers a limited degree of interpretability of specific network components. Additional analyses such as “searchlight” or “virtual lesion” approaches (Averbeck et al., 2006; Kriegeskorte et al., 2006; Pessoa and Padmala, 2007) may therefore be combined with SVM in the future to establish the relative contribution of individual regions/parameters to the overall classification performance.
Nevertheless, while classification values and specific patterns we report must be considered as preliminary, our study offers a “proof of concept” for describing the complex multidimensional gray matter differences in ASD.
by Jann Ingmire, MedicalXpress.com
Neuroscience research demonstrates that the brain regions underpinning moral judgment share resources with circuits controlling other capacities such as emotional saliency, mental state understanding and decision-making. Credit: Jean Decety
Psychologists have found that some individuals react more strongly than others to situations that invoke a sense of justice—for example, seeing a person being treated unfairly or mercifully. The new study used brain scans to analyze the thought processes of people with high “justice sensitivity.” (One of the “negative symptoms” of Asperger’s is an innate concern for justice, fair play and honesty.)
“We were interested to examine how individual differences about justice and fairness are represented in the brain to better understand the contribution of emotion and cognition in moral judgment,” explained lead author Jean Decety, the Irving B. Harris Professor of Psychology and Psychiatry.
Using a functional magnetic resonance imaging (fMRI) brain-scanning device, the team studied what happened in the participants’ brains as they judged videos depicting behavior that was morally good or bad. For example, they saw a person put money in a beggar’s cup or kick the beggar’s cup away. The participants were asked to rate on a scale how much they would blame or praise the actor seen in the video. People in the study also completed questionnaires that assessed cognitive and emotional empathy, as well as their justice sensitivity.
As expected, study participants who scored high on the justice sensitivity questionnaire assigned significantly more blame when they were evaluating scenes of harm, Decety said. They also registered more praise for scenes showing a person helping another individual.
Keith J. Yoder and Jean Decety
In the past decade, a flurry of empirical and theoretical research on morality and empathy has taken place, and interest and usage in the media and the public arena have increased. At times, in both popular culture and academia, morality and empathy are used interchangeably, and quite often the latter is considered to play a foundational role for the former. In this article, we argue that although there is a relationship between morality and empathy, it is not as straightforward as apparent at first glance. Moreover, it is critical to distinguish among the different facets of empathy (emotional sharing, empathic concern, and perspective taking), as each uniquely influences moral cognition and predicts differential outcomes in moral behavior. Empirical evidence and theories from evolutionary biology as well as developmental, behavioral, and affective and social neuroscience are comprehensively integrated in support of this argument. The wealth of findings illustrates a complex and equivocal relationship between morality and empathy. The key to understanding such relations is to be more precise on the concepts being used and, perhaps, abandoning the muddy concept of empathy.
Fresh morning; at an elevation of 6100′ our desert cools at night, every night. Hot summer days are balanced by magical evenings of long low light that seem to be unreal, like living in a painting. Right now though, our first week of temps in the 70s is predicted, which doesn’t mean that we won’t get a snow squall sometime in June.
A type of “anticipatory anxiety” surrounds the change into “real summer” – and the subtle emotional interference of knowing that “summer is brief” adds a biting dread. The West is not at all like those “homey” green and tree-enclosed environments of the Midwest, where I grew up, and which I fled, like a migrating salmon or whale, pulled by a magnetic tether or by a few molecules of home waters mingled in the currents.
Unlike many rural locations, those born here tend to stay – leaving for college, the military or a ‘job’ in a big city, but returning eventually, like the salmon, to raise a family, quietly, automatically, reflexively. Without an economic rebound from the usual “boom and bust” cycle of extractive industries, without much immigration, and thanks to the simple demographics of the baby boomers, the town is ageing rapidly. “We” old folks ought to move on to sunny winter havens; some do, but most of us are glued to this place by knowledge; by experience in the “outer world” – the U.S. one sees on TV. Frantic, insane, polarized. It’s as if no one remembers that they are human; technology drives millions, like sardines or anchovies in a bait ball that “supposedly” offers “safety in numbers” – a temporary illusion.
The bait ball – A “visual analogy” for modern social urban environments…
A small number of Homo sapiens “stay” in our Wyoming desert instead of migrating through the area as Native Americans and trappers did, thanks to an assist from geography and technology: a year-round supply of river water flowing out of mountains to the north, the transcontinental railway, and the Interstate system. Unlike snow monkeys, we don’t drink our own poo, thanks to a water processing system. And thankfully, we’re not “cute” and don’t attract unwanted attention.
Many people are not aware that this hot spring lifestyle is a recent cultural behavior that the snow monkeys copied from humans.
Sometimes instinct is the Mother of Invention… Salmon whose migration route is disturbed by flooding exploit the challenge as an opportunity.
But…escape one predator, get caught by another…
Instinct – a very good thing, fundamental to animal life. Social humans don’t see it this way. Intuitive visual thinkers trust instinct; intuitive “messages” are instinct at work.