Emotions: What a Mess! / Physiology, Supernatural Mental State, Words

This drives me “nuts” – emotions ARE physiological responses to the environment; and yet, psychologists (and other sinners) continue to conceive of emotions as “mental or psychological states” and “word objects” that exist somewhere “inside” humans, like colored jelly beans in a jar, waiting to be “called on” by their “names”. Worse, other “scientists hah-hah” also continue to treat “physiology” as arising from some abstract construct or supernatural domain (NT thingie) called emotion.

Physiological Changes Associated with Emotion

https://www.ncbi.nlm.nih.gov/books/NBK10829/

The most obvious signs of emotional arousal involve changes in the activity of the visceral motor (autonomic) system (see Chapter 21). Thus, increases or decreases in heart rate, cutaneous blood flow (blushing or turning pale), piloerection, sweating, and gastrointestinal motility can all accompany various emotions. These responses are brought about by changes in activity in the sympathetic, parasympathetic, and enteric components of the visceral motor system, which govern smooth muscle, cardiac muscle, and glands throughout the body. (This is obviously real physical activity of the body, and not a magical, psychological or mental “state.”) As discussed in Chapter 21, Walter Cannon argued that intense activity of the sympathetic division of the visceral motor system prepares the animal to fully utilize metabolic and other resources in challenging or threatening situations.

Honestly? I think in the above we have a working description of the ASD / Asperger “emotional” system: NO WORDS. So-called “emotions” are a SOCIALLY GENERATED SYSTEM that utilizes language to EXTERNALLY REGULATE human “reactivity” – that is, the child learns to IDENTIFY its physiological response with the vocabulary supplied to it by parents, teachers, other adults and by overhearing human conversation, in which it is immersed from birth.

Conversely, activity of the parasympathetic division (and the enteric division) promotes a building up of metabolic reserves. Cannon further suggested that the natural opposition of the expenditure and storage of resources is reflected in a parallel opposition of the emotions associated with these different physiological states. As Cannon pointed out, “The desire for food and drink, the relish of taking them, all the pleasures of the table are naught in the presence of anger or great anxiety.” (This is the physiological state that ASD / Asperger children “exist in” when having to negotiate the “world of social typicals.” The social environment is confusing, frustrating, and alien. Asking us “how we feel” in such a circumstance will produce a “pure” physiological response: anxiety, fear, and the overwhelming desire to escape.)

Activation of the visceral motor system, particularly the sympathetic division, was long considered an all-or-nothing process. Once effective stimuli engaged the system, it was argued, a widespread discharge of all of its components ensued. More recent studies have shown that the responses of the autonomic nervous system are actually quite specific, with different patterns of activation characterizing different situations and their associated emotional states. (What is an emotional state? Emotion words are not emotions: they are language used to parse, identify and “name” the physiological arousal AS SOCIETY DICTATES TO BE ACCEPTABLE) Indeed, emotion-specific expressions produced voluntarily can elicit distinct patterns of autonomic activity. For example, if subjects are given muscle-by-muscle instructions that result in facial expressions recognizable as anger, disgust, fear, happiness, sadness, or surprise without being told which emotion they are simulating, each pattern of facial muscle activity is accompanied by specific and reproducible differences in visceral motor activity (as measured by indices such as heart rate, skin conductance, and skin temperature). Moreover, autonomic responses are strongest when the facial expressions are judged to most closely resemble actual emotional expression and are often accompanied by the subjective experience of that emotion! One interpretation of these findings is that when voluntary facial expressions are produced, signals in the brain engage not only the motor cortex but also some of the circuits that produce emotional states. Perhaps this relationship helps explain how good actors can be so convincing. Nevertheless, we are quite adept at recognizing the difference between a contrived facial expression and the spontaneous smile that accompanies a pleasant emotional state.
(Since modern humans are notoriously “gullible” to the false words, body language and manipulations of “con men” of all types, how can this claim be extended outside a controlled “experiment” in THE LAB? Having worked in advertising for 15 years, I can assure the reader that finding models and actors who could act, speak and use body language that was “fake but natural” was a constant challenge. In other words, what was needed was a person who could “fake” natural behavior. Fooling the consumer was the GOAL!)

This evidence, along with many other observations, indicates that one source of emotion is sensory drive from muscles and internal organs. This input forms the sensory limb of reflex circuitry that allows rapid physiological changes in response to altered conditions. However, physiological responses can also be elicited by complex and idiosyncratic stimuli mediated by the forebrain. For example, an anticipated tryst with a lover, a suspenseful episode in a novel or film, stirring patriotic or religious music, or dishonest accusations can all lead to autonomic activation and strongly felt emotions. (Are these “events, anticipated or actualized”, not social constructs that are learned? Would any child grow up to “behave patriotically” if he or she had not been taught to do this by immersion in the total social environment, which “indoctrinates” children in the “proper emotions” of the culture?) The neural activity evoked by such complex stimuli is relayed from the forebrain to autonomic and somatic motor nuclei via the hypothalamus and brainstem reticular formation, the major structures that coordinate the expression of emotional behavior (see next section). (Is exploitation of this “neural activity” not the “pathway” to training social humans to “obey” the social rules?)

In summary, emotion and motor behavior are inextricably linked. (Why would anyone think that they are not? Emotion is merely the language used to manipulate, interpret and communicate the physiology.) As William James put it more than a century ago:

What kind of an emotion of fear would be left if the feeling neither of quickened heart-beats nor of shallow breathing, neither of trembling lips nor of weakened limbs, neither of goose-flesh nor of visceral stirrings, were present, it is quite impossible for me to think … I say that for us emotion dissociated from all bodily feeling is inconceivable.

William James, 1893 (Psychology: p. 379.)

NEXT: The representation of “emotions” as “thingies” that can be experienced and eaten! Are we to believe that 34,000 distinct “emotion objects” exist “in nature / in humans”, or are these “inventions” of social language?

Plutchik’s Wheel of Emotions: What is it and How to Use it in Counseling?

Can you guess how many emotions a human can experience?

The answer might shock you – it’s around 34,000.

With so many, how can one navigate the turbulent waters of emotions, their different intensities and compositions, without getting lost?

The answer – an emotion wheel.

Through years of studying emotions, Dr. Robert Plutchik, an American psychologist, proposed that there are eight primary emotions that serve as the foundation for all others: joy, sadness, acceptance, disgust, fear, anger, surprise and anticipation. (Pollack, 2016)

This means that, while it’s impossible to fully understand all 34,000 distinguishable emotions (what is referred to is merely “vocabulary” that humans have come up with, not emotion thingies that exist “somewhere”), learning how to accurately identify how each of the primary emotions is expressed within you can be empowering. It’s especially useful for moments of intense feelings, when the mind is unable to remain objective as it operates from its older compartments that deal with the fight or flight response. (Watkins, 2014) (This refers to the “pop-science” theory of the additive “triune brain” (lizard brain, etc.), which is utter fantasy)


NEXT: Some Definitions of Emotions / Rather confusing, conflicting, unsatisfying, nonspecific descriptions – an indication that we’ve entered the supernatural realm of word concepts. Aye, yai, yai!

From introductory psychology texts

Sternberg, R. In Search of the Human Mind, 2nd Ed. Harcourt Brace, 1998, p. 542: “An emotion is a feeling comprising physiological and behavioral (and possibly cognitive) reactions to internal and external events.”

Nairne, J. S. Psychology: The Adaptive Mind, 2nd Ed. Wadsworth, 2000, p. 444: “. . . an emotion is a complex psychological event that involves a mixture of reactions: (1) a physiological response (usually arousal), (2) an expressive reaction (distinctive facial expression, body posture, or vocalization), and (3) some kind of subjective experience (internal thoughts and feelings).”

From a book in which many researchers in the field of emotion discuss their views of some basic issues in the study of emotion. (Ekman, P., & Davidson, R. J. The Nature of Emotions: Fundamental Questions. Oxford, 1994)

Panksepp, Jaak, p. 86. Compared to moods, “emotions reflect the intense arousal of brain systems that strongly encourage the organism to act impulsively.”

Clore, Gerald L., p. 184. “. . . emotion terms refer to internal mental states that are primarily focused on affect (where ‘affect’ simply refers to the perceived goodness or badness of something).” [see Clore & Ortony (1988) in V. Hamilton et al., Cognitive Science Perspectives on Emotion and Motivation, 367-398]

Clore, Gerald L., pp. 285-6. “If there are necessary features of emotions, feeling is a good candidate. Of all the features that emotions have in common, feeling seems the least dispensable. It is perfectly reasonable to say about one’s anger, for example, ‘I was angry, but I didn’t do anything,’ but it would be odd to say ‘I was angry, but I didn’t feel anything.’”

Ellsworth, Phoebe, p. 192. “. . . the process of emotion . . . is initiated when one’s attention is captured by some discrepancy or change. When this happens, one’s state is different, physiologically and psychologically, from what it was before. This might be called a ‘state of preparedness’ for an emotion . . . The process almost always begins before the name [of the emotion is known] and almost always continues after it.”

Averill, James R., pp. 265-6. “The concept of emotion . . . refer[s] to (1) emotional syndromes, (2) emotional states, and (3) emotional reactions. An emotional syndrome is what we mean when we speak of anger, grief, fear, love and so on in the abstract. . . . For example, the syndrome of anger both describes and prescribes what a person may (or should) do when angry. An emotional state is a relatively short-term, reversible (episodic) disposition to respond in a manner representative of the corresponding emotional syndrome. . . . Finally, an emotional reaction is the actual (and highly variable) set of responses manifested by an individual when in an emotional state: . . . facial expressions, physiological changes, overt behavior and subjective experience.”

LeDoux, Joseph E., p. 291. “In my view, ‘emotions’ are affectively charged, subjectively experienced states of awareness.”


Personal thoughts on anxiety in ASD / Asperger Types

My quest is to “untangle” the bizarre mess that “researchers” have created around ASD / Asperger’s symptoms and the “co-morbidity” of anxiety.

How difficult a question is this?

Is anxiety a “big problem” for individuals diagnosed with Asperger’s? If yes, then is it commonly “debilitating” in that it prevents the person from engaging in successful employment, satisfying relationships, and “freedom” to engage the environment by participating in activities that are important to their “happiness”?

And yet, what I encounter are articles, papers, and studies that focus on the argument over whether or not anxiety is part of ASD / Asperger’s, the diagnosis, or a co-morbid condition. Anxiety, for “experts”, has taken on the “power” of the Gordian knot! Honestly? This is the typical “point” at which an Asperger “loses it” and wants to simply declare that neurotypicals are idiots… but, I’m on a mission to help myself and my co-Asperger types to survive in social reality. We’re not going to find logical reality-based “answers” in psychology or even in neuroscience… we are on our own.

So let’s look at anxiety, another of those words whose meaning and utility have been destroyed by neurotypical addiction to “over-generalization” and fear of specificity!

Over the past few months, I have experienced an increase in “sudden onset” panic attacks: it’s not as if I can’t assign a probable cause. The facts of my existence (age, health, financial problems) are enough to fill up and overflow whatever limit of tolerance I can summon up each day. Severe (and sometimes debilitating) anxiety has been integral to my existence since at least age 3, which is the time of my first “remembered” meltdown. I can honestly say that, if it were not for “anxiety” manifesting as sudden meltdowns, panic attacks, “background radiation” and other physical reactions (who cares what they are labeled?), my life would have been far easier, with much more of my time and energy being available to “invest” in activities of choice, rather than surviving the unpredictable disruptions that I’ve had to work around. The fact that I’ve had an interesting, rich and “novel” existence is thanks to maximizing the stable intervals between anxiety, distress, and exhaustion – and avoiding alien neurotypical social expectations and toxic environments as much as possible.

Here is a simple formula that I have followed:

Life among NTs is HELL. I deserve to “reserve” as much time as possible for my intrinsically satisfying interests; for pursuit of knowledge, experiences and activities that enable me to become as “authentic” to “whoever and whatever I am” as possible.

This realization came long, long before diagnosis, and I had to accept that a distinct possibility was that there was no “authentic me” and, if there was, it might be a scary discovery. But ever-present Asperger curiosity and dogged persistence would accept no other journey. It is important to realize that, Asperger or not, this type of “classic quest” has been going on in human lives for thousands of years, and for the most part has been in defiance of social disapproval (often regarded as a serious threat) by societies world-wide, which impose on individuals the carefully constructed catalogue of roles and biographies handed down from “on high”.

The point is that the choice to “go my own way” was “asking for it” – IT being endless shit (and the accompanying anxiety) dumped on human beings existing on all levels of the Social Pyramid, but especially directed toward any group or individual who is judged to be “antisocial” or inferior. I have encountered conflicts large and small, and was exposed to “human behavior” in ways I couldn’t have imagined.

What I have confronted in “normdom” is the strange orientation of “experts” who ignore the contribution of environmental sources to hyperarousal, a physiological reaction to conditions in the environment. (Note: Fear, anxiety, and all the “emotion-words” are merely the conscious verbal expressions that infants and children ARE TAUGHT to utilize in social communication, and for social purposes.) These words are not the physiological experience.

A feedback “loop” exists between the environment and the human sensory system. The physiology of fear and anxiety is an ancient “alarm system” that promotes survival, but in the human behavior industry, anxiety has been “segregated” and classified as a pathology – an utterly bizarre, irrational, and dangerous idea. The result is that “normal” human reactions and behavior, provided by millions of years of evolutionary processes, and which PROTECT the individual, are now “forbidden” as “defects” in the organism itself. Social involvement and culpability are “denied” – responsibility for abuse of humans and animals by social activity is erased!

Social indoctrination: the use of media, advertising, marketing, political BS and constant “messaging” to present “protective evolutionary alerts and reactions” (awareness of danger; physiological discomfort, stress and illness) as YOUR FAULT. You have a defective brain. It’s a lie.

Due to an entrenched system of social hierarchy (inequality), social humans continue to be determined to “wipe out” the human animal that evolved in nature, and replace it with a domesticated / manufactured / altered Homo sapiens that, just like domesticated animals, will survive and reproduce in the most extreme and abusive conditions.

This “domestic” hypersocial human is today represented as the pinnacle of evolution.

Human predators (the 1% who occupy “power positions” at the top of the pyramid) merely want to ensure that the status quo is maintained; that is, the continued exploitation of the “observation” that domesticated humans will adapt to any abuse – and still serve the hierarchy. This “idea” also allows for the unconscionable torture and abuse of animals.

The “expert” assumption is that a normal, typical, socially desirable human, as defined by the “human behavior” priesthood, can endure any type and degree of torture, stress, and abuse, whether chronic or episodic, and come out of the experience UNCHANGED: undamaged and exploitable. Any variation from this behavioral prescription is proof of a person’s deviance, inferiority and weakness.

The most blatant example of this “attitude” is the epidemic of PTSD and suicide in soldiers returning from HELL in combat. Not that many wars ago, militaries literally “executed” soldiers suffering from this “weakness, cowardice and treason” on the battlefield, or “exiled” them to asylums as subhuman and defective “mistakes”. Now we ship soldiers home who have suffered extreme trauma and “treat them” so badly that suicide has become the only relief for many. Having the afflicted remove him or herself, rather than “murdering” them, is considered to be compassionate progress.

And my point is about relief: I concluded long ago that chronic and episodic “hyperarousal” must be treated immediately with whatever works; in my experience, that means medication. Despite limiting one’s “exposure” to toxic social environments, one cannot escape the damage done to human health and sanity.

Some relief can be had by employing activities and adjustments in thinking patterns that often (usually by trial and error) can mitigate physical damage. But what we must remember is that anxiety, fear, distress and the “urge to flee” are healthy responses to horrible human environments. How many mass migrations of “refugees” are there at any time, with thousands, and even millions, of people seeking “new places” to live a life that is proper to a healthy human?


Exciting Paper / Enhanced Perception (Autism)

Royal Society Publishing
Note: I think this “pattern-structure perception” applies also to Asperger individuals who are visual sensory thinkers, but proficient in verbal language. That is, it’s not an “either-or” situation in actual brains. (This “either-or” insistence is NT projection of their black and white, oppositional, competitive obsession.) Specific brains can and do process sensory info and utilize verbal language; these are not “matter-antimatter” interactions, as NTs imagine.

Enhanced perception in savant syndrome: patterns, structure and creativity

Laurent Mottron, Michelle Dawson, Isabelle Soulières

Full paper: http://rstb.royalsocietypublishing.org/content/364/1522/1385.long

5. Savant creativity: a different relationship to structure

Savant performance cannot be reduced to uniquely efficient rote memory skills (see Miller 1999, for a review), and encompasses not only the ability for strict recall, requiring pattern completion, but also the ability to produce creative, new material within the constraints of a previously integrated structure, i.e. the process of pattern generation. This creative, flexible, albeit structure-guided, aspect of savant productions has been clearly described (e.g. Pring 2008). It is analogous to what Miller (1999, p. 33) reported on error analyses in musical memory: ‘savants were more likely to impose structure in their renditions of musical fragments when it was absent in the original, producing renditions that, if anything, were less ‘literal’ than those of the comparison participants’. Pattern generation is also intrinsic to the account provided by Waterhouse (1988).

The question of how to produce creative results using perceptual mechanisms, including those considered low-level in non-autistics, is at the very centre of the debate on the relationship between the nature of the human factor referred to as intelligence and the specific cognitive and physiological mechanisms of savant syndrome (maths or memory, O’Connor & Hermelin 1984; rules or regularities, Hermelin & O’Connor 1986; implicit or explicit, O’Connor 1989; rhyme or reason, Nettlebeck 1999). It also echoes the questions raised by recent evidence of major discrepancies in the measurement of autistic intelligence according to the instruments used (Dawson et al. 2007).

A combination of multiple pattern completions at various scales could explain how a perceptual mechanism, apparently unable to produce novelty and abstraction in non-autistics, contributes in a unique way to autistic creativity. The atypically independent cognitive processes characteristic of autism allow for the parallel, non-strategic integration of patterns across multiple levels and scales, without information being lost owing to the automatic hierarchies governing information processing and limiting the role of perception in non-autistics. (Remember: in visual perception and memory the image is the content; therefore it is dense with detail and connections – “patterns”. NTs “fill in” the gaps in their perception with “magical / supernatural” explanations for phenomena)

An interest in internal structure may also explain a specific, and new, interest for domains never before encountered. For example, a savant artist newly presented with the structure of visual tones learned this technique more rapidly and proficiently than typical students (Pring et al. 1997). In addition, the initial choice of domain of so-called restricted interest demonstrates the versatility of the autistic brain, in the sense that it represents spontaneous orientation towards, and mastering of, a new domain without external prompts or instruction. How many such domains are chosen would then depend on the free availability of the kinds, amounts and arrangements of information which define the structure of the domain, according to aspects of information that autistics process well. Generalization also occurs under these circumstances, for example, to materials that share with the initial material similar formal properties, i.e. those that allow ‘veridical mapping’ with the existing ability. In Pring & Hermelin (2002), a savant calendar calculator with absolute pitch displayed initial facility with basic number–letter associations, and was able to quickly learn new associations and provide novel manipulations of these letter–number correspondences.

The apparently ‘restricted’ aspects of restricted interests are at least partly related to pattern detection, in that there are positive emotions in the presence of material presenting a high level of internal structure, and a seeking out of material related in form and structure to what has already been encountered and memorized. Limitation of generalization may also be explained by the constraints inherent in the role of similarity in pattern detection, which would prevent an extension of isomorphisms to classes of elements that are excessively dissimilar to those composing the initial form. In any case, there is no reason why autistic perceptual experts would be any less firm, diligent or enthusiastic in their specific preferences for materials and domains than their non-autistic expert counterparts. However, it must also be acknowledged that the information autistics require in order to choose and generalize any given interest is likely to be atypical in many respects (in that this may not be the information that non-autistics would require), and may not be freely or at all available. In addition, the atypical ways in which autistics and savants learn well have attracted little interest and are as yet poorly studied and understood, such that we remain ignorant as to the best ways in which to teach these individuals (Dawson et al. 2008). Therefore, a failure to provide autistics or savants with the kinds of information and opportunities from which they can learn well must also be considered as explaining apparent limitations in the interests and abilities of savant and non-savant autistics (see also Heaton 2009).

6. Structure, emotion and expertise

While reliable information about the earliest development or manifestations of savant abilities in an individual is very sparse, biographies of some savants suggest a sequence starting with uninstructed, sometimes apparently passive, but intent and attentive (e.g. Horwitz et al. 1965; Selfe 1977; Sacks 1995) orientation to and study of their materials of interest. In keeping with our proposal about how savants perceive and integrate patterns, materials that spontaneously attract interest may be at any scale or level within a structure, including those that appear unsuitable for the individual’s apparent developmental level. For example, Paul, a 4-year-old autistic boy (with a presumed mental age of 17 months), who was found to have outstanding literacy, exceeding that of typical 9-year olds, intently studied newspapers starting before his second birthday (Atkin & Lorch 2006). It should not be surprising that in savants, the consistent or reliable availability of structured or formatted information and materials can influence the extent of the resulting ability. For example, the types of words easily memorized by NM, proper names, in addition to being redundant in Quebec, share a highly similar structural presentation in the context where NM learned them, including phone books, obituaries and grave markers (Mottron et al. 1996, 1998). However, a fuller account of why there is the initial attraction to and preference for materials with a high degree of intrinsic organization, and for specific kinds of such structured materials in any particular individual, is necessary.

Positive emotions are reported in connection with the performance of savant abilities (e.g. Selfe 1977; Sloboda et al. 1985; Miller 1989). Therefore, it is possible that a chance encounter with structured material gives birth to an autistic special interest, which then serves as the emotional anchor of the codes involved in savant abilities, associated with both positive emotions and a growing behavioural orientation towards similar patterns (Mercier et al. 2000). Brain structures involved in the processing of emotional content can be activated during attention to objects of special interest in autistics (Grelotti et al. 2005). So-called repetitive play in autism, associated with positive emotions, consists of grouping objects or information encompassing, as in the codes described above, series of similar or equivalent attributes. In addition, in our clinical experience, we observe that repetitive autistic movements are often associated with positive emotions.

One possibility worth further investigation would be that patterns in structured materials, in themselves, may trigger positive emotions in autism and that arbitrary alterations to these patterns may produce negative emotions (Yes! Stop f—ing with our interests!)—a cognitive account of the insistence on sameness with which autistics have been characterized from the outset (Kanner 1943). Individuals who excel in detecting, integrating and completing patterns at multiple levels and scales, as we propose is the case with savants, would have a commensurate sensitivity to anomalies within the full array of perceived similarities and regularities (e.g. O’Connell 1974). In Hermelin & O’Connor (1990), an autistic savant (with apparently very limited language skills) known for his numerical abilities, including factorization, but who had never been asked to identify prime numbers, instantly expressed—without words—his perfect understanding of this concept when first presented with a prime number. The superior ability of autistics to detect anomalies—departures from pattern or similarity—has accordingly been reported (e.g. Plaisted et al. 1998; Baron-Cohen 2005).

Overexposure to material highly loaded with internal structure plausibly favours implicit learning and storage of information units based on their perceptual similarity, and more generally, of expertise effects. Savants benefit from expertise effects to the same extent as non-autistic experts (Miller 1999). Among expertise effects is the recognition of units at a more specific level compared with non-experts and the suppression of negative interference effects among members of the same category. Reduced interference has been demonstrated between lists of proper names in a savant memorizer (Mottron et al. 1998). Another expertise effect is the ‘frequency effect’, the relative ease with which memorization and manipulation of units, to which an individual has been massively exposed, can be accomplished (Segui et al. 1982). For example, Heavey et al. (1999) found that calendar calculators recalled more calendar-related items than controls matched for age, verbal IQ and diagnosis, but exhibited unremarkable short- or long-term recall of more general material unrelated to calendars. These two aspects of expertise would favour the emergence and the stabilization of macrounits (e.g. written code in a specific language, or set of pitches arranged by harmonic rules), which are perceptually the spatio-temporal conjunctions of recognizable patterns related by isomorphisms. Conversely, pattern detection may be unremarkable or even diminished in the case of arbitrarily presented unfamiliar material (Frith 1970).

Identifying savant syndrome as aptitude, material availability and expertise, combined with an autistic brain characterized by EPF, is also informative on the relationship between savant syndrome and peaks of ability in non-savant autistics. Perceptual peaks are largely measured using materials with which the participant has not been trained, whereas savant syndrome encompasses the effects of a life spent pursuing the processing of specific information and materials. We therefore forward the possibility that the range and extent of autistic abilities may be revealed only following access to specific kinds, quantities and arrangements of information. However, we do not expect savant abilities to differ from non-savant autistic peaks of ability in their basic mechanisms. According to this understanding of differences between savant and non-savant autistics, the fact that not all autistics are savants is no more surprising than the fact that not all non-autistics are experts.

NTs fill in the gaps in their perception of the environment with magical beliefs; magical thinking is a developmental stage in young children.

What psychologists say: Stage by Stage, age 3 – 4

  • Threes and fours often use magical thinking to explain causes of events.
  • Preschoolers sometimes assign their own thinking as a reason for occurrences that are actually out of their control.
  • Three- and 4-year-olds believe, with their powers of magical thinking, that they can change reality into anything they wish.

ASD / AS Intelligence Revisited / Guess what? We’re intelligent. DUH!

PLoS One. 2011; 6(9): e25372.
Published online 2011 Sep 28. doi:  10.1371/journal.pone.0025372
PMID: 21991394

The Level and Nature of Autistic Intelligence II: What about Asperger Syndrome?

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3182210/

Isabelle Soulières, Michelle Dawson, Morton Ann Gernsbacher, and Laurent Mottron / Efthimios M. C. Skoulakis, Editor

Introduction

Individuals on the autistic spectrum are currently identified according to overt atypicalities in socio-communicative interactions, focused interests and repetitive behaviors [1]. More fundamentally, individuals on the autistic spectrum are characterized by atypical information processing across domains (social, non-social, language) and modalities (auditory, visual), raising the question of how best to assess and understand these individuals’ intellectual abilities. Early descriptions [2], [3] and quantifications (e.g. [4]) of their intelligence emphasized the distinctive unevenness of their abilities. While their unusual profile of performance on popular intelligence test batteries remains a durable empirical finding [5], it is eclipsed by a wide range of speculative deficit-based interpretations. (based on socio-cultural arrogance) Findings of strong performance on specific tests have been regarded as aberrant islets of ability arising from an array of speculated deficits (e.g., “weak central coherence”; [6]) and as incompatible with genuine human intelligence.

For example, Hobson ([7], p. 211) concluded that regardless of strong measured abilities in some areas, autistics lack “both the grounding and the mental flexibility for intelligent thought.”

Thus, there is a long-standing assumption that a vast majority of autistic individuals are intellectually impaired. In recent years, this assumption has been challenged by investigations that exploit two divergent approaches—represented by Wechsler scales of intelligence and Raven’s Progressive Matrices—to measuring human intelligence [8]. Wechsler scales estimate IQ through batteries of ten or more different subtests, each of which involves different specific oral instructions and tests different specific skills. The subtests are chosen to produce scores that, for the typical population, are correlated and combine to reflect a general underlying ability. Advantages of this approach include the availability of subtest profiles of specific skill strengths and weaknesses, index scores combining related subtests, and dichotomized Performance versus Verbal IQ scores (PIQ vs. VIQ), as well as a Full-Scale IQ (FSIQ) score. However, the range of specific skills assayed by Wechsler scales is limited (e.g., reading abilities are not included), and atypical individuals who lack specific skills (e.g., typical speech processing or speech production) or experiences (e.g., typical range of interests) may produce scores that do not reflect those individuals’ general intelligence.

In contrast, Raven’s Progressive Matrices (RPM) is a single self-paced test that minimizes spoken instruction and obviates speech production or typicality of experiences [9]. The format is a matrix of geometric designs in which the final missing piece must be selected from among an array of displayed choices. Sixty items are divided into five sets that increase progressively in difficulty and complexity, from simple figural to complex analytic items. RPM is regarded both as the most complex and general single test of intelligence [10], [11] and as the best marker for fluid intelligence, which in turn encompasses reasoning and novel problem-solving abilities [8], [12]. RPM tests flexible co-ordination of attentional control, working memory, rule inference and integration, high-level abstraction, and goal-hierarchy management [13]. These abilities, as well as fluid intelligence itself, have been proposed as areas of deficit in autistic persons, particularly when demands increase in complexity [16], [17], [18], [19].

Against these assumptions, we reported that autistic children and adults, with Wechsler FSIQ ranging from 40 to 125, score an average 30 percentile points higher on RPM than on Wechsler scales, while typical individuals do not display this discrepancy, as shown in Figure 1 [20]. RPM item difficulty, as reflected in per-item error rate, was highly correlated between the autistic and non-autistic children (r = .96). An RPM advantage for autistic individuals has been reported in diverse samples. Bolte et al. [21] tested autistic, other atypical (non-autism diagnoses), and typical participants who varied widely in their age and the version of Wechsler and RPM they were administered; autistics with Wechsler FSIQ under 85 were unique in having a relative advantage on RPM. Charman et al. [22] reported significantly higher RPM than Wechsler scores (FSIQ and PIQ) for a large population-based sample of school-aged autistic spectrum children. In Morsanyi and Holyoak [23], autistic children, who were matched with non-autistic controls on two Wechsler subtests (Block Design and Vocabulary), displayed a numeric, though not significant, advantage within the first set of Raven’s Advanced Progressive Matrices items.

The nature of autistic intelligence was also investigated in an fMRI study [24]. Autistics and non-autistics matched on Wechsler FSIQ were equally accurate in solving the 60 RPM items presented in random order, but autistics performed dramatically faster than their controls. This advantage, which was not found in a simple perceptual control task, ranged from 23% for easier RPM items to 42% for complex analytic RPM items.

Autistics’ RPM task performance was associated with greater recruitment of extrastriate areas and lesser recruitment of lateral prefrontal and medial posterior parietal cortex, illustrating their hallmark enhanced perception [25].

One replicated manifestation of autistics’ enhanced perception is superior performance on the Wechsler Block Design subtest, suggesting a visuospatial peak of ability [26]. Even when autistics’ scores on all other Wechsler subtests fall below their RPM scores, their Block Design and RPM scores lie at an equivalent level [20].

Thus, enhanced occipital activity, superior behavioral performance on RPM, and visuospatial peaks co-occur in individuals whose specific diagnosis is autism, suggesting an increased and more autonomous role of perception in autistic reasoning and intelligence [24].

But what about individuals whose specific diagnosis is Asperger syndrome? In Dawson et al.’s previous investigations of autistics’ RPM performance, Asperger individuals were excluded. Asperger syndrome is a relatively low-prevalence [27] autistic spectrum diagnosis characterized by intelligence scores within the normal range (non-Asperger autistics may have IQs in any range). Two main distinctions between the specific diagnosis of autism and Asperger syndrome are relevant to the question of intelligence in the autistic spectrum. First, while their verbal and nonverbal communication is not necessarily typical across development, Asperger individuals do not, by diagnostic definition, exhibit characteristic autistic delays and anomalies in spoken language. While both autistic and Asperger individuals produce an uneven profile on Wechsler subtests, Asperger individuals’ main strengths, in contrast with those of autistics (see [20]), are usually seen in verbal subtests (count me in) (as illustrated in Figure 2; see also [28]). Although RPM is often deemed a “nonverbal” test of intelligence, in practice typical individuals often rely on verbal abilities to perform most RPM items. (NOTE: I have commented on this in another post, regarding the pre-test tutoring available to students, during which the “rules of the game” are explained. Is this “cheating”, in that “fluid intelligence”, not learned procedures, is supposedly being measured?)

Second, at a group level, Asperger individuals do not display the autistic visuospatial peak in Wechsler scales; rather, their Block Design subtest performance tends to be unremarkably equivalent to their FSIQ (see Figure 2 and also [32]). The question of whether Asperger individuals display the autistic advantage on RPM over Wechsler is thus accompanied by the possibility that the Asperger subgroup represents an avenue for further investigating the nature of this discrepancy. (I am quite baffled at times by my “native” Asperger experience, which is overwhelmingly visual-sensory, and yet verbal language is my “go-to tool” for translating that experience into “acceptable” form. Very practical! Why does this “arrangement” seem to occur in Asperger’s?)

Our goal was to investigate whether the autistic advantage on RPM is also characteristic of Asperger syndrome and, further, whether RPM performance reveals a fundamental property of intelligence across the autistic spectrum. If the mechanism underlying autistics’ advantage on RPM is limited to visuospatial peaks or to language difficulties disproportionately hampering Wechsler performance, then the advantage should not be found in Asperger individuals. Indeed, as predicted by Bolte et al. [21], Asperger individuals should perform even better on Wechsler scales than on RPM. If instead the underlying mechanism is more general and versatile, then Asperger individuals should demonstrate at least some advantage on RPM. Preliminary findings have suggested this to be the case. In one recent study, Asperger children (age 6–12) obtained significantly higher raw scores on RPM than did typical children matched on age and Wechsler performance [33].

For all the “poo-bah” and graphs, go to original paper (and related papers):  https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3182210/

Discussion

Asperger individuals differ from autistics in their early speech development, in having Wechsler scores in the normal range, and in being less likely to be characterized by visuospatial peaks. In this study, Asperger individuals presented with some significant advantages, and no disadvantages, on RPM compared to Wechsler FSIQ, PIQ, and VIQ. Asperger adults demonstrated a significant advantage, relative to their controls, in their RPM scores over their Wechsler FSIQ and PIQ scores, while for Asperger children this advantage was found for their PIQ scores. For both Asperger adults and children and strikingly similar to autistics in a previous study [20], their best Wechsler performances were similar in level to, and therefore plausibly representative of, their general intelligence as measured by RPM.

We have proposed that autistics’ cognitive processes function in an atypically independent way, leading to “parallel, non-strategic integration of patterns across multiple levels and scales” [36] and to versatility in cognitive processing [26].

Such “independent thinking” suggests ways in which apparently specific or isolated abilities can co-exist with atypical but flexible, creative, and complex achievements. Across a wide range of tasks, including, and perhaps especially, complex tasks, autistics do not experience to the same extent the typical loss or distortion of information that characterizes non-autistics’ mandatory hierarchies of processing.

Therefore, autistics can maintain more veridical representations (e.g. representations closer to the actual information present in the environment) when performing high-level, complex tasks. The current results suggest that such a mechanism is also present in Asperger syndrome and therefore represents a commonality across the autistic spectrum. Given the opportunity, different subgroups of autistics may advantageously apply more independent thinking to different available aspects of information: verbal information, by persons whose specific diagnosis is Asperger’s, and perceptual information, by persons whose specific diagnosis is autism.

One could alternatively suggest that the construct measured by RPM is relative and thus would reflect processes other than intelligence in autistic spectrum individuals. However, a very high item difficulty correlation is observed between autistic individuals and typical controls, as well as between Asperger individuals and typical controls. As previously noted [20], these high correlations indicate that RPM is measuring the same construct in autistics and non-autistics, a finding now extended to Asperger syndrome.
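The “same construct” argument rests on a simple statistic: for each RPM item, compute the fraction of each group that got it wrong, then correlate the two per-item error-rate profiles. A high Pearson r means the items that are hard for one group are hard for the other in the same order. Here is a minimal sketch of that computation, using invented error rates for ten illustrative items (not data from the study) and assuming NumPy is available:

```python
import numpy as np

# Hypothetical per-item error rates: the fraction of each group answering
# a given RPM item incorrectly, for 10 illustrative items. These numbers
# are invented for demonstration, not taken from the paper.
autistic_err = np.array([0.05, 0.10, 0.12, 0.20, 0.35, 0.40, 0.55, 0.60, 0.75, 0.90])
control_err = np.array([0.04, 0.12, 0.15, 0.22, 0.30, 0.45, 0.50, 0.65, 0.80, 0.88])

# Pearson correlation between the two groups' item-difficulty profiles.
# A value near 1 indicates the test orders item difficulty the same way
# in both groups, i.e. it plausibly measures the same construct.
r = np.corrcoef(autistic_err, control_err)[0, 1]
print(f"item-difficulty correlation r = {r:.2f}")
```

With profiles as similar as these, r comes out close to 1, which is the pattern the paper reports (r = .96) for autistic versus non-autistic children.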

Therefore, dismissing these RPM findings as not reflecting genuine human intelligence in autistic and Asperger individuals would have the same effect for non-autistic individuals.

The discrepancies revealed here between alternative measures of intelligence in a subgroup of individuals underline the ambiguous, non-monolithic definition of intelligence. Undoubtedly, autistics’ intelligence is atypical and may not be as easily assessed and revealed with standard instruments. But given the essential and unique role that RPM has long held in defining general and fluid intelligence (e.g., [37]), we again suggest that both the level and nature of autistic intelligence have been underestimated.

Thus, while there has been a long tradition of pursuing speculated autistic deficits, it is important to consider the possibility of strength-based mechanisms as underlying autistics’ atypical but genuine intelligence.

What the Hell is Concrete vs. Abstract Thinking? / Does anyone actually know?

https://plato.stanford.edu/entries/abstract-objects/

This “post question” is vital to untangling much of what is said by “experts” about the ASD / Asperger “way of thinking”. One reads that we are “socially stupid” because we think concretely; take language literally; fail to “comprehend” the gloriously sophisticated and complex use of “social language” (Have a nice day! Those jeans make you look skinny!). The assertion is that concrete thinking ranks as a “lower level” type of thinking in the grandiose pyramidal system of “human social development”, which has become the only development that “counts” toward being a “true” Homo sapiens. Other “experts” claim that we are “good at” abstract thinking: math and science and engineering; but this assumes that these activities are exclusively the product of abstract thinking! Far from it.

We have to start somewhere! 

Abstract Objects

First published Thu Jul 19, 2001; substantive revision Mon Feb 13, 2017

It is widely supposed that every entity falls into one of two categories: Some are concrete; the rest abstract. The distinction is supposed to be of fundamental significance for metaphysics and epistemology. This article surveys a number of recent attempts to say how it should be drawn.

Here we are again: this “supposed distinction” is everywhere – but I find myself muttering, as I read various articles and papers, “What the Hell is this person talking about?” when they refer to abstract thinking. I “get” formal thinking in math and other systems; the need to discover, set up, or find an equation or formula that is “accurate” for all cases; a generalization that “matches” certain general conditions and provides for solutions and predictions. But the rest of “reality”?

What the Hell are people talking about? Human language itself seems to be a big part of the problem – this obsessional necessity to “chop up” a smooth experiential existence into a word salad. Yes, this is my Asperger confusion and frustration with “verbal language” – like using a chainsaw to carve butter. 

from: 

The Stanford Encyclopedia of Philosophy organizes scholars from around the world in philosophy and related disciplines to create and maintain an up-to-date reference work. Principal Editor: Edward N. Zalta

1. Introduction

The abstract/concrete distinction has a curious status in contemporary philosophy. It is widely agreed that the distinction is of fundamental importance. And yet there is no standard account of how it should be drawn. There is a great deal of agreement about how to classify certain paradigm cases. Thus it is universally acknowledged that numbers and the other objects of pure mathematics are abstract (if they exist), whereas rocks and trees and human beings are concrete. Some clear cases of abstracta are classes, propositions, concepts, the letter ‘A’, and Dante’s Inferno. Some clear cases of concreta are stars, protons, electromagnetic fields, the chalk tokens of the letter ‘A’ written on a certain blackboard, and James Joyce’s copy of Dante’s Inferno.

The challenge is to say what underlies this dichotomy, either by defining the terms explicitly, or by embedding them in a theory that makes their connections to other important categories more explicit. In the absence of such an account, the philosophical significance of the contrast remains uncertain. We may know how to classify things as abstract or concrete by appeal to intuition. But in the absence of theoretical articulation, it will be hard to know what (if anything) hangs on the classification.

Well, I’m not alone in my confusion!

It should be stressed that there need not be one single “correct” way of explaining the abstract/concrete distinction. Any plausible account will classify the paradigm cases in the standard way, and any interesting account will draw a clear and philosophically significant line in the domain of objects. Yet there may be many equally interesting ways of accomplishing these two goals, and if we find ourselves with two or more accounts that do the job rather well, there will be no point in asking which corresponds to the real abstract/concrete distinction. This illustrates a general point: when technical terminology is introduced in philosophy by means of examples but without explicit definition or theoretical elaboration, the resulting vocabulary is often vague or indeterminate in reference. In such cases, it is normally pointless to seek a single correct account. A philosopher may find himself asking questions like, ‘What is idealism?’ or ‘What is a substance?’ and treating these questions as difficult questions about the underlying nature of a certain determinate philosophical category. A better approach is to recognize that in many cases of this sort, we simply have not made up our minds about how the term is to be understood, and that what we seek is not a precise account of what this term already means, but rather a proposal for how it might fruitfully be used in the future. Anyone who believes that something in the vicinity of the abstract/concrete distinction matters for philosophy would be well advised to approach the project of explaining the distinction with this in mind.

2. Historical Remarks

The contemporary distinction between abstract and concrete is not an ancient one. Indeed, there is a strong case for the view that despite occasional anticipations, it played no significant role in philosophy before the 20th century. The modern distinction bears some resemblance to Plato’s distinction between Forms and Sensibles. But Plato’s Forms were supposed to be causes par excellence, whereas abstract objects are generally supposed to be causally inert in every sense. The original ‘abstract’/‘concrete’ distinction was a distinction among words or terms. Traditional grammar distinguishes the abstract noun ‘whiteness’ from the concrete noun ‘white’ without implying that this linguistic contrast corresponds to a metaphysical distinction in what these words stand for. In the 17th century this grammatical distinction was transposed to the domain of ideas. Locke speaks of the general idea of a triangle which is “neither Oblique nor Rectangle, neither Equilateral, Equicrural nor Scalenon [Scalene]; but all and none of these at once,” remarking that even this idea is not among the most “abstract, comprehensive and difficult” (Essay IV.vii.9). Locke’s conception of an abstract idea as one that is formed from concrete ideas by the omission of distinguishing detail was immediately rejected by Berkeley and then by Hume. But even for Locke there was no suggestion that the distinction between abstract ideas and concrete or particular ideas corresponds to a distinction among objects. “It is plain, …” Locke writes, “that General and Universal, belong not to the real existence of things; but are Inventions and Creatures of the Understanding, made by it for its own use, and concern only signs, whether Words or Ideas” (III.iii.11). (I agree)

The abstract/concrete distinction in its modern form is meant to mark a line in the domain of objects or entities. So conceived, the distinction becomes a central focus for philosophical discussion only in the 20th century. The origins of this development are obscure, but one crucial factor appears to have been the breakdown of the allegedly exhaustive distinction between the mental and the material that had formed the main division for ontologically minded philosophers since Descartes. One signal event in this development is Frege’s insistence that the objectivity and aprioricity of the truths of mathematics entail that numbers are neither material beings nor ideas in the mind. If numbers were material things (or properties of material things), the laws of arithmetic would have the status of empirical generalizations. If numbers were ideas in the mind, then the same difficulty would arise, as would countless others. (Whose mind contains the number 17? Is there one 17 in your mind and another in mine? In that case, the appearance of a common mathematical subject matter is an illusion.) In The Foundations of Arithmetic (1884), Frege concludes that numbers are neither external ‘concrete’ things nor mental entities of any sort. Later, in his essay “The Thought” (Frege 1918), he claims the same status for the items he calls thoughts—the senses of declarative sentences—and also, by implication, for their constituents, the senses of subsentential expressions. Frege does not say that senses are ‘abstract’. He says that they belong to a ‘third realm’ distinct both from the sensible external world and from the internal world of consciousness. Similar claims had been made by Bolzano (1837), and later by Brentano (1874) and his pupils, including Meinong and Husserl. The common theme in these developments is the felt need in semantics and psychology as well as in mathematics for a class of objective (i.e., non-mental) supersensible entities. 
As this new ‘realism’ was absorbed into English speaking philosophy, the traditional term ‘abstract’ was enlisted to apply to the denizens of this ‘third realm’.

Philosophers who affirm the existence of abstract objects are sometimes called platonists; those who deny their existence are sometimes called nominalists. This terminology is lamentable, since these words have established senses in the history of philosophy, where they denote positions that have little to do with the modern notion of an abstract object. However, the contemporary senses of these terms are now established, and so the reader should be aware of them. (In Anglophone philosophy, the most important source for this terminological innovation is Quine. See especially Goodman and Quine 1947.) In this connection, it is essential to bear in mind that modern platonists (with a small ‘p’) need not accept any of the distinctive metaphysical and epistemological doctrines of Plato, just as modern nominalists need not accept the distinctive doctrines of the medieval nominalists. Insofar as these terms are useful in a contemporary setting, they stand for thin doctrines: platonism is the thesis that there is at least one abstract object; nominalism is the thesis that the number of abstract objects is exactly zero (Field 1980). The details of this dispute are discussed in the article on nominalism in metaphysics. (See also the entry on platonism in metaphysics.) The aim of the present article is not to describe the case for or against the existence of abstract objects, but rather to say what an abstract object would be if such things existed.

3. The Way of Negation

Frege’s way of drawing the abstract/concrete distinction is an instance of what Lewis (1986a) calls the Way of Negation, according to which abstract objects are defined as those which lack certain features possessed by paradigmatic concrete objects. Nearly every explicit characterization in the literature follows this model. Let us review some of the options.

According to the account implicit in Frege’s writings, an object is abstract if and only if it is both non-mental and non-sensible.

Here the first challenge is to say what it means for a thing to be ‘non-mental’, or as we more commonly say, ‘mind-independent’. The simplest approach is to say that a thing depends on the mind when it would not (or could not) have existed if minds had not existed. But this entails that tables and chairs are mind-dependent, and that is not what philosophers who employ this notion have in mind. To call an object ‘mind-dependent’ in a metaphysical context is to suggest that it somehow owes its existence to mental activity, but not in the boring ‘causal’ sense in which ordinary artifacts owe their existence to the mind. What can this mean? One promising approach is to say that an object should be reckoned mind-dependent when, by its very nature, it exists at a time if and only if it is the object or content of some mental state or process at that time. This counts tables and chairs as mind-independent, since they might survive the annihilation of thinking things. But it counts paradigmatically mental items, like the purple afterimage of which I am now aware, as mind-dependent, since it presumably lies in the nature of such items to be objects of conscious awareness whenever they exist. However, it is not clear that this account captures the full force of the intended notion. Consider, for example, the mereological fusion of my afterimage and your headache. This is surely a mental entity if anything is. But it is not necessarily the object of a mental state. (The fusion can exist even if no one is thinking about it.) A more generous conception would allow for mind-dependent objects that exist at a time in virtue of mental activity at that time, even if the object is not the object of any single mental state or act. The fusion of my afterimage plus your headache is mind-dependent in the second sense but not the first. That is a reason to prefer the second account of mind-dependence.

If we understand the notion of mind-dependence in this way, it is a mistake to insist that abstract objects be mind-independent. To strike a theme that will recur, it is widely supposed that sets and classes are abstract entities—even the impure sets whose urelements are concrete objects. Any account of the abstract/concrete distinction that places set-theoretic constructions like {Alfred, {Betty, {Charlie, Deborah}}} on the concrete side of the line will be seriously at odds with standard usage. With this in mind, consider the set whose sole members are my afterimage and your headache, or some more complex set-theoretic object based on these items. If we suppose, as is plausible, that an impure set exists at a time only when its members exist at that time, this will be a mind-dependent entity in the generous sense. But it is also presumably an abstract entity. Gee whiz!

A similar problem arises for so-called abstract artifacts, like Jane Austen’s novels and the characters that inhabit them. Some philosophers regard such items as eternally existing abstract entities that worldly authors merely ‘describe’ or ‘encode’ but do not create. (Really?) But of course the commonsensical view is that Austen created Pride and Prejudice and Elizabeth Bennett, and there is no good reason to deny this (Thomasson 1999; cf. Sainsbury 2009; see also the entry on fiction). If we take this commonsensical approach, there will be a clear sense in which these items depend for their existence on Austen’s mental activity, and perhaps on the mental activity of subsequent readers. These items may not count as mind-dependent in either of the senses canvassed above, since Pride and Prejudice can presumably exist at a time even if no one happens to be thinking at that time. (If the world took a brief collective nap, Pride and Prejudice would not pop out of existence.) But they are obviously mind-dependent in some not-merely-causal sense. And yet they are still presumably abstract objects. For these reasons, it is probably a mistake to insist that abstract objects be mind-independent. (For more on mind-dependence, see Rosen 1994.)

Frege’s proposal in its original form also fails for other reasons. Quarks and electrons are neither sensible nor mind-dependent. And yet they are not abstract objects. A better version of Frege’s proposal would hold that:

An object is abstract if and only if it is both non-physical and non-mental.

This approach may well draw an important line; but it inherits the familiar problem of saying what it is for a thing to be a physical object (Crane and Mellor 1990). For discussion, see the entry on physicalism.

3.1 The Non-Spatiality Criterion

Contemporary purveyors of the Way of Negation typically amend Frege’s criterion by requiring that abstract objects be non-spatial, causally inefficacious, or both. Indeed, if any characterization of the abstract deserves to be regarded as the standard one, it is this:

An object is abstract if and only if it is non-spatial and causally inefficacious.

This standard account nonetheless presents a number of perplexities.

Consider first the requirement that abstract objects be non-spatial (or non-spatiotemporal). Some of the paradigms of abstractness are non-spatial in a straightforward sense. It makes no sense to ask where the cosine function was last Tuesday. Or if it makes sense to ask, the only sensible answer is that it was nowhere. Similarly, it makes no good sense to ask when the Pythagorean Theorem came to be. Or if it does make sense to ask, the only sensible answer is that it has always existed, or perhaps that it does not exist ‘in time’ at all. These paradigmatic ‘pure abstracta’ have no non-trivial spatial or temporal properties. They have no spatial location, and they exist nowhere in particular in time.

However, some abstract objects appear to stand in a more interesting relation to space. Consider the game of chess, for example. Some philosophers will say that chess is like a mathematical object, existing nowhere and ‘no when’—either eternally or outside of time altogether. But that is not the most natural view. The natural view is that chess was invented at a certain time and place (though it may be hard to say exactly where or when); that before it was invented it did not exist at all; that it was imported from India into Persia in the 7th century; that it has changed over the years, and so on. The only reason to resist this natural account is the thought that since chess is clearly an abstract object—it’s not a physical object, after all!—and since abstract objects do not exist in space and time—by definition!—chess must resemble the cosine function in its relation to space and time. And yet one might with equal justice regard the case of chess and other abstract artifacts as counterexamples to the hasty view that abstract objects possess only trivial spatial and temporal properties.

Should we then abandon the non-spatiotemporality criterion? Not necessarily. Even if there is a sense in which some abstract entities possess non-trivial spatiotemporal properties, it might still be said that concrete entities exist in spacetime in a distinctive way. If we had an account of this distinctive manner of spatiotemporal existence characteristic of concrete objects, we could say: An object is abstract (if and) only if it fails to exist in spacetime in that way.

One way to implement this approach is to note that paradigmatic concrete objects tend to occupy a relatively determinate spatial volume at each time at which they exist, or a determinate volume of spacetime over the course of their existence. It makes sense to ask of such an object, ‘Where is it now, and how much space does it occupy?’ even if the answer must sometimes be somewhat vague. By contrast, even if the game of chess is somehow ‘implicated’ in space and time, it makes no sense to ask how much space it now occupies. (To the extent that this does make sense, the only sensible answer is that it occupies no space at all, which is not to say that it occupies a spatial point.) And so it might be said:

An object is abstract (if and) only if it fails to occupy anything like a determinate region of space (or spacetime).

This promising idea raises several questions. First, it is conceivable that certain items that are standardly regarded as abstract might nonetheless occupy determinate volumes of space and time. Consider, for example, the various sets composed from Peter and Paul: {Peter, Paul}, {Peter, {Peter, {Paul}}}, etc. We don’t normally ask where such things are, or how much space they occupy. And indeed many philosophers will say that the question makes no sense, or that the answer is a dismissive ‘nowhere, none’. But this answer is not forced upon us by anything in set theory or metaphysics. Even if we grant that pure sets stand in only the most trivial relations to space, it is open to us to hold, as some philosophers have done, that impure sets exist where and when their members do (Lewis 1986a). It is not unnatural to say that a set of books is located on a certain shelf in the library, and indeed, there are some theoretical reasons for wanting to say this (Maddy 1990). On a view of this sort, we face a choice: we can say that since impure sets exist in space, they are not abstract objects after all; or we can say that since impure sets are abstract, it was a mistake to suppose that abstract objects cannot occupy space.

One way to finesse this difficulty would be to note that even if impure sets occupy space, they do so in a derivative manner. The set {Peter, Paul} occupies a location in virtue of the fact that its concrete elements, Peter and Paul, together occupy that location. The set does not occupy the location in its own right. With that in mind, it might be said that:

An object is abstract (if and) only if it either fails to occupy space at all, or does so only in virtue of the fact that some other items—in this case, its urelements—occupy that region.

But of course Peter himself occupies a region in virtue of the fact that his parts—his head, hands, etc.—together occupy that region. So a better version of the proposal would say:

An object is abstract (if and) only if it either fails to occupy space at all, or does so only in virtue of the fact that some other items that are not among its parts occupy that region.

This approach appears to classify the cases fairly well, but it is somewhat artificial. Moreover it raises a number of questions. What are we to say about the statue that occupies a region of space, not because its parts are arrayed in space, but rather because its constituting matter occupies that region? And what about the unobserved electron, which according to some interpretations of quantum mechanics does not really occupy a region of space at all, but rather stands in some more exotic relation to the spacetime it inhabits? Suffice it to say that a philosopher who regards ‘non-spatiality’ as a mark of the abstract, but who allows that some abstract objects may have non-trivial spatial properties, owes us an account of the distinctive relation to space and spacetime that sets paradigmatic concreta apart.

Perhaps the most important question about the ‘non-spatiality’ criterion concerns the classification of the parts of space itself. Let us suppose that space or spacetime exists, not just as an object of pure mathematics, but as the arena in which physical objects and events are somehow arrayed. Physical objects are located ‘in’ or ‘at’ regions of space, and so count as concrete according to the non-spatiality criterion. But what about the points and regions of space itself? There has been some debate about whether a commitment to spacetime substantivalism is consistent with the nominalist’s rejection of abstract entities (Field 1980, 1989; Malament 1982). If we define the abstract as the ‘non-spatial’, this debate reduces to the question whether space itself is to be reckoned ‘spatial’. But surely that is a verbal question. We can extend existing usage so as to allow that points and regions of space are located ‘at’ themselves—or not, according to taste. The philosopher who thinks that there is a serious question about whether the parts of space count as concrete would thus do well to characterize the abstract/concrete distinction in other terms.

3.2 The Causal Inefficacy Criterion

According to the most widely accepted versions of the Way of Negation:

An object is abstract (if and) only if it is causally inefficacious.

Concrete objects, whether mental or physical, have causal powers; numbers and functions and the rest make nothing happen. There is no such thing as causal commerce with the game of chess itself (as distinct from its concrete instances). And even if impure sets do in some sense exist in space, it is easy enough to believe that they make no distinctive causal contribution to what transpires. Peter and Paul may have effects individually. They may even have effects together that neither has on his own. But these joint effects are naturally construed as effects of two concrete objects acting jointly, or perhaps as effects of their mereological aggregate (itself a paradigm concretum), rather than as effects of some set-theoretic construction. Suppose Peter and Paul together tip a balance. If we entertain the possibility that this event is caused by a set, we shall have to ask which set caused it: the set containing just Peter and Paul? Some more elaborate construction based on them? Or is it perhaps the set containing the molecules that compose Peter and Paul? This proliferation of possible answers suggests that it was a mistake to credit sets with causal powers in the first place. This is good news for those who wish to say that all sets are abstract.

(Note, however, that some writers identify ordinary physical events—causally efficacious items par excellence—with sets. For David Lewis, for example, an event like the fall of Rome is an ordered pair whose first member is a region of spacetime, and whose second member is a set of such regions (Lewis 1986b). On this account, it would be disastrous to say both that impure sets are abstract objects, and that abstract objects are non-causal.)

The idea that causal inefficacy constitutes a sufficient condition for abstractness is somewhat at odds with standard usage. Some philosophers believe in ‘epiphenomenal qualia’: objects of conscious awareness (sense data), or qualitative conscious states that may be caused by physical processes in the brain, but which have no downstream causal consequences of their own (Jackson 1982; Chalmers 1996). These items are causally inefficacious if they exist, but they are not normally regarded as abstract. The proponent of the causal inefficacy criterion might respond by insisting that abstract objects are distinctively neither causes nor effects. But this is perilous. Abstract artifacts like Jane Austen’s novels (as we normally conceive them) come into being as a result of human activity. The same goes for impure sets, which come into being when their concrete urelements are created. These items are clearly effects in some good sense; yet they remain abstract if they exist at all. It is unclear how the proponent of the strong version of the causal inefficacy criterion (which views causal inefficacy as both necessary and sufficient for abstractness) might best respond to this problem.

Apart from this worry, there are no decisive intuitive counterexamples to this account of the abstract/concrete distinction. The chief difficulty—and it is hardly decisive—is rather conceptual. It is widely maintained that causation, strictly speaking, is a relation among events or states of affairs. If we say that the rock—an object—caused the window to break, what we mean is that some event or state (or fact or condition) involving the rock caused the break. If the rock itself is a cause, it is a cause in some derivative sense. But this derivative sense has proved elusive. The rock’s hitting the window is an event in which the rock ‘participates’ in a certain way, and it is because the rock participates in events in this way that we credit the rock itself with causal efficacy. But what is it for an object to participate in an event? Suppose John is thinking about the Pythagorean Theorem and you ask him to say what’s on his mind. His response is an event—the utterance of a sentence; and one of its causes is the event of John’s thinking about the theorem. Does the Pythagorean Theorem ‘participate’ in this event? There is surely some sense in which it does. The event consists in John’s coming to stand in a certain relation to the theorem, just as the rock’s hitting the window consists in the rock’s coming to stand in a certain relation to the glass. But we do not credit the Pythagorean Theorem with causal efficacy simply because it participates in this sense in an event which is a cause. The challenge is therefore to characterize the distinctive manner of ‘participation in the causal order’ that distinguishes the concrete entities. This problem has received relatively little attention. There is no reason to believe that it cannot be solved. But in the absence of a solution, this standard version of the Way of Negation must be reckoned a work in progress.

4. The Way of Example

In addition to the Way of Negation, Lewis identifies three main strategies for explaining the abstract/concrete distinction. According to the Way of Example, it suffices to list paradigm cases of abstract and concrete entities in the hope that the sense of the distinction will somehow emerge. If the distinction were primitive and unanalyzable, this might be the only way to explain it. But as we have remarked, this approach is bound to call the interest of the distinction into question. The abstract/concrete distinction matters because abstract objects as a class appear to present certain general problems in epistemology and the philosophy of language. It is supposed to be unclear how we come by our knowledge of abstract objects in a sense in which it is not unclear how we come by our knowledge of concrete objects (Benacerraf 1973). It is supposed to be unclear how we manage to refer determinately to abstract entities in a sense in which it is not unclear how we manage to refer determinately to other things (Benacerraf 1973, Hodes 1984). But if these are genuine problems, there must be some account of why abstract objects as such should be especially problematic in these ways. It is hard to believe that it is simply their primitive abstractness that makes the difference. It is much easier to believe that it is their non-spatiality or their causal inefficacy or something of the sort. It is not out of the question that the abstract/concrete distinction is fundamental, and that the Way of Example is the best we can do by way of elucidation. But if so, it is quite unclear why the distinction should make a difference.

5. The Way of Conflation

According to the Way of Conflation, the abstract/concrete distinction is to be identified with one or another metaphysical distinction already familiar under another name: as it might be, the distinction between sets and individuals, or the distinction between universals and particulars. There is no doubt that some authors have used the terms in this way. (Thus Quine 1953 uses ‘abstract entity’ and ‘universal’ interchangeably.) This sort of conflation is however rare in recent philosophy.

6. The Way of Abstraction

The most important alternative to the Way of Negation is what Lewis calls the Way of Abstraction. According to a longstanding tradition in philosophical psychology, abstraction is a distinctive mental process in which new ideas or conceptions are formed by considering several objects or ideas and omitting the features that distinguish them. For example, given a range of white things of varying shapes and sizes, one ignores or ‘abstracts from’ the respects in which they differ, and thereby attains the abstract idea of whiteness. Nothing in this tradition requires that ideas formed in this way represent or correspond to a distinctive kind of object. But it might be maintained that the distinction between abstract and concrete objects should be explained by reference to the psychological process of abstraction or something like it. The simplest version of this strategy would be to say that an object is abstract if it is (or might be) the referent of an abstract idea, i.e., an idea formed by abstraction.

So conceived, the Way of Abstraction is wedded to an outmoded philosophy of mind. But a related approach has gained considerable currency in recent years. Crispin Wright (1983) and Bob Hale (1987) have developed an account of abstract objects that takes its lead from certain suggestive remarks in Frege (1884). Frege notes (in effect) that many of the singular terms that appear to refer to abstract entities are formed by means of functional expressions. We speak of the shape of a building, the direction of a line, the number of books on the shelf. Of course many singular terms formed by means of functional expressions denote ordinary concrete objects: ‘the father of Plato’, ‘the capital of France’. But the functional terms that pick out abstract entities are distinctive in the following respect: Where ‘f(a)’ is such an expression, there is typically an equation of the form

f(a) = f(b) if and only if Rab,

where R is an equivalence relation. (An equivalence relation is a relation that is reflexive, symmetric and transitive.)
For example:

The direction of a = the direction of b if and only if a is parallel to b.

The number of Fs = the number of Gs if and only if there are just as many Fs as Gs.

Moreover, these equations (or abstraction principles) appear to have a special semantic status. While they are not strictly speaking definitions of the functional expression that occurs on the left hand side, they would appear to hold in virtue of the meaning of that expression. To understand the term ‘direction’ is (in part) to know that ‘the direction of a’ and ‘the direction of b’ refer to the same entity if and only if the lines a and b are parallel. Moreover, the equivalence relation that appears on the right hand side of the equation would appear to be semantically and perhaps epistemologically prior to the functional expression on the left (Noonan 1978). Mastery of the concept of a direction presupposes mastery of the concept of parallelism, but not vice versa.
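The shape of an abstraction principle can be made vivid with a small computational sketch (purely illustrative; the names `Line`, `parallel`, and `direction` are invented for this example). We model each line by a slope, take the equivalence relation R to be parallelism, and let the abstraction function f send each line to its ‘direction’, so that f(a) = f(b) holds exactly when Rab does:

```python
from fractions import Fraction
from typing import Optional, NamedTuple

class Line(NamedTuple):
    """A line y = m*x + c; m is None for a vertical line x = c."""
    m: Optional[Fraction]
    c: Fraction

def parallel(a: Line, b: Line) -> bool:
    """The equivalence relation R: lines are parallel iff their slopes agree."""
    return a.m == b.m

def direction(a: Line):
    """The abstraction function f: direction(a) == direction(b)
    holds if and only if parallel(a, b) holds."""
    return a.m

a = Line(Fraction(1, 2), Fraction(0))
b = Line(Fraction(1, 2), Fraction(7))   # same slope, different intercept
c = Line(Fraction(3), Fraction(0))

assert (direction(a) == direction(b)) == parallel(a, b)
assert (direction(a) == direction(c)) == parallel(a, c)
```

Note the priority claim in the text: `parallel` is defined without mentioning directions at all, while `direction` is answerable to `parallel`, mirroring the idea that the equivalence relation is conceptually prior to the abstraction function.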

The availability of abstraction principles meeting these conditions may be exploited to yield an account of the distinction between abstract and concrete objects. When ‘f’ is a functional expression governed by an abstraction principle, there will be a corresponding kind K_f such that:

x is a K_f if and only if, for some y, x = f(y).

For example, x is a cardinal number if and only if, for some concept F, x = the number of Fs. The simplest version of this approach to the Way of Abstraction is then to say that

x is an abstract object if (and only if) x is an instance of some kind K_f whose associated functional expression ‘f’ is governed by a suitable abstraction principle.
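The cardinal-number instance of this schema can be sketched for the special case of finite concepts, modeled here as finite sets (the function names are hypothetical). Hume’s Principle says the number of Fs equals the number of Gs iff the Fs and Gs are equinumerous; for finite sets, the existence of a bijection reduces to equality of size:

```python
def equinumerous(F: set, G: set) -> bool:
    """The equivalence relation R: a bijection between finite sets
    exists exactly when they have the same number of elements."""
    return len(F) == len(G)

def number_of(F: set) -> int:
    """The abstraction function f: number_of(F) == number_of(G)
    if and only if equinumerous(F, G) -- Hume's Principle."""
    return len(F)

F, G, H = {"a", "b", "c"}, {1, 2, 3}, {True}
assert (number_of(F) == number_of(G)) == equinumerous(F, G)
assert (number_of(F) == number_of(H)) == equinumerous(F, H)
# x is a cardinal number (an instance of the kind K_f) iff
# x = number_of(y) for some set y; here number_of(G) yields 3.
```

This toy only covers finite cardinals, of course; Frege’s own construction applies to concepts generally.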

The strong version of this account—which purports to identify a necessary condition for abstractness—is seriously at odds with standard usage. As we have noted, pure sets are paradigmatic abstract objects. But it is not clear that they satisfy the proposed criterion. According to naïve set theory, the functional expression ‘set of’ is indeed characterized by a putative abstraction principle.

The set of Fs = the set of Gs if and only if, for all x, (x is F if and only if x is G).

But this principle is inconsistent, and so fails to characterize an interesting concept. In contemporary mathematics, the concept of a set is not introduced by abstraction. It remains an open question whether something like the mathematical concept of a set can be characterized by a suitably restricted abstraction principle. (See Burgess 2005 for a survey of recent efforts in this direction.) Even if such a principle is available, however, it is unlikely that the epistemological priority condition will be satisfied. (That is, it is unlikely that mastery of the concept of set will presuppose mastery of the equivalence relation that figures on the right hand side.) It is therefore uncertain whether the Way of Abstraction so understood will classify the objects of pure set theory as abstract entities (as it presumably must).

Similarly, as Dummett (1973) has noted, in many cases the standard names for paradigmatically abstract objects do not assume the functional form to which the definition adverts. Chess is an abstract entity. But we do not understand the word ‘chess’ as synonymous with an expression of the form ‘f(x)’ where ‘f’ is governed by an abstraction principle. Similar remarks would seem to apply to such things as the English language, social justice, architecture, and Charlie Parker’s style. If so, the abstractionist approach does not provide a necessary condition for abstractness as that notion is standardly understood.

More importantly, there is some reason to believe that it fails to supply a sufficient condition. A mereological fusion of concrete objects is itself a concrete object. But the concept of a mereological fusion is governed by what appears to be an abstraction principle:

The fusion of the Fs = the fusion of the Gs if and only if the Fs and Gs cover one another,

where the Fs cover the Gs if and only if every part of every G has a part in common with an F. Similarly, suppose a train is a maximal string of railroad carriages, all of which are connected to one another. We may define a functional expression, ‘the train of x’, by means of an ‘abstraction’ principle: The train of x = the train of y iff (if and only if) x and y are connected carriages. We may then say that x is a train iff for some carriage y, x is the train of y. The simple account thus yields the consequence that trains are to be reckoned abstract entities.
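The train counterexample can likewise be sketched in code (names and data are illustrative only): ‘the train of x’ is computed as the connected component of x in a coupling graph, so that train_of(x) = train_of(y) exactly when x and y are connected carriages — a perfectly good ‘abstraction’ principle whose values are concrete trains:

```python
def train_of(carriage, couplings):
    """Return the maximal string of carriages connected to `carriage`,
    i.e. its connected component in the coupling graph. Then
    train_of(x) == train_of(y) iff x and y are connected carriages."""
    seen, stack = {carriage}, [carriage]
    while stack:
        c = stack.pop()
        for nxt in couplings.get(c, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return frozenset(seen)

couplings = {"c1": ["c2"], "c2": ["c1", "c3"], "c3": ["c2"], "c4": []}
assert train_of("c1", couplings) == train_of("c3", couplings)  # one and the same train
assert train_of("c1", couplings) != train_of("c4", couplings)  # c4 is a separate train
```

The point of the example survives the formalization: nothing in the equivalence-relation pattern itself marks its values as abstract.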

It is unclear whether these objections apply to the more sophisticated abstractionist proposals of Wright and Hale, but one feature of the simple account sketched above clearly does apply to these proposals and may serve as the basis for an objection to this version of the Way of Abstraction. The neo-Fregean approach seeks to explain the abstract/concrete distinction in semantic terms: We said that an abstract object is an object that falls in the range of a functional expression governed by an abstraction principle, where ‘f’ is governed by an abstraction principle when that principle holds in virtue of the meaning of ‘f’. This notion of a statement’s holding in virtue of the meaning of a word is notoriously problematic (see the entry on the analytic-synthetic distinction). But even if this notion makes sense, one may still complain: The abstract/concrete distinction is supposed to be a metaphysical distinction; abstract objects are supposed to differ from other objects in some important ontological respect. It should be possible, then, to draw the distinction directly in metaphysical terms: to say what it is in the objects themselves that makes some things abstract and others concrete. As Lewis writes, in response to a related proposal by Dummett:

Even if this … way succeeds in drawing a border, as for all I know it may, it tells us nothing about how the entities on opposite sides of that border differ in their nature. It is like saying that snakes are the animals that we instinctively most fear—maybe so, but it tells us nothing about the nature of snakes. (Lewis 1986a: 82)

The challenge is to produce a non-semantic version of the abstractionist criterion that specifies directly, in metaphysical terms, what the objects whose canonical names are governed by abstraction principles all have in common.

One response to this difficulty is to transpose the abstractionist proposal into a more metaphysical key. We begin with the idea that each Fregean number is, by its very nature, the number of some Fregean concept, just as each Fregean direction is, by its very nature, at least potentially the direction of some concrete line. In each case, the abstract object is essentially the value of an abstraction function for a certain class of arguments. This is not a claim about the meanings of linguistic expressions. It is a claim about the essences or natures of the objects themselves. (For the relevant notion of essence, see Fine 1994.) So for example, the Fregean number two (if there is such a thing) is, essentially, by its very nature, the number that belongs to a concept F if and only if there are exactly two Fs. More generally, for each Fregean abstract object x, there is an abstraction function f, such that x is essentially the value of f for every argument of a certain kind.

Abstraction functions have two key features. First, for each abstraction function f there is an equivalence relation R such that it lies in the nature of f that f(x) = f(y) iff Rxy. Intuitively, we are to think that R is metaphysically prior to f, and that the abstraction function f is defined (in whole or in part) by this biconditional. Second, each abstraction function is a generating function: its values are essentially values of that function. Many functions are not generating functions. Paris is the capital of France, but it is not essentially a capital. The number of solar planets, by contrast, is essentially a number. The notion of an abstraction function may be defined in terms of these two features:

  • f is an abstraction function iff:
  • (a) for some equivalence relation R, it lies in the nature of f that f(x) = f(y) iff Rxy; and
  • (b) for all x, if x is a value of f, then it lies in the nature of x that there is (or could be) some object y such that x = f(y).

We may then say that

x is an abstraction if and only if, for some abstraction function f, there is or could be an object y such that x = f(y).

And

x is an abstract object if (and only if) x is an abstraction.

This account tells us a great deal about the distinctive natures of these broadly Fregean abstract objects. It tells us that each is, by its very nature, the value of a special sort of function, one whose nature is specified in a simple way in terms of an associated equivalence relation. It is worth stressing, however, that it does not supply much metaphysical information about these items. It does not tell us whether they are located in space, whether they can stand in causal relations, and so on. It is an open question whether this somewhat unfamiliar version of the abstract/concrete distinction lines up with any of the more conventional ways of drawing the distinction outlined above.

7. Further Reading

Putnam (1975) makes the case for abstract objects on scientific grounds. Field (1980, 1989) makes the case against abstract objects. Bealer (1993) and Tennant (1997) present a priori arguments for the necessary existence of abstract entities. Balaguer (1998) argues that none of the arguments for or against the existence of abstract objects is compelling, and that there is no fact of the matter as to whether abstract things exist. The dispute over the existence of abstracta is reviewed in Burgess and Rosen (1997). Fine (2002) is a systematic study of abstraction principles in the foundations of mathematics. A general theory of abstract objects is developed axiomatically in Zalta (1983; 2016 in Other Internet Resources). Wetzel (2009) examines the type-token distinction, argues that types are abstract objects while the tokens of those types are their concrete instances, and shows how difficult it is to paraphrase away the many references to types that occur in the sciences and natural language. (See the entry on types and tokens.) Moltmann (2013) investigates the extent to which abstract objects are needed when developing a semantics of natural language.

NEXT: How is the concept of abstract thinking used in “the helping, caring, fixing” industry, which claims to understand and describe THINKING as a human behavior? 

Casting doubt on the Obstetrical Dilemma / Head too big

Vital to the development of a “potential” human is the intricate relationship between mother and fetus. There is much more going on than mechanics, both individually and in human evolution.

__________________________________________________________________

Casting Doubt on a Paradigm /

the energetics-of-gestation-and-growth hypothesis.

on the work of Holly Dunsworth

The obstetrical hypothesis postulates that the demands of an unusual locomotor system (bipedalism) increase the risk and cost of the reproductive process. If this is the case, evolution would favor human birth at earlier stages of development than in other, non-bipedal primates, and mothers with wider hips would experience decreased motor efficiency. (Curious reasoning!)

The obstetrical hypothesis is neat and readily comprehended, which helps explain its widespread acceptance, but new evidence casts doubt on it. A recent paper by Holly Dunsworth of the University of Rhode Island and colleagues reexamines the predictions and evidence supporting the obstetrical hypothesis and suggests an alternative explanation. For instance, human gestation is often said to be short relative to that of other primates, based on how much more growth is needed in neonates (from birth to one year old) to achieve adult brain size. The shorter duration of gestation at first glance supports a prediction of the obstetrical hypothesis—that birth has evolved to occur earlier in hominids so that the baby is born before its head is too large to pass through the birth canal. Actually, the duration of human pregnancy (38–40 weeks) is absolutely longer than that of chimps, gorillas, and orangutans (32 weeks for chimps and 37–38 weeks for the latter two). When Dunsworth and her colleagues took maternal body size into account, which in primates is positively correlated with gestation length, they showed that human pregnancy is also relatively longer compared to that in great apes. (we are apes!) No wonder that the third trimester seems so long to many pregnant women.

Another oft-cited fact supporting the obstetrical hypothesis is that, of all the primates, human newborns have the least-developed brains. Human babies’ brains are only 30 percent of adult size, as opposed to 40 percent in chimps. This difference in newborn brain size seems to suggest that human babies are born at an earlier developmental stage than other primates.

The catch is that adult brain size in humans is much larger than in other primates for reasons having nothing to do with birth. This means that using adult brain size as a basis for comparing relative gestation length or newborn brain size among primates will underestimate human development. But as one of the collaborators with Dunsworth, Peter Ellison of Harvard University, pointed out in his 2001 book On Fertile Ground, the relevant question is,

Given how large a mother’s body size is, how big a brain can she afford to grow in her baby? It is an issue of supply and demand. Labor occurs when the mother can no longer continue to supply the baby’s nutritional and metabolic demands.

As Ellison puts it, “Birth occurs when the fetus starts to starve.” From this perspective, the brain size of a human newborn is not small for a primate but is very large—one standard deviation above the mean. Body size in human newborns is also large relative to other primates when standardized for a mother’s body size. Both facts suggest that pregnancy may push human mothers to their metabolic limits.

__________________________________________________________________________

My two cents: I think that the ‘missing’ factor is sexual selection that has been occurring since the advent of Agriculture-Urbanization: intense selection toward juvenilization has produced childlike / tame females who are fertile at a young age, but are under-equipped physically to support sufficient gestation and childbirth. Also, agricultural products are nutritionally deficient. This “food” problem bears on skeletal problems, metabolic problems, and the modern epidemic of premature birth.

__________________________________________________________________________

The obstetrical hypothesis, in contrast, suggests that locomotion rather than metabolism is the limiting factor in birth size. The underlying concept here is that wider-hipped women—capable of giving birth to larger offspring—should suffer a disadvantage in locomotion. But detailed studies of the cost of running and walking—including new work by Dunsworth’s coauthors Anna G. Warrener of Harvard University and Herman Pontzer of Hunter College—do not support this idea. Men and women are extremely similar in the cost and efficiency of locomotion, regardless of hip width. Enlarging the birth canal to pass a baby with a brain 40 percent of adult size, as is typical of newborn chimps, would require an increase in diameter of only three centimeters—just over an inch—in the smallest dimension of the birth canal. This wouldn’t hinder locomotion significantly, given that many women already have such broad hips. The conflict between big-brained babies and upright walking may be more conceptual than real.

What Does a Baby Cost?

Although the findings showing that human babies are not born earlier than other primates are interesting, they still fail to identify what limits baby brain size. Dunsworth and her coauthors propose that the metabolic constraints faced by a mother limit the length of pregnancy and fetal growth. They have dubbed their hypothesis the energetics-of-gestation-and-growth hypothesis.

As the baby grows in both brain and body in the womb, its demand for energy accelerates exponentially. At some point, the mother reaches the limit of her ability to supply the fetus’s demands, and then labor begins. Even following birth, the big-brained, big-bodied newborn needs a loving mother who will continue to feed and care for it while its brain continues to grow at a fetal rate. In the womb, the fetus is basically part of the mother. Once born, the baby is effectively at a higher trophic level than its mother, like a parasite feeding on her, which increases the metabolic demands on her. However, the baby’s needs have shifted to include more long-chain fatty acids, which are key for brain growth. Since these are very efficiently transmitted to the baby through breast milk, rather than through the placenta, moving the baby outside the womb isn’t a problem. (Breast feeding, social rules be damned, is vital to newborns.)

The obstetrical hypothesis is not defunct; it is simply under question. But merely convincing those who were raised intellectually within this paradigm to consider an alternative hypothesis can be challenging. When she gives a talk about the energetics hypothesis, Dunsworth summarizes a conversation that illustrates this challenge:

“What always comes next is, ‘then why doesn’t the pelvis get wider to make childbirth easier?’ And my answer is always, ‘Because it’s good enough. Witness over seven billion humans on the planet.’ But that doesn’t satisfy most people who are moved to ask the question in the first place. And when they argue ‘the tight fit at birth is too much of a coincidence to ignore,’ I ask, ‘Isn’t it just a coincidence that my finger fits perfectly into my nostril?’”

She’s right. Evolutionary adaptation doesn’t have to be perfect, just good enough. Perhaps the female pelvis adapted to fit the size of the human fetus’s brain, rather than the female pelvis’s limiting the baby’s brain size. Still, we are left with no clear reason why a baby is such a tight fit in the mother’s birth canal. Pelvic size may be limited by something not yet taken into account in locomotor studies, such as speed, balance, or risk of injury. Or, perhaps simple economy keeps pelvic size close to neonatal brain size. The third alternative is that human childbirth was not always difficult and has only become so as improvements in diet have increased newborn body size. (Or modern neotenic females are less robust than earlier females and less capable of carrying the fetus to complete gestation, but deliver increasingly premature infants and / or require caesarean intervention.) The obstetrical hypothesis and the energetics hypothesis are not mutually exclusive.

The evolutionary conflict that makes human birthing difficult may not be between walking or running and having babies, but between the fetus’s metabolic needs and the mother’s ability to meet them. Perhaps the problem isn’t only having—bearing—a big-brained baby. Perhaps the real problem is making one.

Magical Anthropology / Way back in 1972…

Thirty years ago, something happened that would alter forever our understanding of how humankind came into being. Elaine Morgan was made fearfully cross. An avid reader of popular science books borrowed from the library in the Welsh valley town of Mountain Ash, she found that the prevailing tenor of the evolutionary debate left her cold.

“They were taking a very aggressive line, suggesting that the whole essence of humanity lies in murder and bloodshed. Also they were taking a terribly macho line, implying that everything evolved to benefit the male hunter. And it had nothing at all to say about children, when if evolution is about anything it’s about ensuring the survival of the child.” (Agreed) Three decades later, her voice still rattles with annoyance. A small woman with an infectious sense of possibility, in 1972 Morgan was not inclined to temper her vexation.

With no scientific training, the 52-year-old mother of three decided to pen a riposte to the grand theorists of the hour, single-handedly – and single-mindedly – championing a hitherto ignored alternative explanation for human evolution called the Aquatic Ape Hypothesis. The Descent of Woman, part feminist polemic, part evolutionary bombshell, became a bestseller, translated into 25 languages and introducing a huge readership to this compelling hypothesis. “But I didn’t start out with the aquatic theory,” she confesses cheerily. “I just thought, ‘There is something wrong with what they are saying now – not only do I not like the feel of it but I think it’s demonstrably nonsense.’ So I just waded in.”

The aquatic theory of human evolution was first advanced by marine biologist Professor Sir Alister Hardy in New Scientist in 1960. (Wikipedia: Sir Alister Clavering Hardy (10 February 1896 – 22 May 1985) was an English marine biologist, an expert on marine ecosystems spanning organisms from zooplankton to whales. Hardy served as zoologist on the RRS Discovery’s voyage to explore the Antarctic between 1925 and 1927. On the voyage he invented the Continuous Plankton Recorder; it enabled any ship to collect plankton samples during an ordinary voyage. After retiring from his academic work, Hardy founded the Religious Experience Research Centre in 1969.)

He posited what may have happened during the Pliocene epoch, which lasted about five million years and for which no fossil information exists – the “fossil gap”. In an emerging African continent scorched by drought, our ancestors entered the Pliocene as hairy quadrupeds with no language and left it hairless, upright and discussing what kinds of bananas they liked best. What happened in between? Hardy came up with a startling suggestion. (Yikes!)

It was generally accepted that apes evolved into humans when they were forced because of climate changes to descend from the withering trees to live on the arid savannah. Hardy thought instead that our ancestors’ physiology changed dramatically when a population of woodland apes became isolated on a large island around what is now Ethiopia. Although the waters eventually receded and the apes returned to land, their aquatic adaptations remained. This temporary semi-aquatic existence would explain why humans – genetically so close to the chimpanzee and gorilla – grew to differ from them in so many ways. So would intervention by Ancient Aliens.

Human beings are the only naked bipeds. We carry a layer of subcutaneous fat substantially thicker than in any other primate. We exude, through our eyes and sweat glands, greater quantities of salt water than any other mammal. We are the only species of mammal to mate face to face, other than aquatic mammals. (So do bonobos; and aquatic mammals live in, and are adapted to, water environments; humans are not.) We are the only primate capable of overriding our unconscious breathing rhythms, alongside the elaborate use of lips and tongue, to produce speech – an ability which separates us from the rest of the animal kingdom. We are also the only primate with a descended larynx, thought to increase the variety of sounds we can produce.

The usual string of “Just So Stories” factoids that please the obsessive neurotypical need for dumb “fairy tale narratives” instead of testable and provable correlations. 

Hardy argued that these features indicate a level of adaptation to an aquatic environment. Thus, humans became bipedal to wade in water, and lost their hair to streamline their bodies for swimming. The fat layer kept them warm and buoyant, their secretions prevented build-up of excess salt from sea water and their larynx was protected against submersion. Language evolved because glare from the water meant signalling was no longer an efficient means of communication. Aye, yai, yai! The typical “backwards” version of evolution, in which the organism (which, de facto, must have access to a godlike supernatural sentience in order to understand a complex set of relationships between itself and “future” circumstances) “intentionally changes itself” to accommodate a new, different or changing environment.

Morgan was alerted to the hypothesis by a slight reference in The Naked Ape by Desmond Morris. “Conventional wisdom said everything that evolved in humans had done so to benefit the hunter, and if it might disadvantage his wife then she’d just have to trot along,” she says. “He got overheated in the hunt, for example, so he shed his fur, even though she was carrying around a great, fat, slow-developing baby that needed fur to cling onto. (Gibberish)

“Everything that was different about females was supposed to be a departure from the norm and its purpose was to lure her mate so that he would kindly give her a lump of meat.”

Hardy’s theory had been ignored by the scientific establishment. “Nobody had developed it or stood up against it. It had sunk like a stone. But as soon as I read it I thought, ‘Well obviously this is the answer to everything, why has nobody told me about it?’.” (A totally non-scientific reaction!)

At this time, Morgan was a successful television scriptwriter, winner of several Baftas, a Writers’ Guild award and a Prix Italia for her film about Joey Deacon, the disabled fundraiser. She had stopped her science studies after O level. Nonetheless, she wrote to Hardy asking if she could quote his theory. He agreed, and she sent the eventual product to her agent.

“He rang me up and said, ‘Elaine, you sat down and you wrote this book?’,” she recalls gleefully. “And the first publisher he sent it to snapped it up!” (A sure sign of scientific validity!)

What made The Descent of Woman doubly revolutionary was the way that Morgan wrote. She referred to our ancestors as “she” and considered the development of pendulous breasts and rounded buttocks without the context of sexual attraction. She also wrote in detail about the female orgasm, examining whether face-to-face mating served both clitoral and vaginal stimulation. It was the earliest days of the feminist movement, and women across the world were captivated.

Yikes! By doing “bad pop-science writing” she hoped to advance the proper notion that evolution includes females? Thanks a lot!

“Up to then women had been afraid of science. They were told biology is destiny, and those arguments had been used to hold them down, and suddenly they could talk about it. Women’s lib was just taking off then, and I got to meet some of them, like Gloria Steinem. But they were very wary of me because I was old enough to be their mother and their mother was their enemy. I’d got married, brought up children, and this was no great way to break the system, they thought.” Wow! Bitchy fem on fem cliches and massive narcissism do not help the cause of female Homo sapiens!

She was never really part of the sisterhood, she giggles, surely aware of how significantly she contributed to it. “They were all tall and young and metropolitan, and I was none of these things. I liked them, but after that book they were racing ahead, feminism took off, and I thought, ‘I don’t need to say any more about that now’.”

The liberation of her gender secured, (bizarre conclusion, of course) she applied her campaigning vim to the science itself. Morgan is the first to admit that The Descent of Woman was a thoroughly unscientific romp riddled with errors and convenient conclusions. In the years after its publication, she set about a process of self-education that resulted in the more soberly executed Aquatic Ape Hypothesis, published in 1997.

It had become increasingly important to her to write something that would appeal to scientists. “The establishment had treated me with total horror and contempt, and also some resentment because it was a bestseller. I was an upstart, in it for the money, totally ignorant.” Well, by her own admission, she was.

She believes the upset was greater because she was contradicting those with a vested interest in the status quo. “If it had come from somewhere in their own seminars, they could have steered it along and had some input and got some credit for it.” The scientific community’s refusal to engage with her arguments remains a frustration.

Her methodology for recreating herself as a credible scientist was certainly individual. “I just started reading the books that were available from Mountain Ash library,” she explains. “I’d look in the back at the bibliography and then I’d send away for those books. And then I’d look in their bibliographies, and that would be of an even higher academic standard. So if there was something I wanted to know about, like the larynx or the skin, I would work my way back to bedrock that way and find out what the original basis for the claim was.”

Of late, Morgan has garnered some high profile support. In his book Consciousness Explained, the American philosopher Daniel Dennett wrote: “When in the company of distinguished biologists, evolutionary theorists and other experts, I have often asked them to tell me, please, why Elaine Morgan must be wrong. I haven’t yet had a reply worth mentioning.” Wow. How feeble. Typical NT delusion that “no response” is proof of “proof”!

Sir David Attenborough used his recent presidency of the British Association for the Advancement of Science to organise the first full day discussion of Morgan’s “engaging” theory. “The one big difficulty is that there is no direct fossil evidence,” he says. “And if you postulate that humans were wandering in the delta for a sufficient length of time to modify, then you would think that you would come across fossils, because it’s the ideal environment for them.” More nonsense; of course, a lack of “fossil evidence” (or any physical evidence) has never been an obstacle for anthropologists.

“It’s just the drip, drip, drip of the number of facts that could make sense in that context and are still making no sense at all outside it,” says Morgan. “I think an unknown number of scientists don’t need convincing but just need enough encouragement to stand up and be counted.” Neurotypicals believe that “science” is just another social belief system: get enough “scientists” to “agree” and BINGO – your narrative (TV script) becomes bona fide science. 

There is inevitably a problem of proof, she adds, given that the hypothesis relies on soft tissue adaptations, which don’t fossilise. This lack of direct evidence concerns many scientists, says Peter Wheeler, professor of evolutionary biology at Liverpool John Moores University. “What is often said by proponents of the aquatic ape theory is that no one has looked at it seriously. The truth is that it has been considered and found completely wanting.”

In addition to the absence of fossils, says Wheeler, the hypothesis relies upon superficial comparisons between living species which don’t bear scrutiny. For example, although humans are fatty mammals their fat is distributed in an entirely different way to aquatic mammals. Nor are the majority of aquatic animals naked. Most have dense fur and only the largest and deepest diving are hairless. “Nor do you need the aquatic ape to explain bipedalism. There are about four other more convincing theories. You don’t need that extra complication.”

Whatever the truth, Morgan says that her championing of the aquatic ape hypothesis is over. Her next book, Darwin and the Left, now close to completion, argues that advances in genetics and evolutionary biology are moving scientific debate to the right.

It is a while since she has seen an ape in the wild, she says. “I’m too old. I can’t see myself getting to the jungle now, but I still get an awful lot of mail. It’s been an undying interest – something to wake up for.”

Her three grown-up sons think she’s right, she says proudly, and her late husband Morien was similarly supportive. “He was a bit disconcerted at the beginning, especially in an area like this. It wasn’t the kind of thing somebody’s wife did. But he did all the typing and all that side of it, so I’ve been working a lot slower in the last six years [since his death].” Support by one’s family does not constitute scientific credibility. We might ask if a male would “use” family in this way; why is it considered important that “the husband” was behind her efforts?

She baulks at the suggestion that she’s a myth maker, propagating nonsense to deluded undergraduates. “I’m not telling Just So Stories. I’m not. (That’s EXACTLY what she does) There are very few books that have less of the subjunctive in them than mine. I’m just saying these are the facts, this is one possible explanation, draw your own conclusion.”

She rallies. People still get the basics of the theory wrong, she laughs. “A woman wrote to the Aberdare Leader [Morgan’s local paper] and said, ‘She’s mad, she thinks we’re descended from fish’. But then the New York Times once wrote that I thought we were descended from otters. Which goes to show you don’t have to live in Aberdare to get science wrong.” (The “idiot reaction” by random humans does not confer legitimacy to a “theory”)

A brilliant career (Really?)

Morgan grew up in Pontypridd, where her father worked as a miner. She attended school locally and won an exhibition to Lady Margaret Hall, Oxford University, where she studied English language and literature. She married her husband Morien, a French teacher, in 1945. They had two sons, and adopted a third when he was six weeks old.

In 1952, she sold her first play to television. It was called Mirror, Mirror, and she remembers it as being “very basic”. She went on to win 10 awards for her screen-writing, culminating in a writer of the year award in 1980.

Then she became interested in the aquatic ape hypothesis – the idea that many of the things that make us human (such as speech, a lack of thick fur, and walking upright) evolved during a 10-million-year period when Africa became very wet, and our ancestors were forced to spend a lot of time in the water.

She published The Descent of Woman in 1972, The Aquatic Ape in 1982, The Scars of Evolution in 1990, The Descent of the Child in 1994, and The Aquatic Ape Hypothesis in 1997. Morgan has long been a darling of the feminist movement, but in recent years her supporters have come to include people such as Sir David Attenborough and the American philosopher Daniel Dennett. (Whoopdee do!)

Shoddy Psychology Study Fails / The Reproducibility Project

From the Atlantic: Publishing shoddy psychology studies is a pervasive practice: rationalizing unscientific behavior as “not all that bad” is journalistic fraud. Ed Yong is a noted science writer, and I’m a bit shocked at his pandering to the “psychology industry.” Most alarming is the failure to recognize that failed psychological theories, which form the basis of diagnosis and treatment, have harmed, and continue to harm, REAL LIVE PEOPLE.

How Reliable Are Psychology Studies?

A new study shows that the field suffers from a reproducibility problem, but the extent of the issue is still hard to nail down.

  • Ed Yong
  • Aug 27, 2015, The Atlantic

No one is entirely clear on how Brian Nosek pulled it off, including Nosek himself. Over the last three years, the psychologist from the University of Virginia persuaded some 270 of his peers to channel their free time into repeating 100 published psychological experiments to see if they could get the same results a second time around. There would be no glory, no empirical eurekas, no breaking of fresh ground. Instead, this initiative—the Reproducibility Project—would be the first big systematic attempt to answer questions that have been vexing psychologists for years, if not decades. What proportion of results in their field are reliable? (If psychologists are so concerned, why have they been defending sloppy methods and “religious” premises for decades?)

A few signs hinted that the reliable proportion might be unnervingly small. Psychology has recently been rocked by several high-profile controversies, including the publication of studies that documented impossible effects like precognition, failures to replicate the results of classic textbook experiments, and some prominent cases of outright fraud.

The causes of such problems have been well-documented. Like many sciences, psychology suffers from publication bias, where journals tend to only publish positive results (that is, those that confirm the researchers’ hypothesis), and negative results are left to linger in file drawers. On top of that, several questionable practices have become common, even accepted. A researcher might, for example, check to see if they had a statistically significant result before deciding whether to collect more data. Or they might only report the results of “successful” experiments. These acts, known colloquially as p-hacking, are attempts to torture positive results out of ambiguous data. They may be done innocuously, but they flood the literature with snazzy but ultimately false “discoveries.” (Innocuously? How does one cook the books without being aware that one is doing so?)
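The inflation from “check, then collect more data” is worth seeing concretely. Below is a minimal simulation in plain Python – the function names, sample sizes, and number of peeks are my choices for illustration, not taken from any study cited here. The null hypothesis is always true, so every “significant” result is a false positive; yet a single peek-and-continue pushes the false-positive rate well past the nominal 5 percent.

```python
import math
import random

ALPHA_CUTOFF = 1.96  # two-sided critical z value for p < 0.05

def significant(xs):
    # Two-sided z-test of "true mean = 0", with known sigma = 1.
    z = (sum(xs) / len(xs)) * math.sqrt(len(xs))
    return abs(z) > ALPHA_CUTOFF

def run_experiment(rng, peek):
    # The null hypothesis is true: data are N(0, 1), so any
    # "significant" result here is a false positive.
    xs = [rng.gauss(0, 1) for _ in range(20)]
    if significant(xs):
        return True
    if peek:
        # Optional stopping: not significant yet? Collect 20 more
        # observations and test the combined sample again.
        xs += [rng.gauss(0, 1) for _ in range(20)]
        return significant(xs)
    return False

def false_positive_rate(peek, trials=20000, seed=1):
    rng = random.Random(seed)
    hits = sum(run_experiment(rng, peek) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    print("one honest look:", false_positive_rate(peek=False))
    print("peek and continue:", false_positive_rate(peek=True))
```

With one look the rate sits near the nominal 5 percent; with a single peek it climbs to roughly 8 percent, and each additional peek pushes it higher still – which is exactly why pre-registration forbids deciding the stopping rule after seeing the data.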

In the last few years, psychologists have become increasingly aware of, and unsettled by, these problems. Some have created an informal movement to draw attention to the “reproducibility crisis” that threatens the credibility of their field. Others have argued that no such crisis exists, and accused critics of being second-stringers and bullies (here come the social excuses), and of favoring joyless grousing over important science. In the midst of this often acrimonious debate, Nosek has always been a level-headed figure, who gained the respect of both sides. As such, the results of the Reproducibility Project, published today in Science, have been hotly anticipated. (We cannot assume that Nosek is unbiased)

They make for grim reading. Although 97 percent of the 100 studies originally reported statistically significant results, just 36 percent of the replications did.

Does this mean that only a third of psychology results are “true”? Not quite. A result is typically said to be statistically significant if its p-value is less than 0.05 – briefly, this means that if there were really no effect, the odds of getting results at least this extreme by chance alone would be less than 1 in 20. This creates a sharp cut-off at an arbitrary (some would say meaningless) threshold, in which an experiment that skirts over the 0.05 benchmark is somehow magically more “successful” than one that just fails to meet it. (Apply math to garbage – you get garbage.)
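How thin the line at 0.05 is can be shown with two hypothetical experiments whose test statistics differ only trivially. A sketch – the z values are illustrative, not from the Reproducibility Project:

```python
import math

def two_sided_p(z):
    # Two-sided p-value for a z statistic under the standard
    # normal distribution (erfc is the complementary error function).
    return math.erfc(abs(z) / math.sqrt(2))

p_a = two_sided_p(1.97)  # ~0.049: "significant", publishable
p_b = two_sided_p(1.95)  # ~0.051: "failed", off to the file drawer
```

The two p-values differ by less than 0.003, yet the cut-off treats one experiment as a discovery and the other as a null result.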

So Nosek’s team looked beyond statistical significance. They also considered the effect sizes of the studies. These measure the strength of a phenomenon; if your experiment shows that red lights make people angry, the effect size tells you how much angrier they get. And again, the results were worrisome. On average, the effect sizes of the replications were half those of the originals.
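For readers who have not met the term: the standard effect-size measure for a two-group comparison is Cohen’s d, the difference in means expressed in units of pooled standard deviation. A minimal sketch, with made-up “anger rating” numbers for the red-light example (the data are hypothetical, purely for illustration):

```python
import math
import statistics

def cohens_d(group_a, group_b):
    # Standardized mean difference: (mean_a - mean_b) / pooled SD.
    na, nb = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)
    var_b = statistics.variance(group_b)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b)
                          / (na + nb - 2))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical anger ratings under red vs. white light.
red = [6, 7, 5, 8, 6, 7]
white = [5, 6, 4, 6, 5, 6]
d = cohens_d(red, white)  # how much angrier, in pooled-SD units
```

A replication whose d is half the original’s means the standardized difference between groups shrank by half – the phenomenon may still exist, but it is much weaker than first reported.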

“The success rate is lower than I would have thought,” says John Ioannidis from Stanford University, whose classic theoretical paper Why Most Published Research Findings Are False has been a lightning rod for the reproducibility movement. “I feel bad to see that some of my predictions have been validated. I wish they’d been proven wrong.” This is a social statement; a “white lie.”

Nosek, a self-described “congenital optimist,” is less upset. The results aren’t great, but he takes them as a sign that psychologists are leading the way in tackling these problems. “It has been a fantastic experience, all this common energy around a very specific goal,” he says. “The collaborators all contributed their time to the project knowing that they wouldn’t get any credit for being 253rd author.” Another social statement; not about the problem, but “how fun” it was – and how “socially tuned to reward” the participants are.

There are many reasons why two attempts to run the same experiment might produce different results. (Let’s rationalize – soften, undo, explain away – appalling institutional behavior)

Jason Mitchell from Harvard University, who has written critically about the replication movement, agrees. “The work is heroic,” he says. “The sheer number of people involved and the care with which it was carried out is just astonishing. This is an example of science working as it should in being very self-critical and questioning everything, especially its own assumptions, methods, and findings.” (Says nothing concrete: another social statement – ass-kissing)

But even though the project is historic in scope, its results are still hard to interpret. (REALLY?) Let’s say that only a third of studies are replicable. What does that mean? It seems low, but is it? “Science needs to involve taking risks and pushing frontiers, so even an optimal science will generate false positives,” says Sanjay Srivastava, an associate professor of psychology at the University of Oregon. “If 36 percent of replications are getting statistically significant results, it is not at all clear what that number should be.” (That is – IT’S ARBITRARY)

It is similarly hard to interpret failed replications. Consider the paper’s most controversial finding: that studies from cognitive psychology (which looks at attention, memory, learning, and the like) were twice as likely to replicate as those from social psychology (which looks at how people influence each other). “It was, for me, inconvenient,” says Nosek. “It encourages squabbling. Now you’ll get cognitive people saying ‘Social’s a problem’ and social psychologists saying, ‘You jerks!’” (That is, the results must be “socially acceptable” to the “psychology community” – no hurt feelings! Do proper science, and a lot of people are going to be unhappy.)

Nosek explains that the effect sizes from both disciplines declined with replication; it’s just that cognitive experiments find larger effects than social ones to begin with, because social psychologists wrestle with problems that are more sensitive to context. (Especially when the “context” is imaginary, as we see in autism / Asperger studies) “How the eye works is probably very consistent across people but how people react to self-esteem threat will vary a lot,” says Nosek. Cognitive experiments also tend to test the same people under different conditions (a within-subject design) while social experiments tend to compare different people under different conditions (a between-subject design). Again, people vary so much that social-psychology experiments can struggle to find signals amid the noise. (No problem: Just make them up!)

More generally, failed replications don’t discredit the original studies, any more than successful ones enshrine them as truth. There are many reasons why two attempts to run the same experiment might produce different results. There’s random chance. The original might be flawed. So might the replication. There could be subtle differences in the people who volunteered for both experiments, or the way in which those experiments were done. And, to be blunt, the replicating team might simply lack the nous or technical skill to pull off the original experiments.

Indeed, Jason Mitchell wonders how good the Reproducibility Project’s consortium would be at replicating well-known phenomena, like the Stroop effect (people take longer to name the color of a word if it is printed in mismatching ink) or the endowment effect (people place more value on things they own). “Would it be better than 36 percent or worse? We don’t know and that’s the problem,” he says. “We can’t interpret whether 36 percent is good, bad, or right on the money.”

The very notion that there is a “correct” percentage of reproducible studies is so UNSCIENTIFIC that it reveals the lack of science-based activity in psychology: this belief renders the entire field “superstitious.” A study is reproducible or it isn’t: to believe that some number of “reproducible” studies “justifies” what you are doing is utter nonsense.

Mitchell also worries that the kind of researchers who are drawn to this kind of project may be biased towards “disproving” the original findings. How could you tell if they are “unconsciously sabotaging their own replication efforts to bring about the (negative) result they prefer?” he asks. (Another social statement.)

In several ways, according to Nosek. Most of the replicators worked with the scientists behind the original studies, who provided materials, advice, and support—only 3 out of 100 refused to help. (This proves nothing) The teams pre-registered their plans—that is, they decided on every detail of their methods and analyses beforehand to remove the possibility of p-hacking. Nosek also stopped the teams from following vendettas (Wow! There’s a revealing statement of personality and character) by offering them a limited buffet of studies to pick from: only those published in the first issue of three major psychology journals in 2008. Finally, he says that most of the teams that failed to replicate their assigned studies were surprised—even disappointed. “Anecdotally, I observed that as they were assigned to a task, they got invested in their particular effect,” says Nosek. “They got excited. Most of them expected theirs to work out.” (Again – a social statement meant to support the results, but having nothing to do with the actual quality of the work. What are these people, 5-year-olds?)


And yet, they largely didn’t. “This was surprising to most people,” says Nosek. “This doesn’t mean the originals are wrong or false positives. There may be other reasons why they didn’t replicate, but this does mean that we don’t understand those reasons as well as we think we do. We can’t ignore that. We have data that says: We can do better.” (Are you kidding? Denial, denial, denial.)

What does doing better look like? To Dorothy Bishop, a professor of developmental neuropsychology at the University of Oxford, it begins with public pre-registration of research plans. “Simply put, if you are required to specify in advance what your hypothesis is and how you plan to test it, then there is no wiggle room for cherry-picking the most eye-catching results after you have done the study,” she says. (And what if “cheaters” are caught? Do they get sent to time-out?) Psychologists should also make more efforts to run larger studies, which are less likely to throw up spurious results by chance. Geneticists, Bishop says, learned this lesson after many early genetic variants that were linked to human diseases and traits turned out to be phantoms; their solution was to join forces to do large collaborative studies, involving many institutes and huge numbers of volunteers. These steps would reduce the number of false positives that marble the literature.
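Bishop’s point about larger studies can be made concrete with a quick simulation. The sample sizes and the “true” effect below are my choices (synthetic data, not drawn from genetics or any cited study): small samples produce wildly scattered effect estimates, and publication bias then selects the flashy ones.

```python
import random
import statistics

def estimated_effects(n, true_effect=0.2, trials=1000, seed=7):
    # Each trial estimates a small true mean difference (0.2 SD)
    # from two groups of n observations each.
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        treated = [rng.gauss(true_effect, 1) for _ in range(n)]
        control = [rng.gauss(0, 1) for _ in range(n)]
        estimates.append(statistics.mean(treated) - statistics.mean(control))
    return estimates

small_n = estimated_effects(20)    # 20 subjects per group
large_n = estimated_effects(500)   # 500 subjects per group
# The small studies scatter widely around 0.2 - some estimates look
# huge, some come out negative - while the large ones cluster tightly.
```

If journals only publish the small studies that happened to land in the “huge” tail, the literature fills up with inflated effects; pooling many labs into one large study shrinks that scatter directly.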

To help detect the ones that slip through, researchers could describe their methods in more detail, and upload any materials or code to open databases, making it trivially easy for others to check their work. “We also need to be better at amassing the information we already have,” adds Bobbie Spellman from the University of Virginia. Scientists already check each other’s work as part of their daily practice, she says. But much of that effort is invisible to the wider world because journals have been loath to publish the results of replications. (You cuddle my data, I’ll cuddle yours.)

Change is already in the air. “Journals, funders, and scientists are paying a lot more attention to replication, to statistical power, to p-hacking, all of it,” says Srivastava. He notes that the studies that were targeted in the Reproducibility Project all come from a time before these changes. “Has psychology learned and gotten better?” he wonders.

One would hope so. After all, several journals have started to publish the results of pre-registered studies. In a few cases, scientists from many labs have worked together to jointly replicate controversial earlier studies. Meanwhile, Nosek’s own brainchild, the Center for Open Science established in 2013, has been busy developing standards for transparency and openness. It is also channelling $1 million of funding into a pre-registration challenge, where the first 1,000 teams who pre-register and publish their studies will receive $1,000 awards. “It’s to stimulate people to try pre-registration for the first time,” he says. (This is like High School for drop outs – get extra credit for behavior that you ought to have displayed from the start.)

The Center is also working with scientists from other fields, including ecology and computer science, to address their own concerns about reproducibility. Nosek’s colleague Tim Errington, for example, is leading an effort to replicate the results of 50 high-profile cancer biology studies. “I really hope that this isn’t a one-off but a maturing area of research in its own right,” Nosek says.

$$$$$$$$$$$$$$$$

That’s all in the future, though. For now? “I will be having a drink,” he says. (Status quo.)

 

Every Asperger Needs to Read this Paper! / Symptoms of entrapment and captivity

Research that supports my challenge to contemporary (American) psychology that Asperger symptoms are the result of “captivity” and not “defective brains”.

From: Depression Research and Treatment

Depress Res Treat. 2010; 2010: 501782. Published online 2010 Nov 4. doi: 10.1155/2010/501782. PMCID: PMC2989705

Full Article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2989705/

Testing a German Adaption of the Entrapment Scale and Assessing the Relation to Depression

Manuel Trachsel,1,* Tobias Krieger,2 Paul Gilbert,3 and Martin Grosse Holtforth2

Abstract

The construct of entrapment is used in evolutionary theory to explain the etiology of depression. The perception of entrapment can emerge when defeated individuals want to escape but are incapable. Studies have shown relationships of entrapment to depression and suicidal tendencies. The aim of this study was a psychometric evaluation and validation of the Entrapment Scale in German (ES-D). 540 normal subjects completed the ES-D along with other measures of depressive symptoms, hopelessness, and distress. Good reliability and validity of the ES-D were demonstrated. Further, whereas entrapment was originally regarded as a two-dimensional construct, our analyses supported a single-factor model. Entrapment explained variance in depressive symptoms beyond that explained by stress and hopelessness, supporting the relevance of the construct for depression research. These findings are discussed with regard to their theoretical implications as well as the future use of the entrapment scale in clinical research and practice.

Being outnumbered by social humans, 99% to 1%, is de facto defeat and captivity

1. Introduction

Assuming a certain degree of adaptivity of behavior and emotion, evolutionary theorists have suggested various functions of moodiness and depression. Whereas adaptive mechanisms may become functionally maladaptive [1, 2], there have been many attempts to explain potentially adaptive functions of depression. For example, Price [3] suggested that depression evolved from the strategic importance of having a de-escalating or losing strategy. Social rank theory [4, 5] built on this and suggests that some aspects of depression, such as mood and drive variations, may have evolved as mechanisms for regulating behavior in contexts of conflict and competition for resources and mates. Hence, subordinates are sensitive to down-rank threats and are less confident than dominants, while those who are defeated will seek to avoid those who defeated them. Depression may also serve the function of helping individuals disengage from unattainable goals and deal with losses [6].

Social rank theory (e.g., [4]) links defeat states to depression. Drawing on Dixon’s arrested defences model of mood variation [7, 8], this theory suggests that especially when stresses associated with social defeats and social threats arise, individuals are automatically orientated to fight, flight or both. Usually, either of those defensive behaviors will work. So, flight and escape remove the individual from the conditions in which stress is arising (e.g., threats from a dominant), or anger/aggression curtails the threat. These defensive behaviors typically work for nonhuman animals. However, for humans, such basic fight and flight strategies may be less effective facing the relatively novel problems of living in modern societies, perhaps explaining the prevalence of disorders such as depression [8]. Dixon suggested that in depression, defensive behaviors can be highly aroused but also blocked and arrested and in this situation depression ensues. Dixon et al. [8] called this arrested flight. For example, in lizards, being defeated but able to escape has proven to be less problematic than being defeated and being trapped. Those who are in caged conditions, where escape is impossible, are at risk of depression and even death [9]. Gilbert [4, 10] and Gilbert and Allan [5] noted that depressed individuals commonly verbalize strong escape wishes and that feelings of entrapment and desires to escape have also been strongly linked to suicide, according to O’Connor [11]. In addition they may also have strong feelings of anger or resentment that they find difficult to express or become frightening to them. (Or are NOT ALLOWED to express, without being punished) 

Gilbert [4] and Gilbert and Allan [5] proposed that a variety of situations (not just interpersonal conflicts) that produce feelings of defeat or uncontrollable stress, stimulating strong escape desires while making escape impossible, lead the individual to a perception of entrapment. They defined entrapment as a desire to escape from the current situation in combination with the perception that all possibilities to overcome a given situation are blocked. Thus, theoretically, entrapment follows defeat if the individual is not able to escape. This inability may arise because a dominant subject does not offer propitiatory gestures following antagonistic competition, or because the individual keeps being attacked. (Relentless social bullying)

In contrast to the concept of learned helplessness [12], which focuses on perceptions of control, the entrapment model focuses on the outputs of the threat system emanating from areas such as the amygdala [13]. In addition, depressed people are still highly motivated and would like to change their situation or mood state. It was also argued that, unlike helplessness, entrapment takes into account the social forces that lead to depressive symptoms, which is important for group-living species with dominance hierarchies such as human beings [14]. Empirical findings by Holden and Fekken [15] support this assumption. Gilbert [4] argued that the construct of entrapment may explain the etiology of depression better than learned helplessness because, according to the theory of learned helplessness, helpless individuals have already lost their flight motivation, whereas entrapped individuals have not.

According to Gilbert [4], the perception of entrapment can be triggered, increased, and maintained by external factors but also internal processes such as intrusive, unwanted thoughts and ruminations can play an important role (e.g., [16, 17]). For example, ruminating on the sense of defeat or inferiority may act as an internal signal of down-rank attack that makes an individual feel increasingly inferior and defeated. Such rumination may occur despite the fact that an individual successfully escaped from an entrapping external situation because of feelings of failure, which may cause a feeling of internal entrapment. For example, Sturman and Mongrain [18] found that internal entrapment increased following an athletic defeat. Moreover, thoughts and feelings like “internal dominants” in self-critics may exist that can also activate defensive behaviors.

For the empirical assessment of entrapment, Gilbert and Allan [5] developed the self-report Entrapment Scale (ES) and demonstrated its reliability. Using the ES, several studies have shown that the perception of entrapment is strongly related to low mood, anhedonia, and depression [5, 19–21]. Sturman and Mongrain [22] found that entrapment was a significant predictor of recurrence of major depression. Further, Allan and Gilbert [23] found that entrapment relates to increased feelings of anger and to a lower expression of these feelings. In a study by Martin et al. [24], the perception of entrapment was associated with feelings of shame, but not with feelings of guilt. Investigating the temporal connection between depression and entrapment, Goldstein and Willner [25, 26] concluded that the relation between depression and entrapment is equivocal and might be bilateral; that is, entrapment may lead to depression and vice versa.

Entrapment was further used as a construct explaining suicidal tendency. In their cry-of-pain model, Williams and Pollock [27, 28] argued that suicidal behavior should be seen as a cry of pain rather than as a cry for help. Consistent with the concept of arrested flight, they proposed that suicidal behavior is reactive. In their model, the response (the cry) to a situation is supposed to have the following three components: defeat, no escape potential, and no rescue. O’Connor [11] provided empirical support in a case-control study by comparing suicidal patients and matched hospital controls on measures of affect, stress, and posttraumatic stress. The authors hypothesized that the copresence of all three cry-of-pain variables primes an individual for suicidal behavior. The suicidal patients, with respect to a recent stressful event, reported significantly higher levels of defeat, lower levels of escape potential, and lower levels of rescue than the controls. Furthermore, Rasmussen et al. [21] showed that entrapment strongly mediated the relationship between defeat and suicidal ideation in a sample of first-time and repeated self-harming patients. Nevertheless, there has also been some criticism of the concept of entrapment as it is derived from the animal literature [29].

To our knowledge, there are so far no data on the retest reliability or temporal stability of the Entrapment Scale. Because entrapment is seen as a state-like rather than a trait-like construct, its stability likely depends on the stability of its causes. (Remove the social terrorism, or remove yourself) Therefore, if the causes of entrapment are stable (e.g., a long-lasting abusive relationship), then entrapment will also remain stable over time. In contrast, for the Beck Hopelessness Scale (BHS), studies assessing temporal stability have yielded stable, trait-like components of hopelessness [30]. Young and coworkers [30] stated that the high stability of hopelessness is a crucial predictor of depressive relapses and suicide attempts. For the Perceived Stress Questionnaire (PSQ), there are studies examining retest reliability: the PSQ has shown high retest reliability over 13 days (r = .80) in a Spanish sample [31]. With longer retest intervals, as in the present study (3 months), the stability of perceived stress can be expected to be substantially lower. We therefore expect the stability of entrapment to be higher than that of perceived stress as a state-like construct, but lower than that of hopelessness, which has been shown to be more trait-like [32].

Previous research is equivocal regarding the dimensionality of the entrapment construct. Internal and external entrapment were originally conceived as two separate constructs (cf. [5]) and were widely assessed using two subscales measuring entrapment caused by situations and other people (e.g., “I feel trapped by other people”) or by one’s own limitations (e.g., “I want to get away from myself”). In many studies, the scores of the two subscales were averaged to yield a total entrapment score. However, as Taylor et al. [33] have shown, entrapment may be best conceptualized as a unidimensional construct. This reasoning is supported by the observation that some items of the ES cannot easily be classified as either internal or external entrapment, so the corresponding subscales lack face validity (e.g., “I am in a situation I feel trapped in” or “I can see no way out of my current situation”).

5. Discussion

The entrapment construct embeds depressiveness theoretically in an evolutionary context. The situation of arrested flight, or blocked escape, in which a defeated individual is incapable of escaping despite a maintained motivation to escape, may lead to the perception of entrapment in affected individuals [8]. In this study, the Entrapment Scale (ES) was translated into German (ES-D), tested psychometrically, and validated by associations with other measures. This study provides evidence that the ES-D is a reliable self-report measure of entrapment, demonstrating high internal consistency. The study also shows that the ES-D is a valid measure that relates to similar constructs such as hopelessness, depressive symptoms, and perceived stress. Levels of entrapment as measured with the ES-D were associated with depressiveness, perceived stress, and hopelessness, showing moderate to high correlations. Results were consistent with those obtained by Gilbert and Allan [5]. Entrapment explained additional variance in depressiveness beyond that explained by stress and hopelessness. Taken together, the present data support the conception of entrapment as a relevant and distinct construct in the explanation of depression. (And much of Asperger behavior)
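The “high internal consistency” the authors report is conventionally estimated with Cronbach’s alpha. Here is a minimal sketch of that calculation (my own illustration with invented toy scores, not code or data from the paper):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of per-item score lists.

    items[i][j] is the score of respondent j on questionnaire item i.
    Alpha rises toward 1.0 as the items covary, i.e., as they appear
    to measure a single underlying construct.
    """
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Toy data: two items that track each other closely -> high alpha
item_a = [1, 2, 3, 4]
item_b = [2, 4, 6, 8]
print(round(cronbach_alpha([item_a, item_b]), 3))  # → 0.889
```

Alpha near 1.0 means the items largely rise and fall together, which is also consistent with the single-factor structure the study reports.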

The results of our study confirm the findings of Taylor et al. [33], showing that entrapment is only theoretically, not empirically, separable into internal and external sources. The authors went even further by showing that entrapment and defeat could represent a single construct. Although the defeat scale [5] was not included in this study, the results are in line with the assumption of Taylor et al. [33] and support other studies using entrapment a priori as a single construct. However, although this study supports the general idea that escape motivation affects both internal and external events and depression, clinically it can be very important to distinguish between them. For example, in studies of psychosis, entrapment can be very focused on internal stimuli, particularly voices [47].

The state conceptualization of entrapment implies that the perception of entrapment may change over time. Therefore, we did not expect retest correlations as high as retest correlations for more trait-like constructs like hopelessness [32]. Since the correlation over time is generally a function of both the reliability of the measure and the stability of the construct, high reliability is a necessary condition for high stability [48]. In this study, we showed that the ES-D is a reliable scale, and we considered retest correlations as an indicator for stability. The intraclass correlation of .67 suggests that entrapment is more sensitive to change than hopelessness (r = .82). Furthermore, the state of entrapment seems to be more stable than perceived stress, which may be influenced to a greater extent by external factors. Given the confirmed reliability and validity of the ES-D in this study, we therefore cautiously conclude that entrapment lies between hopelessness and perceived stress regarding stability.
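The stability ordering argued here rests on retest correlations (r = .82 for hopelessness, an intraclass correlation of .67 for entrapment, lower again for perceived stress). A minimal sketch of a retest (Pearson) correlation, using invented scores rather than the study’s data:

```python
def pearson_r(time1, time2):
    """Pearson correlation between scores at two measurement occasions.

    Values near 1.0 indicate a stable, trait-like construct; lower values
    indicate a state-like construct whose scores shift between occasions.
    """
    n = len(time1)
    mx = sum(time1) / n
    my = sum(time2) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(time1, time2))
    sx = sum((x - mx) ** 2 for x in time1) ** 0.5
    sy = sum((y - my) ** 2 for y in time2) ** 0.5
    return cov / (sx * sy)

# Invented scores for five respondents at baseline and three months later
baseline = [10, 14, 9, 18, 12]
followup = [11, 13, 10, 17, 14]
print(round(pearson_r(baseline, followup), 2))  # → 0.94
```

The paper itself reports an intraclass correlation for entrapment, which additionally penalizes systematic mean shifts between occasions; the Pearson version shown here illustrates the same underlying idea of score agreement over time.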

Whereas the high correlation between entrapment and depressive symptoms in this study may be interpreted as evidence of conceptual equivalence, an examination of the item wordings of the two scales clearly suggests that these questionnaires assess distinct constructs. However, the causal direction of this bivariate relation is not clear. Theoretically, both directions are plausible: entrapment may be a cause or a consequence of depressive symptoms, or even both. Unfortunately, the studies examining temporal precedence so far have yielded equivocal results and have methodological shortcomings (e.g., no clinical samples; only mild and transitory depression and entrapment scores, induced with musical mood induction) that prevent answering this question conclusively [25, 26]. It also remains unclear whether entrapment is specific to depression. Entrapment might be associated not only with depression but also with other psychological symptoms, or even psychopathology in general. This interpretation is supported by research showing a relation between entrapment and distress arising from voices in psychotic patients [49, 50]. Furthermore, other studies show relations between entrapment and depressive symptoms [51–53] and social anxiety and shame [54] in psychosis. The usefulness of entrapment as a construct for explaining psychopathologies in humans has been questioned [29]. With the present study, it is now possible to investigate entrapment in psychopathology in the German-speaking area.

Modern social humans and the social hierarchy: Driving Asperger types crazy for thousands of years!

 

Self Awareness / OMG What a Hornet’s Nest

What made me awaken this morning with the question of self awareness dancing in my head? It’s both a personal and social question and quest, and so almost impossible to think about objectively. And like so many “word concepts” there is no agreed-upon definition or meaning to actually talk about, unless it’s among religionists of certain beliefs, philosophical schools of knowledge, or neurologists hunched over their arrays of brain tissue, peering like haruspices over a pile of pink meat.

My own prejudices lean toward two basic underpinnings of self-awareness:

1. It is not a “thing” but an experience.

2. Self awareness (beyond “Look! It’s me in the mirror…”) is learned, earned, created, achieved.

From a previous post –

Co-consciousness, the product of language: “In Western cultures verbal language is inseparable from the process of creating a conscious human being.

A child is told who it is, where it belongs, and how to behave, day in and day out, from birth throughout childhood. In this way culturally-approved patterns of thought and behavior are implanted, organized and strengthened in the child’s brain. 

Social education means setting tasks that require following directions, and asking children to ‘correctly’ answer with words and behavior, to prove that co-consciousness is in place.

This is one of the great challenges of human development, and children who do not ‘pay attention’ to adult demands, however deftly sugar-coated, are rejected as defective, defiant, and diseased.

Punishment for having early self awareness may be physical or emotional brutality or abandonment and exile from the group.”

Who am I? is a question that most children ask sooner or later – prompted, obviously, by questions from adults (no child is born thinking about this) such as “What do you want to be when you grow up?” (Not: Who are you now?) The socially acceptable menu is small. For boys: “A famous sports star.” For girls: “A wonderful mom and career woman who looks 16 years old, forever.”

How boring and unrealistic. How life and joy killing. Adults mustn’t let children in on the truth, which is even worse. We know at this point that a child can look in a mirror and say, “That’s me! I hate my haircut,” but he or she is entirely unaware that someday firing rockets into mud brick houses, thereby blowing human bodies to smithereens, may be their passion. Or she may be a single mom with three kids, totally unprepared for an adequate job. Or perhaps he or she may end up addicted to pills and rage and stuffing paper bags with French fries eight hours a day.

If a child were to utter these reasonably probabilistic goals, he or she would be labeled as disturbed and possibly dangerous. And yet human children grow up to be less than ideal, and many dreadful outcomes occur; but these are the result of the individual colliding with societal fantasies and promises that are not likely outcomes at all.

The strangest part of this is that we talk about self awareness as a “thing” tucked into a hidden space, deep within us, but it isn’t. It is a running score on a test that starts the moment we are born: the test questions are life’s demands, both from the environment into which we are born and from the culture of family, school, work, and citizenship. The tragedy is that few caregivers bother to find out enough about a child to guide them toward a healthy and happy self-awareness. This requires observing and accepting the child’s native gifts and personality, AND helping them to manage their difficulties. This is not the same as curing them of being different, or inflicting lifelong scars by abandoning them, or training them diligently so that, like parrots, they can mimic conformist behavior and speech.

Self awareness comes as we live our lives: self-esteem is connected to that process, not as a “before” thing but an “after” thing: a result of meeting life as it really is, not as a social fantasy. Self awareness is built from the talents and strengths that we didn’t know we possessed. It also arises as we watch the “world’s” pretensions crumble before us. Being able to see one’s existence cast against the immensity of reality, and yet to feel secure, is the measure of finally giving birth to a “self.”

 

 

 

I’m satisfied that loving the land is my talent and that this is not a small thing, when there are so many human beings who don’t.