Intuition / Unconscious Processing

The term “intuition,” like “consciousness,” is used every day, but with little concern for what it actually refers to.

These graphics illustrate the popular conception that places intuition in the heart and sets up an adversarial relationship with the brain as the ‘rational’ organ that contains all that icky stuff like logic, math, analysis, facts, science, and reason. A ridiculous war between emotion and reason persists in popular culture; it’s an archaic notion. This artificial conflict has been applied in pop science to the supposedly logical male brain vs. the supposedly intuitive female brain, which supposedly arise from the brain being chopped into right and left halves at the corpus callosum. If individuals are truly “left brained” or “right brained,” then why is every healthy infant born with a whole brain? In evolution, you lose what you don’t use, especially in the brain, which is an energy monster: roughly 20% of a human’s daily energy supply goes to keeping the brain alive and working. This type of black-or-white thinking is how modern social typicals perceive the world: opposition and competition trump interaction and cooperation.


A gruesome reminder of the place of the heart in sacrifice and cannibalism. Eating the heart of the victim transferred its magical properties to the consumer.


From The Heart of the Matter by Robert K. Blechman: “What appeared to classical societies as a characteristic of human anatomy has slowly, over the ages, become a metaphor. No one today believes that the seat of consciousness can be found in the heart or lungs and yet the references persist. In primary orality people assigned different aspects of cognition to different organs. We are content in most cases to let the heart represent all feeling or emotion. When placed in opposition to the head or intellect, the heart prevails in modern cultural references as the deeper source of wisdom and the more reliable arbiter of reality.”

Misconceptions and superstitions about human anatomy, the functions of the brain, the electromagnetic spectrum, and scientific principles in general continue to spread like a medieval pox throughout the internet, couched in faux-scientific text. This is a favorite activity of New and Old Age religionists.

What then, is intuition? It is simply brain activity that takes place without words. 

That level thing again!


Here is a good start: intuition is a type of thinking that we share with animals (we are animals) and which predates verbal language. It is the “unconscious” manipulation of information. Consciousness (see previous posts) is created by verbal language. Therefore the “unconscious” or “subconscious” is not a dungeon in the bottom of the brain, a basement or lower level, a Hell that contains bad things. It is simply brain activity and processing that takes place without words. The conscious and unconscious states are not cut off from each other: they communicate via electrical activity. Intuition is viewed as more trustworthy than conscious thinking because it is FACT-BASED (facts aren’t limited to expression in words) and draws on PATTERNS tested by eons of natural selection – instinct.

 


Success Story / Asperger Females Disguised as Neurotypicals

From the Daily Mail: 

“They are successful career women in loving relationships – and they all live with an affliction that will surprise you!”

by Jill Foster for the Daily Mail, August 2012

1960s Pre-Asperger Disorder / Meds for Social Anxiety

An advertisement from the 1960s promotes the drug Serentil as a solution for the anxiety that comes from not fitting in – for the type of patient who “responds with excessive anxiety” and “can’t make friends.” What used to be a common social fact – awkwardness on the part of some individuals when among groups of people – becomes pathological and must be treated with drugs. Anxiety in a person “indicates” a disordered personality. By 1994, “social pathology” becomes basic to Autism and Asperger’s.

No mention is made that Serentil is a dangerous anti-psychotic. See warning below.

Note the petty social crimes that are judged to be pathological behavior!


WARNING

SERENTIL® (MESORIDAZINE BESYLATE) HAS BEEN SHOWN TO PROLONG THE QTc INTERVAL IN A DOSE RELATED MANNER, AND DRUGS WITH THIS POTENTIAL, INCLUDING SERENTIL, HAVE BEEN ASSOCIATED WITH TORSADE DE POINTES-TYPE ARRHYTHMIAS AND SUDDEN DEATH. DUE TO ITS POTENTIAL FOR SIGNIFICANT, POSSIBLY LIFE-THREATENING, PROARRHYTHMIC EFFECTS, SERENTIL SHOULD BE RESERVED FOR USE IN THE TREATMENT OF SCHIZOPHRENIC PATIENTS WHO FAIL TO SHOW AN ACCEPTABLE RESPONSE TO ADEQUATE COURSES OF TREATMENT WITH OTHER ANTIPSYCHOTIC DRUGS, EITHER BECAUSE OF INSUFFICIENT EFFECTIVENESS OR THE INABILITY TO ACHIEVE AN EFFECTIVE DOSE DUE TO INTOLERABLE ADVERSE EFFECTS FROM THOSE DRUGS. (SEE WARNINGS, CONTRAINDICATIONS, AND INDICATIONS.)

U.S. needs 30,000 more child psychiatrists – to prescribe drugs

Teachers are already being forced to provide services well beyond their education and experience.


How to utilize the American Education system to funnel children into psychiatric care: psychiatrists have become “drug dealers” for the Big Pharma Cartel – it makes perfect business sense to turn schools into “profit centers” for the drug industry.

US Mental Health Services Ranked by State. Nancy A. Melville, Medscape, Jan 13, 2015.

Medscape is a site catering to doctors. The study summarized on Medscape was funded by pharmaceutical manufacturers: “The researchers received support from Eli Lilly and Company, Genentech, Otsuka America Pharmaceutical Inc, Sunovion Pharmaceuticals Inc, Takeda Pharmaceuticals USA, Inc, and Lundbeck US.”

Link to full text at end of excerpt.

Psychiatrist Shortage

A glimpse of figures from Massachusetts, a state that already had a health system similar to that established by the ACA (Obamacare), suggests that even with widespread insurance coverage, access to mental health treatment can still be a challenge.

According to the report, although only 1% of adults with any mental illness were uninsured in Massachusetts, an estimated 20.4% reported having an unmet need.

Among key barriers to mental health care across the country has been a significant shortage of psychiatrists. According to the report, there is only one mental health care provider for every 790 individuals.

And an even greater need is anticipated under the ACA, said Renee Binder, MD, president-elect of the American Psychiatric Association (APA) and a professor in the Department of Psychiatry at the University of California, San Francisco.

“The country will need an additional 30,000 child psychiatrists to meet the needs generated by the expanded coverage under the ACA, and we currently only have 8000, so we need to triple the number of child psychiatrists,” she told Medscape Medical News.

The APA has been proactive in addressing the shortage through several avenues, including pushing for integrative programs to extend the reach of psychiatrists throughout the healthcare system and the community, she noted.

“The APA has been promoting new systems of collaborative care, with psychiatrists working together with other specialties, including primary care providers and pediatricians,” she said.

Aiming for Early Intervention

Efforts have also included expanding the educational system, with a program offered through the American Psychiatric Foundation called “Typical or Troubled?”, designed to help teachers identify behaviors that warrant intervention.

“Early intervention is very important in addressing mental health issues,” Dr Binder said.

“Studies have shown that identifying people with potential problems early on can be very essential in improving outcomes, and since teachers can be the best ones to identify problems early, it makes sense to educate them on what to look for,” she added.

Gionfriddo (researcher) echoed the sentiment regarding the need to address mental illness as early as possible. In the report, he noted his own personal experience with a son with schizophrenia and how he could have benefited from the MHA’s emphasis on an early intervention approach dubbed “B4Stage4.”

“We have to stop waiting until mental illnesses reach Stage 4 to treat them,” he writes.

“By Stage 4, problems are so far advanced that even with the best treatments available, recovery is often compromised.” Gionfriddo underscored the fact that half of all mental health concerns manifest by age 14 years.

“[We need to] give children the support they need to stay and succeed in school and young adults the support they need to live and work independently,” he said. “And we need to change our thinking from crisis intervention to support and recovery.”

“The best-ranked states remind us that recovery is not only possible, it’s to be expected when intervention comes early.”

Parity or Disparity: The State of Mental Health in America 2015. Full text

___________________________________________________________________________________________

Once again we see the failure to predict the consequences of legislation such as Obamacare: access to insurance does not improve the quality of mental health care. Evaluation of all children will devolve onto teachers and school systems that are already tasked with social engineering goals that push aside academics, and that are increasingly overwhelmed as the fallback institutions expected to carry out everything but teaching. Nor will teachers, who will likely have little more education than a seminar or two on mental health, be able in any way to take over the work of evaluating so-called childhood “pathologies”.

Undiscovered Photos / Asperger Engineer Dad

That box of family photos that one has gone through again and again? Today I found college photos of my Dad: University of Dayton, Ohio.

 

 

Psychologist says that Neurotypicals are Irrational / This is News?

 

BOOK: “THINKING, FAST AND SLOW”

By Daniel Kahneman, 499 pp. Farrar, Straus & Giroux.

Review clipped a bit for length and blah, blah, blah. 

_______________________________________________________

NEW YORK TIMES Sunday Book Review 

By Jim Holt, NOV. 25, 2011

In 2002, Daniel Kahneman won the Nobel in economic science. What made this unusual is that Kahneman is a psychologist. Specifically, he is one-half of a pair of psychologists who, beginning in the early 1970s, set out to dismantle an entity long dear to economic theorists: that arch-rational decision maker known as Homo economicus. The other half of the dismantling duo, Amos Tversky, died in 1996 at the age of 59.

Human irrationality is Kahneman’s great theme.

There are essentially three phases to his career. In the first, he and Tversky did a series of ingenious experiments that revealed twenty or so “cognitive biases” — unconscious errors of reasoning that distort our (neurotypical) judgment of the world. Typical of these is the “anchoring effect”: our tendency to be influenced by irrelevant numbers that we happen to be exposed to. In the second phase, Kahneman and Tversky showed that people making decisions under uncertain conditions do not behave in the way that economic models have traditionally assumed; they do not “maximize utility.” The two then developed an alternative account of decision making, one more faithful to human psychology, which they called “prospect theory.” In the third phase of his career, mainly after the death of Tversky, Kahneman has delved into “hedonic psychology”: the science of happiness, its nature and its causes.

“Thinking, Fast and Slow” spans all three of these phases. It is an astonishingly rich book: lucid, profound, full of intellectual surprises and self-help value.  So impressive is its vision of flawed human reason that the New York Times columnist David Brooks recently declared that Kahneman and Tversky’s work “will be remembered hundreds of years from now,” and that it is “a crucial pivot point in the way we see ourselves.” They are, Brooks said, “like the Lewis and Clark of the mind.” (Not really; NTs will reject the notion that they aren’t rational, because they truly are irrational.)

A leitmotif of this book is overconfidence. All of us, and especially experts, are prone to an exaggerated sense of how well we understand the world — so Kahneman reminds us. Despite all the cognitive biases, fallacies and illusions that he and Tversky (along with other researchers) purport to have discovered in the last few decades, he fights shy of the bold claim that humans are fundamentally irrational. (NT denial.)

Or does he? “Most of us are healthy most of the time, and most of our judgments and actions are appropriate most of the time,” Kahneman writes in his introduction. Yet, just a few pages later, he observes that the work he did with Tversky “challenged” the idea, orthodox among social scientists in the 1970s, that “people are generally rational.”

(They could have saved a lot of time and effort, if they had simply asked an Asperger or two, “Are ‘normal’ humans rational?”)

The two psychologists discovered “systematic errors in the thinking of normal people”: errors arising not from the corrupting effects of emotion, but built into our evolved cognitive machinery. (Neoteny?)

… frowning — activates the skeptic within us: what Kahneman calls “System 2.” Just putting on a frown, experiments show, works to reduce overconfidence; it causes us to be more analytical, more vigilant in our thinking; to question stories that we would otherwise unreflectively accept as true because they are facile and coherent. (A clue to Aspie “non-conforming” facial expressions…?)

System 2, in Kahneman’s scheme, is our slow, deliberate, analytical and consciously effortful mode of reasoning about the world. System 1, by contrast, is our fast, automatic, intuitive and largely unconscious mode. 

Warning: gobbledygook ahead: More generally, System 1 (neurotypicals) uses association and metaphor to produce a quick and dirty draft of reality, which System 2 draws on to arrive at explicit beliefs and reasoned choices. (This assumes that “normal” people have an active “System 2.”) So System 2 would seem to be the boss, right? (Hierarchical thinking again!) But System 2, in addition to being more deliberate and rational, is also lazy. And it tires easily. (The vogue term for this is “ego depletion.”) Too often, instead of slowing things down and analyzing them, System 2 is content to accept the easy but unreliable story about the world that System 1 feeds to it. (Which might indicate that rational analysis is a skill that is simply missing or under-developed in NTs.)

“Although System 2 believes itself to be where the action is,” Kahneman writes, “the automatic System 1 is the hero of this book.” System 2 is especially quiescent, it seems, when your mood is a happy one. (So make people happy before you lie to them.)

At this point, the skeptical reader might wonder how seriously to take all this talk of System 1 and System 2. Are they actually a pair of little agents in our head, each with its distinctive personality? Not really, says Kahneman. Rather, they are “useful fictions” — useful (for NTs) because they help explain the quirks of the human mind.

To see how, consider what Kahneman calls the “best-known and most controversial” of the experiments he and Tversky did together: “the Linda problem.” Participants in the experiment were told about an imaginary young woman named Linda, who is single, outspoken and very bright, and who, as a student, was deeply concerned with issues of discrimination and social justice. The participants were then asked which was more probable: (1) Linda is a bank teller. Or (2) Linda is a bank teller and is active in the feminist movement. The overwhelming response was that (2) was more probable; in other words, that given the background information furnished, “feminist bank teller” was more likely than “bank teller.” This is, of course, a blatant violation of the laws of probability. (Every feminist bank teller is a bank teller; adding a detail can only lower the probability.) Yet even among students in Stanford’s Graduate School of Business, who had extensive training in probability, 85 percent flunked the Linda problem. One student, informed that she had committed an elementary logical blunder, responded, “I thought you just asked for my opinion.”
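The underlying logic is mechanical enough to check in a few lines. A minimal sketch, using made-up probabilities rather than anything from the experiment: whatever probability you assign to “Linda is a bank teller,” the conjunction “bank teller and feminist” can never exceed it.

```python
# Conjunction rule: P(A and B) <= P(A), no matter how "representative" B seems.
# The numbers below are invented for illustration; they are not from the study.

p_teller = 0.05                   # P(Linda is a bank teller)
p_feminist_given_teller = 0.95    # even if nearly every such teller were a feminist...

p_teller_and_feminist = p_teller * p_feminist_given_teller  # = 0.0475

assert p_teller_and_feminist <= p_teller
print(f"P(teller)              = {p_teller:.4f}")
print(f"P(teller and feminist) = {p_teller_and_feminist:.4f}  (always <= P(teller))")
```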

Kahneman describes dozens of such experimentally demonstrated breakdowns in rationality — “base-rate neglect,” “availability cascade,” “the illusion of validity” and so on. The cumulative effect is to make the reader despair for human reason.

(Blah, blah, blah excuses inserted here, in order to prop up the irrational belief that NTs are rational.) Are we really so hopeless? (An Asperger says YES!) Think again of the Linda problem. Even the great evolutionary biologist Stephen Jay Gould was troubled by it. As an expert in probability he knew the right answer, yet he wrote that “a little homunculus in my head continues to jump up and down, shouting at me — ‘But she can’t just be a bank teller; read the description.’ ” It was Gould’s System 1, Kahneman assures us, that kept shouting the wrong answer at him. But perhaps something more subtle is going on. Our everyday conversation takes place against a rich background of unstated expectations (i.e., social nonsense) — what linguists call “implicatures.” Such implicatures can seep into psychological experiments. Given the expectations that facilitate our conversation, it may have been quite reasonable for the participants in the experiment to take “Linda is a bank teller” to imply that she was not in addition a feminist. If so, their answers weren’t really fallacious.

Note: It is possible that NTs, when in hyper-social environments, lose whatever “rational” processes they have; in “private” they may be more reason-based.

(Blah, blah, blah excuses inserted here, in order to prop up the irrational belief that NTs are rational.) This might seem a minor point. But it applies to several of the biases that Kahneman and Tversky, along with other investigators, purport to have discovered in formal experiments. In more natural settings — when we are detecting cheaters rather than solving logic puzzles; when we are reasoning about things rather than symbols; when we are assessing raw numbers rather than percentages — people are far less likely to make the same errors. So, at least, much subsequent research suggests. Maybe we are not so irrational after all.

Some cognitive biases, of course, are flagrantly exhibited even in the most natural of settings. Take what Kahneman calls the “planning fallacy”: our tendency to overestimate benefits and underestimate costs, and hence foolishly to take on risky projects. In 2002, Americans remodeling their kitchens, for example, expected the job to cost $18,658 on average, but they ended up paying $38,769.
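The size of that miss is easy to check from the figures quoted above; a quick sketch:

```python
# Kitchen-remodel planning fallacy, using the averages quoted above.
expected = 18_658   # average expected cost (USD)
actual = 38_769     # average actual cost (USD)

overrun = actual - expected
print(f"Overrun: ${overrun:,} ({overrun / expected:.0%} over the estimate)")
# Overrun: $20,111 (108% over the estimate)
```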

The planning fallacy is “only one of the manifestations of a pervasive optimistic bias,” Kahneman writes, which “may well be the most significant of the cognitive biases.” Now, in one sense, a bias toward optimism is obviously bad, since it generates false beliefs — like the belief that we are in control, and not the playthings of luck. But without this “illusion of control,” would we even be able to get out of bed in the morning? Optimists are more psychologically resilient, have stronger immune systems, and live longer on average than their more reality-based counterparts. (not a scientifically provable claim) Moreover, as Kahneman notes, exaggerated optimism serves to protect both individuals and organizations from the paralyzing effects of another bias, “loss aversion”: our tendency to fear losses more than we value gains. (How social; how irrational!!!)

“OPTIMISTS are COWARDS” Oswald Spengler

Even if we could rid ourselves of the biases and illusions identified in this book — and Kahneman, citing his own lack of progress in overcoming them, doubts that we can — it is by no means clear that this would make our lives go better. And that raises a fundamental question:

What is the point of rationality? (Only an irrational neurotypical would ask this question!) 

We are, after all, Darwinian survivors. (Really? Blah, blah, blah.) Our everyday reasoning abilities have evolved to cope efficiently with a complex and dynamic environment. They are thus likely to be adaptive in this environment, even if they can be tripped up in the psychologist’s somewhat artificial experiments. (Nothing like trashing the field of psychology in order to prop up some faint hope that NTs are rational!) Where do the norms of rationality come from, if they are not an idealization of the way humans actually reason in their ordinary lives? As a species, we can no more be pervasively biased in our judgments than we can be pervasively ungrammatical in our use of language — or so critics of research like Kahneman and Tversky’s contend. (Aye, yai, yai!) 

Note: Rationalizing should never be mistaken for “being rational.” 

Kahneman never grapples philosophically with the nature of rationality. He does, however, supply a fascinating account of what might be taken to be its goal: happiness. What does it mean to be happy? When Kahneman first took up this question, in the mid 1990s, most happiness research relied on asking people how satisfied they were with their life on the whole. But such retrospective assessments depend on memory, which is notoriously unreliable. What if, instead, a person’s actual experience of pleasure or pain could be sampled from moment to moment, and then summed up over time? Kahneman calls this “experienced” well-being, as opposed to the “remembered” well-being that researchers had relied upon. And he found that these two measures of happiness diverge in surprising ways. What makes the “experiencing self” happy is not the same as what makes the “remembering self” happy. In particular, the remembering self does not care about duration — how long a pleasant or unpleasant experience lasts. Rather, it retrospectively rates an experience by the peak level of pain or pleasure in the course of the experience, and by the way the experience ends.

(Colonoscopy “experiment” inserted here.) Two groups of patients were to undergo painful colonoscopies. The patients in Group A got the normal procedure. So did the patients in Group B, except — without their being told — a few extra minutes of mild discomfort were added after the end of the examination. Which group suffered more? Well, Group B endured all the pain that Group A did, and then some. But since the prolonging of Group B’s colonoscopies meant that the procedure ended less painfully, the patients in this group retrospectively minded it less. (In an earlier research paper though not in this book, Kahneman suggested that the extra discomfort Group B was subjected to in the experiment might be ethically justified if it increased their willingness to come back for a follow-up!)
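A minimal sketch of the peak-end idea, using invented pain traces rather than the study’s data, and the common operationalization of the remembered rating as the average of the worst moment and the final moment: Group B lives through more total discomfort but remembers the procedure as milder, because it ends gently.

```python
# Hypothetical pain ratings sampled over time (0 = none, 10 = worst); not real data.
group_a = [2, 4, 8, 7, 8]           # standard procedure, ends at a high level
group_b = [2, 4, 8, 7, 8, 3, 2, 1]  # same procedure plus a mild, drawn-out ending

def experienced(trace):
    """Total pain actually lived through (duration matters)."""
    return sum(trace)

def peak_end(trace):
    """Simple peak-end score: average of the worst moment and the final moment."""
    return (max(trace) + trace[-1]) / 2

for name, trace in [("A", group_a), ("B", group_b)]:
    print(f"Group {name}: experienced = {experienced(trace)}, remembered ~ {peak_end(trace):.1f}")
# Group A: experienced = 29, remembered ~ 8.0
# Group B: experienced = 35, remembered ~ 4.5
```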

As with colonoscopies, so too with life. (Yikes!) It is the remembering self that calls the shots, not the experiencing self. Kahneman cites research showing, for example, that a college student’s decision whether or not to repeat a spring-break vacation is determined by the peak-end rule applied to the previous vacation, not by how fun (or miserable) it actually was moment by moment. The remembering self exercises a sort of “tyranny” over the voiceless experiencing self. “Odd as it may seem,” Kahneman writes, “I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.”

(The opposite may be true of Asperger people. Experience “now” is my default mode. See posts related to time.)

Clearly, much remains to be done in hedonic psychology. But Kahneman’s conceptual innovations have laid the foundation for many of the empirical (anecdotal) findings he reports in this book: that while French mothers spend less time with their children than American mothers, they enjoy it more; that headaches are hedonically harder on the poor; that women who live alone seem to enjoy the same level of well-being as women who live with a mate; and that a household income of about $75,000 in high-cost areas of the country is sufficient to maximize happiness. Policy makers interested in lowering the misery index of society will find much to ponder here. (As if policy makers with this intent actually exist!)

(Social blah, blah, blah here)

And just to make sure that all this blah, blah, blah is perfectly understandable to Neurotypicals, convert it into a Black / White system of polar opposites that is totally irrational. Be sure to pile on irrelevant anecdotal fragments with no correlation to each other, or to the concept.

 

 

 

50 Inaccurate PSYCH Words Exposed / Frontiers in Psychology

AT LAST! Honest talk about sloppy, self-serving and misleading claims.  Of particular significance to anyone diagnosed with Autism or Asperger’s.

_______________________________________________________________________________________

Frontiers in Psychology / 03 August 2015

The rest of the 50 terms can be read here: http://journal.frontiersin.org/article/10.3389/fpsyg.2015.01100/full/

Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases

  • Department of Psychology, Emory University, Atlanta, GA, USA
  • Department of Psychology, Georgia State University, Atlanta, GA, USA
  • Binghamton University – State University of New York, Binghamton, NY, USA
  • Department of Psychology, Sacred Heart College, Fairfield, CT, USA

“The goal of this article is to promote clear thinking and clear writing among students and teachers of psychological science by curbing terminological misinformation and confusion. To this end, we present a provisional list of 50 commonly used terms in psychology, psychiatry, and allied fields that should be avoided, or at most used sparingly and with explicit caveats. We provide corrective information for students, instructors, and researchers regarding these terms, which we organize for expository purposes into five categories: inaccurate or misleading terms, frequently misused terms, ambiguous terms, oxymorons, and pleonasms. For each term, we (a) explain why it is problematic, (b) delineate one or more examples of its misuse, and (c) when pertinent, offer recommendations for preferable terms. By being more judicious in their use of terminology, psychologists and psychiatrists can foster clearer thinking in their students and the field at large regarding mental phenomena.”

“If names be not correct, language is not in accordance with the truth of things.” (Confucius, The Analects)

(3) Autism epidemic. Enormous effort has been expended to uncover the sources of the “autism epidemic” (e.g., King, 2011), the supposed massive increase in the incidence and prevalence of autism, now termed autism spectrum disorder, over the past 25 years. The causal factors posited to be implicated in this “epidemic” have included vaccines, television viewing, dietary allergies, antibiotics, and viruses.

Nevertheless, there is meager evidence that this purported epidemic reflects a genuine increase in the rates of autism per se as opposed to an increase in autism diagnoses stemming from several biases and artifacts, including heightened societal awareness of the features of autism (“detection bias”), growing incentives for school districts to report autism diagnoses, and a lowering of the diagnostic thresholds for autism across successive editions of the Diagnostic and Statistical Manual of Mental Disorders (Gernsbacher et al., 2005; Lilienfeld and Arkowitz, 2007). Indeed, data indicate that when the diagnostic criteria for autism were held constant, the rates of this disorder remained essentially constant between 1990 and 2010 (Baxter et al., 2015). If the rates of autism are increasing, the increase would appear to be slight at best, hardly justifying the widespread claim of an “epidemic.”
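The detection-bias argument is easy to illustrate with a toy simulation (all numbers invented): hold the population and its trait distribution constant, lower the diagnostic cutoff across successive “editions” of the criteria, and diagnosis counts climb even though nothing in the population has changed.

```python
# Toy illustration: diagnosis counts can rise while true prevalence stays flat.
# Every number here is invented; this is not epidemiological data.
import random

random.seed(0)
population = [random.gauss(0, 1) for _ in range(100_000)]  # fixed "trait severity" scores

# Progressively lower cutoffs, standing in for looser criteria over time.
thresholds = {"edition 1": 2.5, "edition 2": 2.0, "edition 3": 1.5}

for edition, cutoff in thresholds.items():
    diagnosed = sum(score > cutoff for score in population)
    print(f"{edition}: cutoff {cutoff} -> {diagnosed} diagnoses per 100,000")
# The population never changes; only the cutoff does.
```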

(4) Brain region X lights up. Many authors in the popular and academic literatures use such phrases as “brain area X lit up following manipulation Y” (e.g., Morin, 2011). This phrase is unfortunate for several reasons. First, the bright red and orange colors seen on functional brain imaging scans are superimposed by researchers to reflect regions of higher brain activation.

They are NOT a product of the scan, but ADDED as graphic emphasis. (i.e., technically “fake”)

Nevertheless, they may engender a perception of “illumination” in viewers. Second, the activations represented by these colors do not reflect neural activity per se; they reflect oxygen uptake by neurons and are at best indirect proxies of brain activity. Even then, this linkage may sometimes be unclear or perhaps absent (Ekstrom, 2010). Third, in almost all cases, the activations observed on brain scans are the products of subtraction of one experimental condition from another. Hence, they typically do not reflect the raw levels of neural activation in response to an experimental manipulation. For this reason, referring to a brain region that displays little or no activation in response to an experimental manipulation as a “dead zone” (e.g., Lamont, 2008) is similarly misleading. Fourth, depending on the neurotransmitters released and the brain areas in which they are released, the regions that are “activated” in a brain scan may actually be being inhibited rather than excited (Satel and Lilienfeld, 2013). Hence, from a functional perspective, these areas may be being “lit down” rather than “lit up.”
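A rough sketch of what those colors actually represent, with synthetic numbers: the “activation” map is a subtraction of one condition’s signal from another, thresholded by the analyst, and only the voxels that survive the cutoff get painted.

```python
# Minimal illustration of an fMRI-style contrast: the colored map is the
# difference between two conditions, thresholded and colored by the researcher,
# not a photograph of neural firing. All values here are synthetic.
import numpy as np

rng = np.random.default_rng(42)

task_condition = rng.normal(loc=100.0, scale=5.0, size=(8, 8))  # BOLD-like signal, task
rest_condition = rng.normal(loc=100.0, scale=5.0, size=(8, 8))  # BOLD-like signal, rest
task_condition[2:4, 2:4] += 12.0  # pretend one small region responds more during the task

contrast = task_condition - rest_condition   # the subtraction step
threshold = 8.0                              # analyst-chosen cutoff
activation_mask = contrast > threshold       # only these voxels get "lit up" colors

print("Voxels passing threshold:", int(activation_mask.sum()))
print("Everything else is drawn in gray, even though those voxels are not 'dead'.")
```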

(7) Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a “chemical imbalance” of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public (France et al., 2007; Deacon and Baird, 2009). This phrase even crops up in some academic sources; for example, one author wrote that one overarching framework for conceptualizing mental illness is a “biophysical model that posits a chemical imbalance” (Wheeler, 2011, p. 151). Nevertheless, the evidence for the chemical imbalance model is at best slim (Lacasse and Leo, 2005; Leo and Lacasse, 2008). One prominent psychiatrist even dubbed it an urban legend (Pies, 2011). There is no known “optimal” level of neurotransmitters in the brain, so it is unclear what would constitute an “imbalance.” Nor is there evidence for an optimal ratio among different neurotransmitter levels. Moreover, although serotonin reuptake inhibitors, such as fluoxetine (Prozac) and sertraline (Zoloft), appear to alleviate the symptoms of severe depression, there is evidence that at least one serotonin reuptake enhancer, namely tianeptine (Stablon), is also efficacious for depression (Akiki, 2014). The fact that two efficacious classes of medications exert opposing effects on serotonin levels raises questions concerning a simplistic chemical imbalance model.

(23) Psychiatric control group. NOT a true control group! This phrase and similar phrases (e.g., “normal control group,” “psychopathological control group”) connote erroneously that (a) groups of ostensibly normal individuals or mixed psychiatric patients who are being compared with (b) groups of individuals with a disorder of interest (e.g., schizophrenia, major depression) are true “control” groups. They are not. They are “comparison groups” and should be referred to accordingly. The phrase “control group” in this context may leave readers with the unwarranted impression that the design of the study is experimental when it is actually quasi-experimental. Just as important, this term may imply that the only difference between the two groups (e.g., a group of patients with anxiety disorder and a group of ostensibly normal individuals) is the presence or absence of the disorder of interest. In fact, these two groups almost surely differ on any number of “nuisance” variables, such as personality traits, co-occurring disorders, and family background, rendering the interpretation of most group differences open to multiple interpretations (Meehl, 1969).

Emergence of “humans” / Berkeley.edu + Comments


Simplified socio-cultural guide to identifying male / female.

 

The evolution of Primates – Gender dimorphism

Top: Orangutan male and female. Middle: Modern social human; all “cases” of allowable bathroom use. Bottom: Idiot’s guide to gender ID; U.S.

 

Low sexual dimorphism in modern social humans? Really? Sexual dimorphism is created culturally in humans, and wow! Gender assignment is all mixed up! In fact, one might observe that body alteration, decoration, behavior and costume are how Homo sapiens compensates for being a strange hairless ape, born without the elaborate fur, plumage, texture, color and behavioral displays of other species. We “copy” other animals and utilize materials in the environment to socially broadcast our sex and gender – from the violent hyper-male to the “big boob” sex object that is the “ideal” American woman. Some cultures disguise or blur a person’s sex / gender. Neoteny promotes childlike appearance in males and females – the current trend is toward androgyny.

Any questions about this guy’s gender? 


Old school “gun”


Below: Modern neotenic “feminized” male – androgyny is the popular goal.


__________________________________________________________________________________________

How bizarre can the “story” of human evolution get?

The following chapter, “The Emergence of Humans,” is from Berkeley.edu, a site about evolution for students. I confess that to my Asperger type of thinking, this review of evolutionary studies is excruciating. One (dumb) point of view is especially mind-boggling: that chimpanzees are a legitimate focus of “study and research” into ancestral humans and modern human behavior, merely because “they are alive” and eligible for torture in labs; they don’t have “souls” or “suffer.” And they appeal to neotenic social humans by scoring high on the “cute” scale.

The apparent inability of researchers to get past this 19th C. world view is stunning; instead of a thorough examination of assumptions across disciplines, we again see “warfare” between disciplines, and the ongoing attempt to assemble a human “dinosaur” from bits and pieces of fossilized thinking. In fact, paleontology has exploded with new ideas since “old” dinosaur reconstructions were discovered to be highly inaccurate. Hint, hint.

FOUND! The last common ancestor of Humans and Chimps.


Berkeley.edu / The emergence of humans

The narratives of human evolution are oft-told and highly contentious. There are major disagreements in the field about whether human evolution is more like a branching tree or a crooked stick, depending partly on how many species one recognizes. Interpretations of almost every new find will be sure to find opposition among other experts. Disputes often center on diet and habitat, and whether a given animal could occasionally walk bipedally or was fully upright. What can we really tell about human evolution from our current understanding of the phylogenetic relations of hominids and the sequence of evolution of their traits?

Hominid evogram

(consistency problem)

To begin with, let’s take a step back. Although the evolution of hominid features is sometimes put in the framework of “apes vs. humans,” the fact is that humans are apes, just as they are primates and mammals. A glance at the evogram shows why. The other apes — chimp, bonobo, gorilla, orangutan, gibbon — would not form a natural, monophyletic group (i.e., a group that includes all the descendants of a common ancestor) if humans were excluded. Humans share many traits with other apes, and those other “apes” (i.e., non-human apes) don’t have unique features that set them apart from humans. Humans have some features that are uniquely our own, but so do gorillas, chimps, and the rest. Hominid evolution should not be read as a march to human-ness (even if it often appears that way from narratives of human evolution). Students should be aware that there is not a dichotomy between humans and apes. Humans are a kind of ape.

Virtually all systematists and taxonomists agree that we should only give names to monophyletic groups. However, this evogram shows that this guideline is not always followed. For an example, consider Australopithecus. On the evogram you can see a series of forms, from just after Ardipithecus to just before Homo in the branching order, that are all called Australopithecus. (Even Paranthropus is often considered an australopithecine.) But as these taxa appear on the evogram, “Australopithecus” is not a natural group, because it is not monophyletic: some forms, such as A. africanus, are found to be closer to humans than A. afarensis and others. Beyond afarensis, for example, all other Australopithecus and Homo share “enlarged cheek teeth and jaws,” because they have a more recent common ancestor. Eventually, several of these forms will have to have new genus names if we want to name only monophyletic groups. Students should avoid thinking of “australopithecines” as a natural group with uniquely evolved traits that link its members together and set it apart from Homo. Instead they should focus on the pattern of shared traits among these species and the Homo clade, recognizing that each species in this lineage gains more and more features that are shared by Homo.
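The monophyly rule is concrete enough to check mechanically. Here is a sketch on a deliberately simplified ape tree (topology and node names chosen for illustration only): a named group is monophyletic only if it matches some ancestor plus all of that ancestor’s descendants, which is exactly the test that “apes minus humans” fails; the same logic applies to the current “Australopithecus” discussed above.

```python
# Toy monophyly check on a simplified ape tree:
# (gibbon, (orangutan, (gorilla, (chimp, human)))). Bonobos omitted for brevity.
TREE = {
    "apes": ["gibbon", "great_apes"],
    "great_apes": ["orangutan", "african_apes"],
    "african_apes": ["gorilla", "chimp_human"],
    "chimp_human": ["chimp", "human"],
}

def leaves(node):
    """All terminal taxa descending from a node."""
    if node not in TREE:
        return {node}
    return set().union(*(leaves(child) for child in TREE[node]))

def is_monophyletic(taxa):
    """True if some node's complete set of descendants equals exactly these taxa."""
    return any(leaves(node) == set(taxa) for node in TREE)

print(is_monophyletic({"chimp", "human"}))                                    # True
print(is_monophyletic({"gibbon", "orangutan", "gorilla", "chimp", "human"}))  # True: all apes
print(is_monophyletic({"gibbon", "orangutan", "gorilla", "chimp"}))           # False: "apes minus humans"
```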

In popular fiction and movies, the concept of the wild “ape-man” is often that of a tree-living, vine-swinging throwback like Tarzan. However, the pantheon of hominids is much richer than this, as the evogram shows, with forms as different as Paranthropus and Ardipithecus. For example, imagine going back in time to the common ancestor of humans and chimps (including bonobos). What did that common ancestor look like? In the Origin of Species Darwin noted that the extinct common ancestor of two living forms should not be expected to look like a perfect intermediate between them. Rather, it could look more like one branch or the other branch, or something else entirely.

Found! The last common ancestor of humans and chimps.

Did the common ancestor of humans and chimps conform to the ape-man myth and live in the trees, swinging from vines? To answer this, we have to focus not only on anatomy but on behavior, and we have to do it in a phylogenetic context. Apes such as the gibbon and orangutan, which are more distantly related to humans, are largely arboreal (i.e., tree-living). The more closely related apes such as the gorilla and chimps are relatively terrestrial, although they can still climb trees. The feet of the first hominids have a considerable opposition of the big toe to the others but relatively flat feet, as arboreal apes generally do. But other features of their skeleton, such as the position of the foramen magnum underneath the skull, the vertically shortened and laterally flaring hips, and the larger head of the femur, suggest that they were not just mainly terrestrial but habitually bipedal, unlike their knuckle-walking relatives. Most evidence suggests that the hominid lineage retained some of the anatomical features related to arboreal life and quadrupedal gait even after it had evolved a more terrestrial lifestyle and a bipedal gait. There is no fossil record of these behaviors, but the balance of the available evidence supports the hypothesis that the hominid ancestor was terrestrial and bipedal.

Much discussion in human paleontology surrounds the evolution of a bipedal, upright stance. When and why did this occur? One thing to keep in mind is that “bipedal” and “upright” are not equivalent terms. An animal can be bipedal without having a vertical backbone (think T. rex). It seems clear from the fossil record of hominids that habitual bipedality preceded the evolution of a recurved spine and upright stance. Other changes in the gait, such as how the relatively “splayed” gait of chimps evolved into the gait of humans, who put one foot directly in front of the other, involve studying the hip joint, the femur, and the foot. The famous Laetoli footprints attributed to Australopithecus afarensis are bipedal, but they are still relatively splayed compared to the tracks of living humans. (WOW! they are doing it again despite their own caution: humans did not evolve from chimpanzees!)

Another extremely interesting feature in hominid evolution is the degree of sexual dimorphism (i.e., physical differences between the sexes) in different species. Sexual dimorphism is linked to features of sociality and mate competition in many sorts of animals. To understand the evolution of this feature in humans, which have relatively low sexual dimorphism, we need to consider the other apes, in which sexual dimorphism tends to be moderate to high (with exceptions). 

(Again, culture is utterly ignored: the fact is, women and men “self-morph” according to socio-cultural “genders” into very dimorphic animals.)

We don’t have sufficient evidence about Sahelanthropus, Orrorin, and Ardipithecus to understand much about sex differences in these species, but we do know that A. afarensis had relatively high sexual dimorphism: the males were considerably larger than the females. The difference seems to have been less in A. africanus, Paranthropus, and most of the Homo lineage. The evolutionary explanation for A. afarensis’ dimorphism is not entirely clear. The larger males may have used their size to attract females and/or repel rivals, which would fit with an explanation based on sexual selection. Or the males and females may have been differently sized because they played different roles in their groups, the males hunting and gathering and the females caring for the young. Darwin thought that this differentiation of the sexes may have played a critical role in human evolution, but we simply do not know much about the role of this feature in A. afarensis. Some, all, or none of these functions may have been in play. (Novel-writing again! If we don’t have facts about a subject, why not say so? Speculation becomes dogma in the “magic word syndrome” social mind, and people argue over imaginary histories and qualities. Also – I suspect that once again the writers have “EuroAmerican” humans in mind regarding sexual dimorphism: why?)

We do know that by the time the animals known as Homo evolved, they could make tools, and their hands were well suited for complex manipulations. These features were eventually accompanied by the reduction of the lower face, particularly the jaws and teeth, the recession of the brow, the enlargement of the brain, the evolution of a more erect posture, and the evolution of a limb more adapted for extended walking and running (along with the loss of arboreally oriented features). The evogram shows the hypothesized order of acquisition of these traits. Yet each of the Homo species was unique in its own way, so human evolution should not be seen as a simple linear progression of improvement toward our own present-day form. (But, we show it that way, anyway!)

More…. Should you need a mind-boggling experience:

https://en.wikibooks.org/wiki/Survey_of_Communication_Study/Chapter_13_-_Gender_Communication

And to clarify all this: 

PTSD in Elephants / PLEASE, PLEASE Listen to every word

From the Kerulos website. See more details in “Elephant Breakdown,” Nature, vol. 433, 24 Feb. 2005.

Trans Species Psychology

In 2005, Kerulos’ Director Gay Bradshaw diagnosed Post-Traumatic Stress Disorder (PTSD) in free-living elephants. This science has catalyzed an entirely new approach to elephant conservation and welfare.

Historically, elephants in India and other parts of Asia roamed across the continent. Today, there is intense conflict between humans and elephants. Elephants in close confinement captivity live in chronic stress, deprivation, and pain even when direct physical punishment is not employed. While culturally engrained images of performing animals and zoo exhibits may evoke nostalgia and fascination for humans, the experience of animals in captivity is far different. The measure of elephant suffering can perhaps be best appreciated when we take into account the radical differences between captivity and the wild habitats to which they are ecologically, psychologically and evolutionarily adapted.

When release from abuse does occur, the road to recovery is not easy. Elephants coming to sanctuary experience tremendous improvements, yet they still carry the scars and burden of their past experience. Similar to human prisoners who survive, elephants from circuses and zoos are diagnosed with Complex PTSD (Post-traumatic Stress Disorder) and other trauma-induced conditions.

Sadly, free-living elephants are no longer immune from the ravages of trauma. Poaching, culls, and the stress of life in shrinking habitat have torn apart elephant society. Orphaned infants suffer physiological and emotional shock when they lose their mothers and families and elephants everywhere are under siege from human pressures. Elephants and their culture are threatened with collapse.

Elephants, Us, and Other Kin. Presented by G.A. Bradshaw at the UCLA Annual Interpersonal Neurobiology Conference 2014. See video below.

 

Modern social humans won’t stop until every last living thing on earth is tortured, made insane, or is driven to extinction.

Dear Asperger; Do you see yourself in this tragedy?