PhD Dissertation / Asperger Syndrome Social Narratives

Dissertation for the degree of Doctor of Philosophy, Bowling Green State University, 2010. Neil Shepard



From Introduction: This dissertation explores representations of Asperger’s syndrome, an autism spectrum disorder. Specifically, it textually analyzes cultural representations with the goal of identifying specific narratives that have become dominant in the public sphere. Taking Steve Silberman’s 2001 Wired magazine article “The Geek Syndrome” as its starting point, this dissertation demonstrates how certain values have been linked to Asperger’s syndrome: namely, the association between this disorder and hyper-intelligent, socially awkward personas.

Narratives about Asperger’s have taken to medicalizing not only genius (as figures such as Newton and Einstein receive speculative posthumous diagnoses) but also a particular brand of new-economy, information-age genius. The types of individuals often suggested as representative Asperger’s subjects can be stereotyped as the casual term “geek syndrome” suggests: technologically savvy, successful “nerds.” On the surface, increased public awareness of Asperger’s syndrome, combined with these representations, has created positive momentum for acceptance of high-functioning autism. In a cultural moment that celebrates “geek chic,” Asperger’s syndrome has undergone a critical shift in value that would have seemed unimaginable even ten years ago.

This shift has worked to undo some of the stigma attached to this specific form of autism. The prototypical Aspergian persona dominant in media representations is often both intelligent and successful. At the same time, these personas are also, far more often than not, masculine, middle- or upper-class, and white. These representations are problematic in the way they uphold traditional normativity in terms of gender, race, and class, as well as in the way they reify stigma toward other points on the autistic spectrum.

Having grown up with a family connection to Asperger’s syndrome, I can say that from my experience the truly challenging difficulties that emerge do so from encounters with the social world. I have never met a person with autism who is, in and of themselves, a “problem.” Problems come in the form of ignorance; the forms of this ignorance range from inadequate educational resources to bullies. The sentiment that the problem is social rather than individual is something I have seen echoed repeatedly throughout my research, whenever I have read of or spoken with people with autism, their parents, guardians, children, siblings and friends. Whatever Asperger’s or autism may be has, in my experience, been less important than the beliefs and practices that comprise it. The work of cultural studies, as I see it, is to interrogate those beliefs and practices.

To talk about a condition such as autism as being socially constructed isn’t to deny the reality of the condition, but rather to call attention to those beliefs and practices that shape the consequences of that reality. Understanding Asperger’s syndrome as a social construction is not to deny the clear realities of a condition that is manifested in the body, but to recognize culture’s accountability for its role in that reality. A social model approach to autism means an acute awareness of those impairments and disabling features that are a result of the surrounding culture.

Citation: Shepard, Neil, “Rewiring Difference and Disability: Narratives of Asperger’s Syndrome in the Twenty-First Century” (2010). American Culture Studies Ph.D. Dissertations. Paper 40.

ASD / AS Intelligence Revisited / Guess what? We’re intelligent. DUH!

PLoS One. 2011; 6(9): e25372.
Published online 2011 Sep 28. doi:  10.1371/journal.pone.0025372
PMID: 21991394

The Level and Nature of Autistic Intelligence II: What about Asperger Syndrome?

Isabelle Soulières, Michelle Dawson, Morton Ann Gernsbacher, and Laurent Mottron / Efthimios M. C. Skoulakis, Editor


Individuals on the autistic spectrum are currently identified according to overt atypicalities in socio-communicative interactions, focused interests and repetitive behaviors [1]. More fundamentally, individuals on the autistic spectrum are characterized by atypical information processing across domains (social, non-social, language) and modalities (auditory, visual), raising the question of how best to assess and understand these individuals’ intellectual abilities. Early descriptions [2], [3] and quantifications (e.g. [4]) of their intelligence emphasized the distinctive unevenness of their abilities. While their unusual profile of performance on popular intelligence test batteries remains a durable empirical finding [5], it is eclipsed by a wide range of speculative deficit-based interpretations. (based on socio-cultural arrogance) Findings of strong performance on specific tests have been regarded as aberrant islets of ability arising from an array of speculated deficits (e.g., “weak central coherence”; [6]) and as incompatible with genuine human intelligence.

For example, Hobson ([7], p. 211) concluded that regardless of strong measured abilities in some areas, autistics lack “both the grounding and the mental flexibility for intelligent thought.”

Thus, there is a long-standing assumption that the vast majority of autistic individuals are intellectually impaired. In recent years, this assumption has been challenged by investigations that exploit two divergent approaches to measuring human intelligence, represented by Wechsler scales of intelligence and Raven’s Progressive Matrices [8]. Wechsler scales estimate IQ through batteries of ten or more different subtests, each of which involves different specific oral instructions and tests different specific skills. The subtests are chosen to produce scores that, for the typical population, are correlated and combine to reflect a general underlying ability. Advantages of this approach include the availability of subtest profiles of specific skill strengths and weaknesses, index scores combining related subtests, and dichotomized Performance versus Verbal IQ scores (PIQ vs. VIQ), as well as a Full-Scale IQ (FSIQ) score. However, the range of specific skills assayed by Wechsler scales is limited (e.g., reading abilities are not included), and atypical individuals who lack specific skills (e.g., typical speech processing or speech production) or experiences (e.g., a typical range of interests) may produce scores that do not reflect those individuals’ general intelligence.

In contrast, Raven’s Progressive Matrices (RPM) is a single self-paced test that minimizes spoken instruction and obviates speech production or typicality of experiences [9]. The format is a matrix of geometric designs in which the final missing piece must be selected from among an array of displayed choices. Sixty items are divided into five sets that increase progressively in difficulty and complexity, from simple figural to complex analytic items. RPM is regarded both as the most complex and general single test of intelligence [10], [11] and as the best marker for fluid intelligence, which in turn encompasses reasoning and novel problem-solving abilities [8], [12]. RPM tests flexible co-ordination of attentional control, working memory, rule inference and integration, high-level abstraction, and goal-hierarchy management [13]. These abilities, as well as fluid intelligence itself, have been proposed as areas of deficit in autistic persons, particularly when demands increase in complexity [16], [17], [18], [19].

Against these assumptions, we reported that autistic children and adults, with Wechsler FSIQ ranging from 40 to 125, score an average 30 percentile points higher on RPM than on Wechsler scales, while typical individuals do not display this discrepancy, as shown in Figure 1 [20]. RPM item difficulty, as reflected in per-item error rate, was highly correlated between the autistic and non-autistic children (r = .96). An RPM advantage for autistic individuals has been reported in diverse samples. Bolte et al. [21] tested autistic, other atypical (non-autism diagnoses), and typical participants who varied widely in their age and the version of Wechsler and RPM they were administered; autistics with Wechsler FSIQ under 85 were unique in having a relative advantage on RPM. Charman et al. [22] reported significantly higher RPM than Wechsler scores (FSIQ and PIQ) for a large population-based sample of school-aged autistic spectrum children. In Morsanyi and Holyoak [23], autistic children, who were matched with non-autistic controls on two Wechsler subtests (Block Design and Vocabulary), displayed a numeric, though not significant, advantage within the first set of Raven’s Advanced Progressive Matrices items.
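The r = .96 figure is a per-item Pearson correlation between the two groups’ error rates on the 60 RPM items. As a rough illustration with simulated values (not the study’s actual error rates, which are in Dawson et al. [20]), the computation looks like this:

```python
import numpy as np

# Simulated per-item error rates (fraction of each group answering
# incorrectly) for the 60 RPM items; items get progressively harder.
rng = np.random.default_rng(0)
base_difficulty = np.linspace(0.05, 0.9, 60)
autistic_err = np.clip(base_difficulty + rng.normal(0, 0.05, 60), 0, 1)
nonautistic_err = np.clip(base_difficulty + rng.normal(0, 0.05, 60), 0, 1)

# Pearson correlation of item difficulty between the two groups: a high r
# means the same items are hard for both groups, i.e. the test behaves
# consistently across groups even when overall scores differ.
r = np.corrcoef(autistic_err, nonautistic_err)[0, 1]
print(round(r, 2))
```

A high correlation of this kind is the basis for the paper’s later argument that RPM measures the same construct in both groups.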

The nature of autistic intelligence was also investigated in an fMRI study [24]. Autistics and non-autistics matched on Wechsler FSIQ were equally accurate in solving the 60 RPM items presented in random order, but autistics performed dramatically faster than their controls. This advantage, which was not found in a simple perceptual control task, ranged from 23% for easier RPM items to 42% for complex analytic RPM items.

Autistics’ RPM task performance was associated with greater recruitment of extrastriate areas and lesser recruitment of lateral prefrontal and medial posterior parietal cortex, illustrating their hallmark enhanced perception [25].

One replicated manifestation of autistics’ enhanced perception is superior performance on the Wechsler Block Design subtest, suggesting a visuospatial peak of ability [26]. Even when autistics’ scores on all other Wechsler subtests fall below their RPM scores, their Block Design and RPM scores lie at an equivalent level [20].

Thus, enhanced occipital activity, superior behavioral performance on RPM, and visuospatial peaks co-occur in individuals whose specific diagnosis is autism, suggesting an increased and more autonomous role of perception in autistic reasoning and intelligence [24].

But what about individuals whose specific diagnosis is Asperger syndrome? In Dawson et al.’s previous investigations of autistics’ RPM performance, Asperger individuals were excluded. Asperger syndrome is a relatively low-prevalence [27] autistic spectrum diagnosis characterized by intelligence scores within the normal range (non-Asperger autistics may have IQs in any range). Two main distinctions between the specific diagnosis of autism and Asperger syndrome are relevant to the question of intelligence in the autistic spectrum. First, while their verbal and nonverbal communication is not necessarily typical across development, Asperger individuals do not, by diagnostic definition, exhibit characteristic autistic delays and anomalies in spoken language. While both autistic and Asperger individuals produce an uneven profile on Wechsler subtests, Asperger individuals’ main strengths, in contrast with those of autistics (see [20]), are usually seen in verbal subtests (count me in)  (as illustrated in Figure 2; see also [28]). Although RPM is often deemed a “nonverbal” test of intelligence, in practice typical individuals often rely on verbal abilities to perform most RPM items. (NOTE: I have commented on this in another post, regarding the pre-test tutoring available to students, during which the “rules of the game” are explained. Is this “cheating” in that “fluid intelligence” and not learned procedures, are supposedly being measured?)  

Second, at a group level, Asperger individuals do not display the autistic visuospatial peak in Wechsler scales; rather, their Block Design subtest performance tends to be unremarkably equivalent to their FSIQ (see Figure 2 and also [32]). The question of whether Asperger individuals display the autistic advantage on RPM over Wechsler is thus accompanied by the possibility that the Asperger subgroup represents an avenue for further investigating the nature of this discrepancy. (I am quite baffled at times by my “native” Asperger experience, which is overwhelmingly visual-sensory, but that verbal language is a “go to tool” for translating that experience into “acceptable” form. Very practical! Why does this “arrangement” seem to occur in Asperger’s?)

Our goal was to investigate whether the autistic advantage on RPM is also characteristic of Asperger syndrome and, further, whether RPM performance reveals a fundamental property of intelligence across the autistic spectrum. If the mechanism underlying autistics’ advantage on RPM is limited to visuospatial peaks or to language difficulties disproportionately hampering Wechsler performance, then the advantage should not be found in Asperger individuals. Indeed, as predicted by Bolte et al. [21], Asperger individuals should perform even better on Wechsler scales than on RPM. If instead the underlying mechanism is more general and versatile, then Asperger individuals should demonstrate at least some advantage on RPM. Preliminary findings have suggested this to be the case. In one recent study, Asperger children (age 6–12) obtained significantly higher raw scores on RPM than did typical children matched on age and Wechsler performance [33].

For all the “poo-bah” and graphs, go to the original paper (and related papers).


Asperger individuals differ from autistics in their early speech development, in having Wechsler scores in the normal range, and in being less likely to be characterized by visuospatial peaks. In this study, Asperger individuals presented with some significant advantages, and no disadvantages, on RPM compared to Wechsler FSIQ, PIQ, and VIQ. Asperger adults demonstrated a significant advantage, relative to their controls, in their RPM scores over their Wechsler FSIQ and PIQ scores, while for Asperger children this advantage was found for their PIQ scores. For both Asperger adults and children, and strikingly similar to autistics in a previous study [20], their best Wechsler performances were similar in level to, and therefore plausibly representative of, their general intelligence as measured by RPM.

We have proposed that autistics’ cognitive processes function in an atypically independent way, leading to “parallel, non-strategic integration of patterns across multiple levels and scales” [36] and to versatility in cognitive processing [26].

Such “independent thinking” suggests ways in which apparently specific or isolated abilities can co-exist with atypical but flexible, creative, and complex achievements. Across a wide range of tasks, including or perhaps especially in complex tasks, autistics do not experience to the same extent the typical loss or distortion of information that characterizes non-autistics’ mandatory hierarchies of processing.

Therefore, autistics can maintain more veridical representations (e.g. representations closer to the actual information present in the environment) when performing high level, complex tasks. The current results suggest that such a mechanism is also present in Asperger syndrome and therefore represents a commonality across the autistic spectrum. Given the opportunity, different subgroups of autistics may advantageously apply more independent thinking to different available aspects of information: verbal information, by persons whose specific diagnosis is Asperger’s, and perceptual information, by persons whose specific diagnosis is autism.

One could alternatively suggest that the construct measured by RPM is relative and thus would reflect processes other than intelligence in autistic spectrum individuals. However, a very high item difficulty correlation is observed between autistic individuals and typical controls, as well as between Asperger individuals and typical controls. As previously noted [20], these high correlations indicate that RPM is measuring the same construct in autistics and non-autistics, a finding now extended to Asperger syndrome.

Therefore, dismissing these RPM findings as not reflecting genuine human intelligence in autistic and Asperger individuals would have the same effect for non-autistic individuals.

The discrepancies revealed here between alternative measures of intelligence in a subgroup of individuals underline the ambiguous, non-monolithic definition of intelligence. Undoubtedly, autistics’ intelligence is atypical and may not be as easily assessed and revealed with standard instruments. But given the essential and unique role that RPM has long held in defining general and fluid intelligence (e.g., [37]), we again suggest that both the level and nature of autistic intelligence have been underestimated.

Thus, while there has been a long tradition of pursuing speculated autistic deficits, it is important to consider the possibility of strength-based mechanisms as underlying autistics’ atypical but genuine intelligence.

Hunter-gatherers have a special way with smells / Study

Max Planck Institute for Psycholinguistics

When it comes to naming colors, most people do so with ease. But, for odors, it’s much harder to find the words. One notable exception to this rule is found among the Jahai people, a group of hunter-gatherers living in the Malay Peninsula who can name odors just as easily as colors. A new study by Asifa Majid (Radboud University and MPI for Psycholinguistics) and Nicole Kruspe (Lund University) suggests that the Jahai’s special way with smell is related to their hunting and gathering lifestyle.

“There has been a long-standing consensus that ‘smell is the mute sense, the one without words,’ and decades of research with English-speaking participants seemed to confirm this,” says Asifa Majid of Radboud University and MPI for Psycholinguistics. “But, the Jahai of the Malay Peninsula are much better at naming odors than their English-speaking peers. This, of course, raises the question of where this difference originates.”

Hunter-Gatherers and horticulturalists

To find out whether it was the Jahai who have an unusually keen ability with odors or whether English speakers are simply lacking, Majid and Nicole Kruspe (Lund University, Sweden) examined two related, but previously unstudied, groups of people in the tropical rainforest of the Malay Peninsula: the hunter-gatherer Semaq Beri and the non-hunter-gatherer Semelai. The Semelai are traditionally farmers, combining shifting rice cultivation with the collection of forest products for trade.

The Semaq Beri and Semelai not only live in a similar environment; they also speak closely related languages. The question was: how easily are they able to name odors? “If ease of olfactory naming is related to cultural practices, then we would expect the Semaq Beri to behave like the Jahai and name odors as easily as they do colors, whereas the Semelai should pattern differently,” the researchers wrote in their recently published study in Current Biology. And, that’s exactly what they found.

Testing color- and odor-abilities

Majid and Kruspe tested the color- and odor-naming abilities of 20 Semaq Beri and 21 Semelai people. Sixteen odors were used: orange, leather, cinnamon, peppermint, banana, lemon, licorice, turpentine, garlic, coffee, apple, clove, pineapple, rose, anise, and fish. For the color task, study participants saw 80 standardised color chips, sampling 20 equally spaced hues at four degrees of brightness. Kruspe tested participants in their native language by simply asking, “What smell is this?” or “What color is this?”

The results were clear. The hunter-gatherer Semaq Beri performed on these tests just like the hunter-gatherer Jahai, naming odors and colors with equal ease. The non-hunter-gatherer Semelai, on the other hand, performed like English speakers: for them, odors were difficult to name. The results suggest that the downgrading in importance of smell relative to other sensory inputs is a recent consequence of cultural adaptation, the researchers say. “Hunter-gatherers’ olfaction is superior, while settled peoples’ olfactory cognition is diminished,” Majid says.

They say the findings challenge the notion that differences in neuroarchitecture alone underlie differences in olfaction, suggesting instead that cultural variation may play a more prominent role. They also raise a number of interesting questions: “Do hunter-gatherers in other parts of the world also show the same boost to olfactory naming?” Majid asks. “Are other aspects of olfactory cognition also superior in hunter-gatherers,” for example, the ability to differentiate one odor from another? “Finally, how do these cultural differences interact with the biological infrastructure for smell?” She says it will be important to learn whether these groups of people show underlying genetic differences related to the sense of smell.

This study was funded by The Netherlands Organisation for Scientific Research as well as the Swedish Foundation.


Majid, A., & Kruspe, N. (2018). Hunter-gatherer olfaction is special. Current Biology. DOI: 10.1016/j.cub.2017.12.014


Self Awareness / OMG What a Hornet’s Nest

What made me awaken this morning with the question of self awareness dancing in my head? It’s both a personal and social question and quest, and so almost impossible to think about objectively. And like so many “word concepts” there is no agreed-upon definition or meaning to actually talk about, unless it’s among religionists of certain beliefs, philosophical schools of knowledge, or neurologists hunched over their arrays of brain tissue, peering like haruspices over a pile of pink meat.

My own prejudices lean toward two basic underpinnings of self-awareness:

1. It is not a “thing” but an experience.

2. Self awareness (beyond Look! It’s me in the mirror…) is learned, earned, created, achieved.

From a previous post –

Co-consciousness; the product of language : “In Western cultures verbal language is inseparable from the process of creating a conscious human being.

A child is told who it is, where it belongs, and how to behave, day in and day out, from birth throughout childhood. In this way culturally-approved patterns of thought and behavior are implanted, organized and strengthened in the child’s brain. 

Social education means setting tasks that require following directions, and asking children to ‘correctly’ answer with words and behavior, to prove that co-consciousness is in place.

This is one of the great challenges of human development, and children who do not ‘pay attention’ to adult demands, however deftly sugar-coated, are rejected as defective, defiant, and diseased.

Punishment for having early self awareness may be physical or emotional brutality or abandonment and exile from the group.”

Who am I? is a question that most children ask sooner or later – prompted obviously by questions from adults (no child is born thinking about this) such as “What do you want to be when you grow up?” (Not: Who are you now?) The socially acceptable menu is small. For boys: “A famous sports star.” For girls? “A wonderful mom and career woman who looks 16 years old, forever.”

How boring and unrealistic. How life and joy killing. Adults mustn’t let children in on the truth, which is even worse. We know at this point that a child can look in a mirror and say, “That’s me! I hate my haircut,” but he or she is entirely unaware that someday firing rockets into mud brick houses, thereby blowing human bodies to smithereens, may be their passion. Or she may be a single mom with three kids, totally unprepared for an adequate job. Or perhaps he or she may end up addicted to pills and rage and stuffing paper bags with French fries eight hours a day.

If a child were to utter these reasonably probabilistic goals, he or she would be labeled as disturbed and possibly dangerous. And yet human children grow up to be less than ideal, and many  dreadful outcomes occur, but these are the result of the individual colliding with societal fantasies and promises that are not likely outcomes at all.

The strangest part of this is that we talk about self awareness as a “thing” tucked into a hidden space, deep within us, but it isn’t. It is a running score on a test that starts the moment we are born: the test questions are life’s demands, both from the environment into which we are born and from the culture of family, school, work and citizenship. The tragedy is that few caregivers bother to find out enough about a child to guide them toward a healthy and happy self-awareness. This requires observing and accepting the child’s native gifts and personality, AND helping them to manage their difficulties. This is not the same as curing them of being different, or inflicting lifelong scars by abandoning them, or training them diligently so that, like parrots, they can mimic conformist behavior and speech.

Self awareness comes as we live our lives: self-esteem is connected to that process, not as a “before” thing but as an “after” thing: a result of meeting life as it really is, not as a social fantasy. Self awareness is built from the talents and strengths that we didn’t know we possessed. It also arises as we see the “world’s” pretensions crumble before us. Being able to see one’s existence cast against the immensity of reality, and yet to feel secure, is the measure of finally giving birth to a “self.”




I’m satisfied that loving the land is my talent and that this is not a small thing, when there are so many human beings who don’t.

Widespread Bias in Large Genetic Studies / Implications for ASD / Asperger’s

Pleiotropy: This certainly has implications for the endlessly repeated assertion that heritable genetic pathologies account for symptoms that include everything from “being antisocial,” to “being interested in subjects that bore neurotypicals,” to female ASDs “preferring to wear clothing with lots of pockets.” It is acknowledged that ASD / Asperger’s individuals are a highly heterogeneous bunch; no two are alike. Claims of “discovery” of scads of “autism-linked genes” are highly suspect to begin with, and now comes this unsurprising report, in which “causal” links are over- and under-estimated, or MISSED COMPLETELY.

Source of Potential Bias Widespread in Large Genetic Studies

A new statistical method finds that many genetic variants used to determine trait-disease relationships may have additional effects that GWAS analyses don’t pick up.

By Diana Kwon | May 15, 2018

Genome-wide association studies, which scan thousands of genetic variants to identify links to a specific trait, have recently provided epidemiologists with a rich source of data. By applying Mendelian randomization, a technique that leverages an individual’s unique genetic variation to recreate randomized experiments, researchers have been able to infer the causal effect of specific risk factors on health outcomes, such as the link between elevated blood pressure and heart disease. (And all those supposed “links” between ASD / Autism “genes” and a bizarre selection / collection of “manifestations” in ASD / Asperger behavior, brain function and even in apparel choices)

The Mendelian randomization technique has long operated on the key assumption that horizontal pleiotropy, a phenomenon in which a single gene contributes to a disease through more than one pathway, is not happening. However, a new study published last month (April 23) in Nature Genetics finds that when it comes to potentially causal trait-disease relationships identified from genome-wide association studies (GWAS), pleiotropy is widespread—and may bias findings.

The “no pleiotropy” assumption was reasonable when scientists were examining only a few genes and much more was known about their specific biological functions, says Jack Bowden, a biostatistician at the University of Bristol’s MRC Integrative Epidemiology Unit in the U.K., who was not involved in the study. Nowadays, GWAS, which include many more genetic variants, are often conducted with little understanding about the precise mechanisms through which each gene could act on physiological traits, he adds.

Although researchers have suspected that pleiotropy exists in a large number of Mendelian randomization studies using GWAS datasets, “no one has actually tested how much of a problem this was,” says study coauthor Ron Do, a geneticist at the Icahn School of Medicine at Mount Sinai.

To address this question, Do and his colleagues developed the so-called MR-PRESSO technique, an algorithm that identifies pleiotropy in Mendelian randomization analyses by searching for outliers in the relationship between the genetic variants’ effects on the trait of interest, say, blood pressure, and the same polymorphisms’ effects on the health outcome, such as heart disease. Outliers suggest that some genetic variants may be acting on the outcome through more than just that particular trait—in other words, that pleiotropy exists.
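In outline (a simplified sketch, not the published MR-PRESSO algorithm; the variable names and simulated effect sizes are invented for illustration), the outlier search amounts to regressing variant-outcome effects on variant-exposure effects and flagging variants whose residuals are extreme:

```python
import numpy as np

# Simulated per-variant effect estimates: 50 variants affect an exposure
# (say, blood pressure); the outcome (say, heart disease risk) follows the
# exposure with a true causal slope of 0.5, except for one planted
# pleiotropic variant that also acts through a second pathway.
rng = np.random.default_rng(1)
n_variants = 50
beta_exposure = rng.normal(0.1, 0.03, n_variants)
beta_outcome = 0.5 * beta_exposure + rng.normal(0, 0.005, n_variants)
beta_outcome[0] += 0.08  # extra, non-exposure-mediated effect

# With equal weights, the causal estimate reduces to a no-intercept
# least-squares slope of outcome effects on exposure effects.
slope = np.sum(beta_exposure * beta_outcome) / np.sum(beta_exposure ** 2)

# Flag variants whose residuals are extreme: their outcome effect is not
# explained by their exposure effect, hinting at horizontal pleiotropy.
residuals = beta_outcome - slope * beta_exposure
z = (residuals - residuals.mean()) / residuals.std()
outliers = np.flatnonzero(np.abs(z) > 3)
print(outliers, round(slope, 3))
```

Removing the flagged variant and re-estimating the slope illustrates the paper’s other point: a pleiotropic variant left in the analysis biases the causal estimate away from its true value.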

The team used this method to test all possible trait-disease combinations generated from 82 publicly available GWAS datasets and found that pleiotropy was present in approximately 48 percent of the 191 statistically significant causal relationships they identified. (Yes, statistics are only as good as the quality of the “thinking” of the people manipulating the process) 

When the researchers compared the Mendelian randomization results before and after correcting for pleiotropy, they discovered that pleiotropy could lead to drastic over- or underestimations of the magnitude of a trait’s influence on a disease. (And ASD / Autism is NOT A DISEASE; it’s a collection of symptoms – which have multiple sources including WESTERN socio-cultural prejudice) Approximately 10 percent of the causal associations they found were significantly distorted, and by as much as 200 percent.

For example, the team identified an outlier variant in one of the significant causal relationships they found using Mendelian randomization—a link between body mass index (BMI) and levels of C-reactive protein, a marker for inflammation and heart disease. Further examination revealed that this variant, found in a gene encoding apolipoprotein E—a protein involved in metabolism—was associated with several traits and diseases, including BMI, C-reactive protein, cholesterol levels, and Alzheimer’s disease. After removing this outlier, the effect of BMI on C-reactive protein dropped by 12 percent, still statistically significant, but obviously to a lesser degree.

“There is growing awareness that there’s widespread pleiotropy in the human genome in general, and I think these findings suggest that there needs to be rigorous analysis and careful interpretation of causal relationships when performing Mendelian randomization,” (One would have thought that this was the conservative baseline in “science-based” research) Do says. “I think what’s going to have the biggest impact is not just saying whether causal relationships exist, but actually showing that the magnitude of the causal relationship can be distorted due to pleiotropy.”

Bowden notes that the presence of pleiotropy does not mean that Mendelian randomization is necessarily a flawed technique. “Many research groups around the world are currently developing novel statistical approaches that can detect and adjust for pleiotropy, enabling you to reliably test whether a [gene] has a causal effect on an outcome,” he tells The Scientist. For example, he and his colleagues at the University of Bristol recently reported another method to identify and correct for pleiotropy in large-scale Mendelian randomization analyses. (Are these “novel statistical approaches” proven to correct a problem that has much to do with the “reductive mindset” of those who place prime value on “any positive results” for their research agenda, above scientific discipline?)

“I hope that this paper will raise people’s attention to the potential problems in the assumptions behind [these studies],” says Wei Pan, a biostatistician at the University of Minnesota who was not involved in this work. “Large genetic datasets give researchers the opportunity to use a method like this to move the field forward, and as long as they use the method carefully, they can reach meaningful conclusions.” (Is this true, or social blah, blah?)

M. Verbanck et al., “Detection of widespread horizontal pleiotropy in causal relationships inferred from Mendelian randomization between complex traits and diseases,” Nature Genet, doi:10.1038/s41588-018-0099-7, 2018.


 A chicken with the frizzle gene
© 2004 Richard Blatchford, Dept. of Animal Science UC Davis. All rights reserved.


The term pleiotropy is derived from the Greek words pleio, which means “many,” and tropic, which means “affecting.” Genes that affect multiple, apparently unrelated, phenotypes are thus called pleiotropic genes. Pleiotropy should not be confused with polygenic traits, in which multiple genes converge to result in a single phenotype.

Examples of Pleiotropy

In some instances of pleiotropy, the influence of the single gene may be direct. For example, if a mouse is born blind due to any number of single-gene traits (Chang et al., 2002), it is not surprising that this mouse would also do poorly in visual learning tasks. In other instances, however, a single gene might be involved in multiple pathways. For instance, consider the amino acid tyrosine. This substance is needed for general protein synthesis, and it is also a precursor for several neurotransmitters (e.g., dopamine, norepinephrine), the hormone thyroxine, and the pigment melanin. Thus, mutations in any one of the genes that affect tyrosine synthesis or metabolism may affect multiple body systems. These and other instances in which a single gene affects multiple systems and therefore has widespread phenotypic effects are referred to as indirect or secondary pleiotropy (Grüneberg, 1938; Hodgkin, 1998).

Other examples of both direct and indirect pleiotropy are described in the sections that follow.
Chickens and the Frizzle Trait

In 1936, researchers Walter Landauer and Elizabeth Upham observed that chickens that expressed the dominant frizzle gene produced feathers that curled outward rather than lying flat against their bodies (Figure 2). However, this was not the only phenotypic effect of this gene — along with producing defective feathers, the frizzle gene caused the fowl to have abnormal body temperatures, higher metabolic and blood flow rates, and greater digestive capacity. Furthermore, chickens who had this allele also laid fewer eggs than their wild-type counterparts, further highlighting the pleiotropic nature of the frizzle gene.

See article for Pigmentation and Deafness in Cats, and Antagonistic Pleiotropy and much much more on genetics….

Human Pleiotropy

As touched upon earlier in this article, there are many examples of pleiotropic genes in humans, some of which are associated with disease. For instance, Marfan syndrome is a disorder in humans in which one gene is responsible for a constellation of symptoms, including thinness, joint hypermobility, limb elongation, lens dislocation, and increased susceptibility to heart disease. Similarly, mutations in the gene that codes for transcription factor TBX5 cause the cardiac and limb defects of Holt-Oram syndrome, while mutation of the gene that codes for DNA damage repair protein NBS1 leads to microcephaly, immunodeficiency, and cancer predisposition in Nijmegen breakage syndrome.

One of the most widely cited examples of pleiotropy in humans is phenylketonuria (PKU). This disorder is caused by a deficiency of the enzyme phenylalanine hydroxylase, which is necessary to convert the essential amino acid phenylalanine to tyrosine. A defect in the single gene that codes for this enzyme therefore results in the multiple phenotypes associated with PKU, including mental retardation, eczema, and pigment defects that make affected individuals lighter skinned (Paul, 2000).

The phenotypic effects that single genes may impose in multiple systems often give us insight into the biological function of specific genes. Pleiotropic genes can also provide us valuable information regarding the evolution of different genes and gene families, as genes are “co-opted” for new purposes beyond what is believed to be their original function (Hodgkin, 1998). Quite simply, pleiotropy reflects the fact that most proteins have multiple roles in distinct cell types; thus, any genetic change that alters gene expression or function can potentially have wide-ranging effects in a variety of tissues.

Somewhat ironic that large genetic studies REMOVE PLEIOTROPY, a “fact” in human genetics that may provide real progress in finding genetic links to physical conditions that are at present lumped together under a phony “autistic pathology” that is based in the “social brain” of neurotypicals – and not in scientific reality.


Autism “Experts” claim that the cause of Autism is a “Lizard Brain”

Mcgill J Med. 2011 Jun; 13(2): 38.

 Think I’m joking? Read this:

Evolutionary approaches to autism- an overview and integration

(This is one of three “main theories” presented in the paper)

Autism as the result of a reptile brain

A different perspective on the evolution of autism is provided by the Polyvagal theory (24). Polyvagal theory postulates that through three stages of phylogeny, mammals, especially primates, including humans, have evolved a functional neural organization that regulates emotions and social behavior. The vagus, i.e., the 10th cranial nerve is a major component of the autonomic nervous system that plays an important role in regulating emotions and social behavior. The three stages of phylogeny reflect the emergence of three distinct parts of the autonomic nervous system, each with a different behavioral function. In the first evolutionary stage, the unmyelinated vagus emerged, which regulates immobilization for death feigning and passive avoidance. (supposedly an autism symptom – you know, when lizards flop over and play dead to fool a predator)

Autistic child playing dead.

These are typical responses to dangerous situations in reptiles, but atypical in mammals, including humans. In the second stage, the sympathetic-adrenal system emerged, which is characterized by mobilization as a response to dangerous situations. In the third stage, the myelinated vagus emerged, which is involved in social communication, self-soothing and calming. It is proposed that people with autism minimize the expression of the mammalian response, i.e., social communication. (Autistics are social failures, therefore not “real” mammals) Rather, they rely on the defensive strategies that include both mobilization and immobilization. (That is, we’re reptiles!)

While normally primates and humans have a well-developed ability to shift adaptively between mobilization and social engagement behaviors, individuals with autism lack this ability. The resulting behavioral features lead to adaptive benefits in focusing on objects, while minimizing the potentially dangerous interactions with people. Without a readily accessible social engagement system, the myelinated vagus is unable to efficiently inhibit an autonomic state and is poised for flight and fight behaviors with the functional outcomes of frequently observed emotional outbursts or tantrums. The combination of a nervous system that favors defensive behaviors, and the inability to use social communication with people, places the autistic individual outside the realm of normal social behavior. Thus, due to the inability to engage the myelinated vagus to calm and dampen the defensive system (through social interactions), the nervous system of the autistic individual is in a constant state of hypervigilance or shutdown. These are generally adaptive responses in reptiles, but are severely maladaptive in mammals.

You Don’t Have a Lizard Brain

by Daniel Toker The Brain Scientist

You Don’t Have a Lizard Brain

Despite our best intentions, scientists sometimes make a very basic mistake: we look for what makes humans unique. Certainly, humans are not just unique, but extraordinary. Nothing else in the known universe has produced art, science, technology, or civilization. But, our history of searching for how, precisely, we came to be exceptional has often led to bad science – and to popular acceptance of bad science. 

There are the familiar old examples, such as the insistence that the earth is at the center of the universe or that humans couldn’t possibly have evolved from other animals. But our search for what makes us special leads to popular acceptance of unfounded theories even today, and even among those who are otherwise extraordinarily well-informed. Nowhere is that clearer than in the hugely popular – and entirely wrong – theory called the Triune Brain Hypothesis. 

You may have heard of it as the proposal that we have “lizard brains.”

The triune brain hypothesis, developed by the neuroscientist Paul MacLean between the 1960s and 1990s and widely popularized by the astronomer Carl Sagan, asserts that we have a “lizard brain” under our “mammal brain,” and that our “mammal brain” is itself under our primate/human brain. Under this hypothesis, brain evolution is an additive process: new layers of brain tissue emerge on top of old layers, leading to a tenuous but effective coexistence between the “old brain” and the “new brain.” 

MacLean proposed his (incorrect) theory after he made some curious observations about the effects of cutting out what he called the “reptilian complex” of a monkey’s brain (so named because he thought it looked similar to the tissue that made up most of a reptile’s brain). When MacLean took out this part of a male monkey’s brain, the monkey stopped aggressively gesturing at its own reflection (which it thought was another male monkey). This behavioral change seemed to fit MacLean’s hunch that he had taken out a “reptile”-like part of the monkey’s brain, since he thought that aggressive gesturing is a typical example of “reptilian behavior.”

It’s unclear why cutting out this part of the monkey’s brain made the monkeys stop showing aggressive displays, but this brain area, more commonly called the globus pallidus, is known to be involved in an enormous variety of processes. Also, to my knowledge, MacLean’s original observations have not been replicated. What’s more, MacLean’s claim about the prominence of the globus pallidus in the reptilian brain is false: it forms just one part of reptiles’ brains, exactly as it does in the monkey brain.

Based on these loose observations, MacLean argued that we might have a “lizard” brain inside of our brain. In other words, he thought that we never got rid of the “reptilian” brain we inherited from our reptile ancestors, but instead evolved new brain structures on top of our old reptile brain.

Based on these shaky foundations, together with other loose observations regarding what he considered to be uniquely mammalian behavior, MacLean went on to develop a full-blown theory of human brain evolution. The theory held that inside our brains there is a primitive reptilian complex, which is surrounded by an “old” mammalian structure called the limbic system, which is itself surrounded by a “new” mammalian structure called the neocortex. The neocortex was, MacLean asserted, the crowning jewel of brain evolution – the structure, in other words, which made humans (and perhaps other intelligent mammals) unique 

Over the last few decades, MacLean’s theory has become part of the cultural zeitgeist. Clickbait articles bashing the “basic ‘lizard brain’ psychology” of an opponent political group appear on mainstream news websites. Articles with headlines like “Your Lizard Brain” and “Don’t Listen to Your Lizard Brain” get featured on Psychology Today, a magazine whose sales have soared to the top 10 in the nation. The triune brain theory has even been featured prominently in a blog article on Scientific American, an award-winning and massively popular science magazine. Except perhaps for the political clickbait, these are all publications that make an honest and serious attempt to get the scientific facts right. And this popularity can’t just be pinned on major media: I’ve seen the triune brain theory pop up in college psychology textbooks, and a search for #triunebrain on Twitter yields a litany of casual references to the idea that we have a lizard brain.

But MacLean’s triune brain theory is completely wrong – and neuroscientists have known it’s wrong for decades.

The theory is wrong for a simple reason: our brains aren’t fundamentally different from those of reptiles, or even from those of fish. Every mammal has a neocortex (not just the really intelligent ones), and all vertebrates, including reptiles, birds, amphibians, and fish, have analogues of a cortex.

In fact, the very idea that new brain structures emerge on top of old ones is fundamentally at odds with how evolution usually works: biological structures are typically just modified versions of older structures. For example, the mammalian neocortex isn’t a completely new structure like MacLean thought it was, but instead is a modification of the reptilian cortex. As the evolutionary neuroscientist Terrence Deacon explains: “Adding on is almost certainly not the way the brain has evolved. Instead, the same structures have become modified in different ways in different lineages.” This fact is illustrated quite nicely in this figure:

How brain evolution actually works. New brain areas don’t usually get added on top of old ones, but instead are typically just modified versions of old structures. All vertebrates, from fish to humans, have the same general brain layout. (Image via Northcutt, R.G. (2002), color coding by Arseny Khakhalin).

Notice that the cortex and its analogues (colored here in blue) are found in all vertebrates, and aren’t unique to mammals. What’s more, all the major structures of the mammal brain can also be found in the reptile brain, and even in the fish brain.

So what’s gone wrong here? Why is the triune brain theory widely believed, even among psychologists, while evolutionary neuroscience abandoned the theory decades ago (and never took it very seriously in the first place)?

The problem starts, of course, with MacLean. I think it’s fairly clear that MacLean wanted to find what makes humans (and mammals more broadly) unique. And that desire to identify our uniqueness led him to judge his available evidence poorly. MacLean should have considered alternative hypotheses, such as the possibility that differences between our brains and those of other vertebrates are a matter of degree, rather than kind. And he should have asked whether those alternative hypotheses could explain his evidence as well as his own theory could. This sort of self-questioning is key to doing good science: we need to work especially hard to try to prove ourselves wrong. Fortunately, science is structured such that if we can’t (or won’t) prove ourselves wrong, our colleagues most certainly will. And other scientists did prove MacLean wrong, as detailed thoroughly in Terrence Deacon’s paper on what’s known about mammalian brain evolution.

But the evidence that MacLean’s theory was wrong never seemed to make it out of the small world of evolutionary neuroscience.

And for that, I think that some of the blame lies with one of my heroes, Carl Sagan.

The triune brain theory played a starring role in Carl Sagan’s bestseller and Pulitzer Prize winner, The Dragons of Eden. In The Dragons of Eden, Sagan drew on MacLean’s theory to account for how humans evolved to produce science, art, math, and technology – the features of our mind, in other words, which make us unique. Underneath our thinking neocortex, Sagan wrote, is a sea of primitive mammal emotions and even more primitive reptilian proclivities toward hierarchy and aggression. But, he argued, humans are special because our neocortex is particularly well-developed, and so, unlike other animals, we can reason our way out of our primitive instincts. 

To be fair, Sagan was honest and careful in his writing about the triune brain theory, and peppered his explanations with qualifying and cautious language (e.g. “if this theory is correct…”). He also stressed that the model is “an oversimplification” and that it may be nothing more than “a metaphor of great utility and depth.” But Sagan’s enthusiasm for the theory was clear in both his writing and television programs, which were, as always, beautiful and captivating – and had a huge audience. It should therefore come as no surprise that, partly by way of Sagan’s eloquence and popularity, MacLean’s faulty ideas made their way into the cultural mainstream.

It’s unclear how to undo the damage done, except through honest communication of what’s known. Evolutionary neuroscientists guessed from the start that the triune brain theory probably wasn’t right, and now they know it’s not right. But the word hasn’t gotten around. And that’s where you and I come in.

For my part as a neuroscientist, all I can do is point out what we do have good evidence for: that new brain structures are typically just modified versions of old brain structures, and that we don’t have a lizard brain inside our mammal brain.

But you have a part to play in this too, since now you also know that our brain is simply a vertebrate brain, just like that of every fish, amphibian, reptile, bird, and mammal. Help make that astounding and beautiful fact part of our cultural zeitgeist.


How to build a robot that feels / Interesting presentation.

Metropolis Robot – The robot featured in Metropolis is stunning, and one of the most memorable aspects of the whole movie. Its precise origins are unclear. We know that the sculptor was Walter Schulze-Mittendorff, and that Fritz Lang and his design team had input into its creation, yet the beauty of the robot is unlike any other produced up until that point in time. It was thought that robots, being of mechanical construction and design, would therefore appear mechanical and rigid. We can see this in the robot which featured in Karel Čapek’s R.U.R. play of 1923, wherein the term ‘robot’ was first used. The origins of the sensual design of the Metropolis robot may lie in the work of Novembergruppe member Rudolf Belling (1886-1972), the leading Expressionist sculptor between 1918-22. (Compare to today’s idiotic “cute” robots)

How to build a robot that feels

J.Kevin O’Regan

Talk given at CogSys 2010 at ETH Zurich on 27/1/2010

Overview. Consciousness is often considered to have a “hard” part and a not-so-hard part. With the help of work in artificial intelligence and more recently in embodied robotics, there is hope that we shall be able to solve the not-so-hard part and make artificial agents that understand their environment, communicate with their friends, and most importantly, have a notion of “self” and “others”. But will such agents feel anything? Building the feel into the agent will be the “hard” part. “Feel” apparently = “experience”

I shall explain how action provides a solution. Taking the stance that feel is a way of acting in the world provides a way of accounting for what has been considered the mystery of “qualia”, namely why they are very difficult to describe to others and even to oneself, why they can nevertheless be compared and contrasted, and, most important, why there is something it’s like to experience them: that is, why they have phenomenal “presence”. 

As an application of this approach to the phenomenal aspect of consciousness, I shall show how it explains why colors are the way they are, that is, why they are experienced as colors rather than say sounds or smells, and why for example the color red looks red to us, rather than looking green, say, or feeling like the sound of a bell.

When Arnold Schwarzenegger, playing the role of a very advanced robot in the film Terminator, ends up being consumed in a bath of burning oil and fire, he goes on steadfastly till the last, fighting to protect his human friends. As a very intelligent robot, able to communicate and reason, he knows that what’s happening to him is a BAD THING, but he doesn’t FEEL THE PAIN. (I have discussed elsewhere that “social emotion words” are merely words assigned to pain (with pleasure being the absence or release of pain using positive emotion words). Parsing pain into myriad categories by which reward and punishment can be applied, is necessary to social control.)

This is the classic view of robots today: people believe that robots could be very sophisticated, able to speak, understand, and even have the notion of “self” and use the word “I” appropriately. But as humans we have difficulty accepting the idea that robots should ever be able to FEEL anything. After all, they are mere MACHINES!

Philosophers also have difficulty with the problem of FEEL, which they often refer to as the problem of QUALIA, that is, the perceived quality of sensory experience, the basic “what it’s like” of say, red, or the touch of a feather, or the prick of a pin. Understanding qualia or feel is what the philosophers David Chalmers and Daniel Dennett call the “hard problem” of consciousness.

Let’s try and look at what is so difficult about understanding feel. The first step would have to be to try to define what we really mean when we talk about feel. 

Suppose I look at a red patch of color: I see red. What exactly is this feel of red? What do I experience when I feel the feel of red? 

I would say that first of all there are cognitive states: mental associations like roses, ketchup, blood, red traffic lights, and knowledge about how red is related to other colors: for example that it’s similar to pink and quite different from green. Other cognitive states are the thoughts and linguistic utterances that seeing redness provokes, as well as the plans, decisions, opinions or desires that seeing redness may give rise to.

Now surely all this can be built into a robot. Perhaps today, not yet in such a sophisticated fashion as with humans. But in the future, since symbolic and cognitive processing are the very subject matter of Artificial Intelligence, having cognitive states like this is within the realm of robotics. So presumably here there is no logical problem.

Behavioral reactions are a second aspect of what it is like to have a feel. There are automatic reactions, like a good driver pressing on the brake at a red traffic light. There may be physiological tendencies involved in seeing red: perhaps redness changes my general physical state, making me more excited as compared to what happens when I gaze at a cool blue sky.

Both automatic bodily reactions and physiological tendencies, to the extent that the robot has a body and can be wired up so as to react appropriately, should not be too difficult to build into the robot. So here too there is no logical problem, although it may take us a few more decades to get that far.

But the trouble is all these components of feel seem to most people not to constitute the RAW feel of red itself. Most people will say that they are CAUSED by the fact that we feel red. They are extra components, products, or add-ons of something that most people think exists, namely the primal, raw feel of red itself, which is at the core of what happens when I look at a red patch of colour.

I must admit that this notion of “raw feel – experience” is a surprise to me. I can’t say that I’ve ever conceived of “red” as other than a visual perception of light that is reflected from whatever material has been “made red”.

Does raw feel really exist? Certainly many people would say that we have the impression it exists since otherwise there would be “nothing it’s like” to have sensations. We would be mere machines or “zombies”, empty vessels making movements and reacting with our outside environments, but there would be no inside “feel” to anything.

Other people, most notably the philosopher Daniel Dennett, consider that raw feel does not exist, and that it is somehow just a confusion in our way of thinking.

But the important point is that EVEN IF DENNETT IS RIGHT, and that raw feel does not really exist, something still needs to be explained. Even Dennett must agree that there is something special about sensory experience that at least MAKES IT SEEM TO MANY PEOPLE like raw feel exists!

Let me look at what this something special is.

Ineffability is really what springs to mind first as being peculiar about feels: namely the fact that, because they are essentially first-person, it is ultimately impossible to communicate to someone else what feels are like.

Is ineffability an idea that disturbs social humans? That is, all “experience” must be identical and sharable, and even if this “communal perception” is impossible, it MUST BE AGREED TO ANYWAY?  

I remember as a child asking my mother what a headache was like and never getting a satisfactory answer until the day I actually got one. Many people have asked themselves whether their spouse or companion see colors the same way as they do!

This ineffability has led many people to conclude that an EXTRA THEORETICAL APPARATUS will be needed to solve the problem of the “what-it-is-like” of experience. 

Here I will interject that this is the function of verbal language: to at least provide the illusion that “experience” is socially shared: see Theory of mind = fictional mind-reading – a “belief” that is propped up by socially enforced language conventions.

This ineffability of raw feels is a first critical aspect of raw feels that we need to explain.

But even if we can’t describe feels, there is one thing we know, namely that they are all different from one another.

Sometimes this difference allows for no comparison. Vision and hearing are different. How are they different? Difficult to say… For example there seems to be no basis for comparing the red of red with the sound of a bell. Or the smell of an onion with the touch of a feather. When in this way sensations can’t be compared, we say that they belong to different modalities: vision, hearing, touch, smell, taste…

But within sensory modalities, experiences can be compared, or at least structured. Austen Clark in his brilliant book Sensory Qualities looks at this in detail. For example, we can make comparisons on comparisons, and observe that for example red is more different from green than it is from pink.

By compiling such comparisons we can structure sensory qualities and notice that sometimes they can be conveniently organised into dimensions. Dimensions can sometimes be linear going from nothing to infinity as when we go from no brightness to very very bright, or from complete silence to very very loud. Sometimes they go from minus infinity to plus infinity, as from very very cold to very very hot. Sometimes sensations need circular dimensions to describe them adequately, as when we go from red to orange to yellow to green to blue to violet and back to red.
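The difference between a linear quality dimension and a circular one can be made concrete with a small sketch. This is purely illustrative (the hue values in degrees are invented inputs, not data from the talk): on a circular dimension like hue, the distance between two qualities is the shortest way around the circle, so red (near 350°) and orange-red (near 10°) come out close together even though their raw values are far apart.

```python
def hue_distance(h1, h2):
    """Shortest angular distance (in degrees) around the hue circle."""
    d = abs(h1 - h2) % 360
    return min(d, 360 - d)

def brightness_distance(b1, b2):
    """On a linear dimension, distance is just the absolute difference."""
    return abs(b1 - b2)

print(hue_distance(350, 10))       # → 20: close, wrapping around the circle
print(brightness_distance(350, 10))  # → 340: far apart on a linear scale
```

The same pair of numbers yields opposite similarity judgments depending on whether the dimension is circular or linear, which is why the geometry of a quality space matters when structuring sensations.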

Sometimes, as in the case of smell, as many as 30 separate dimensions seem to be necessary to describe the quality of sensations.

Can such facts be accounted for in terms of neurophysiological mechanisms in the brain?

The simplest example is something like sound intensity. I ask you to reflect on this carefully. Suppose we found that perceived sound intensity correlates perfectly with neural activation in a particular brain region, with very strong perceived intensity corresponding to very high neural activation. At first this seems like a satisfactory state of affairs, approaching what a neurophysiologist would conceive of as getting very close to an explanation of perceived sound intensity. BUT IT SIMPLY IS NOT!!

For WHY should highly active neurons give you the sensation of a loud sound, whereas little activation corresponds to soft sound?? Neural activation is simply a CODE. Just pointing out that there is something in common between the code and the perceived intensity is not an explanation at all. The code could be exactly the opposite and be a perfectly good code.

Let’s take the example of color.

Color scientists since Ewald Hering at the end of the 19th Century have known that an important aspect of the perceptual structure of colors is the fact that their hues can be arranged along two dimensions: a red-green axis and a blue-yellow axis. Neurophysiologists have indeed localized neural pathways that seem to correspond to these perceptual axes. The trouble is: what is it about the neurons in the red-green channel that give that red or green feeling, whereas neurons in the blue-yellow channel provide that yellow or blue feeling?

Another issue concerns the perceived proximity of colors and the similarity of corresponding brainstates.

Suppose it turned out that some brainstate produces the raw feel of red and furthermore, that brain states near that state produce feels that are very near to red. This could happen in a lawful way that actually corresponds to people’s judgments about the proximity of red to other colors.

The trouble is, what do you mean by saying a brainstate is near another brainstate? Brain states are activities of millions of neurons, and there is no single way of saying this brain state is “more in the direction” of another brain state. Furthermore, even if we do find some way of ordering the brain states so that their similarities correspond to perceptual judgments about similarities between colors, then we can always ask: why is it this way of ordering the brain states, rather than that, which predicts sensory judgments?

So in summary concerning the structure of feels: this is a second critical aspect of feels we need to explain.

And now for what philosophers consider to be perhaps the most mysterious thing about feels. They “feel like something”, rather than feeling like nothing. We all believe that there is “nothing it’s like” to a mere machine to capture a video image of red, whereas we really have the impression of seeing redness. Still, though it does somehow ring true to say there is this kind of “presence” to sensory stimulation, the notion is elusive. It would be nice to have an operational definition. Perhaps a way to proceed is by contradiction.

Consider the fact that your brain is continually monitoring the level of oxygen and carbon dioxide in your blood. It is keeping your heartbeat steady and controlling other bodily functions like your liver and kidneys. All these activities involve biological sensors that register the levels of various chemicals in your body. These sensors signal their measurements via neural circuits and are processed by the brain. And yet this neural processing has a very different status than the pain of the needle prick or the redness of the light: Essentially whereas you feel the pain and the redness, you do not feel any of the goings-on that determine internal functions like the oxygen level in your blood. (Or do we?) The needle prick and the redness of the light are perceptually present to you, whereas states measured by other sensors in your body also cause brain activity but generate no such sensory presence.

You may not have previously reflected on this difference, but once you do, you realize that there is a profound mystery here. Why should brain processes involved in processing input from certain sensors (namely the eyes, the ears, etc.), give rise to a felt sensation, whereas other brain processes, deriving from other senses (namely those measuring blood oxygen levels etc.) do not give rise to a felt sensation?

But what about thinking or imagining?

Clearly, like the situation for sensory inputs, you are aware of your thoughts, in the sense that you know what you are thinking about or imagining, and you can, to a large degree, control your thoughts. But being aware of something in this sense of “aware” does not imply that that thing has a feel. Indeed I suggest that as concerns what they feel like, thoughts are more like blood oxygen levels than like sensory inputs: thoughts are not associated with any kind of sensory presence. Your thoughts do not present themselves to you as having a particular sensory quality. A thought is a thought, and does not come in different sensory shades in the way that a color does (e.g. red and pink and blue), nor does it come in different intensities like a light or a smell or a touch or a sound might. (I find this assertion to be odd.)

To conclude: perhaps a first step toward finding an operational definition of what we mean by “raw feels feel like something” is to note that this statement is being made in contrast with brain processes that govern bodily functions, and in contrast with thoughts or imaginings: neither of these impose themselves on us like sensory feels, which are “real” or “present”.

This is mystery number three concerning raw feel.

To summarize then: we have three characteristics of raw feel which are mysterious and seem not to be able to be explained from physico-chemical mechanisms.

This is the hard problem of consciousness.

Under the view that there is something in the brain which generates “feel”, we are always led to an infinite regress of questions, and ultimately we are left to invoking some kind of dualistic magic in order to account for the what it’s like of feel. 

But there is a different view of what feel is which eliminates the infinite regress. This “sensorimotor” view takes the stance that it is an error to think of feels as being the kind of thing that is generated by some physical mechanism, and a fortiori then, it is an error to look in the brain for something that might be generating feel.

Instead the sensorimotor view suggests that we should think of feel in a new way, namely as a way of interacting with the world. (That assertion agrees with my working definitions of MIND and CULTURE: MIND is the sum of an organism’s REACTIONS to the environment. CULTURE is the sum of an organism’s (or group’s) INTERACTIONS with the environment.)

This may not make very much sense at first, so let’s take a concrete example, namely the example of softness.

Where is the softness of a sponge generated? If you think about it, you realize that this question is ill posed. The softness of the sponge is surely not the kind of thing that is generated anywhere! Rather, the softness of the sponge is a quality of the way we interact with sponges. When you press on the sponge, it cedes under your pressure. What we mean by softness is that fact.

Note that the quality of softness is not about what we are doing right now with the sponge. It’s a fact about the potentialities that our interaction with the sponge presents to us. It’s something about the various things we could do if we wanted to.
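This talk of a “law” of interaction can be made concrete in code. Here is a minimal caricature, entirely my own and not anything from the talk: the quality of softness lives in the input/output relation between pressing force and yielding, not in any state stored inside the object.

```python
# A caricature (my own illustration) of "softness as a sensorimotor law":
# the quality is the input/output relation, displacement per pressing force,
# rather than something generated inside the object or the presser.

def press(force, stiffness):
    """How far the surface yields under a given pressing force."""
    return force / stiffness

sponge_law = [press(f, stiffness=2.0) for f in (1, 2, 3)]   # yields a lot
stone_law = [press(f, stiffness=500.0) for f in (1, 2, 3)]  # barely yields

# "Soft" names the law itself: for every force, the sponge cedes more.
print(all(s > t for s, t in zip(sponge_law, stone_law)))
```

The stiffness values are of course arbitrary; the point is only that “soft” compares whole force-to-yield laws, not momentary states.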

So summarizing about the quality of feels: the sensorimotor view takes the stance that the quality of a feel is constituted by the law of sensorimotor interaction that is being obeyed as we interact with the environment.

But note that something more is needed. It is not sufficient to just be engaged in a sensorimotor interaction with the world for one to be experiencing a feel. We need additionally to be attending, cognitively accessing the fact that we are engaged in this way. I’ll be coming back to what this involves at the end of the talk. For the moment I want to concentrate on the quality of the feel, and leave to the side the question of what makes the feel “experienced” by the person.

Let’s look at how taking the sensorimotor view explains the three mysteries of feel that I defined earlier: the ineffability, the structure, and the presence.

Ineffability: Obviously when you squish a sponge there are all sorts of muscles you use and all sorts of things that happen as the sponge squishes under your pressure. It is inconceivable for you to have cognitive access to all these details. It’s also a bit like when you execute a practised skiing manoeuvre, or when you whistle: you don’t really know what you do with your various muscles, you just do the right thing.

The precise laws of the interaction are thus ineffable: they are not available to you, nor can you describe them to other people.

Applied to feels in general, we can understand that the ineffability of feels is therefore a natural consequence of thinking about feels in terms of ways of interacting with the environment. Feels are qualities of actually occurring sensorimotor interactions which we are currently engaged in. We do not have cognitive access to each and every aspect of these interactions.

Qualities have structure. Now let’s see how the sponge analogy deals with the second mystery of feel, namely the fact that feels are sometimes comparable and sometimes not, and that when they are comparable, they can sometimes be compared along different kinds of dimensions.

Let’s take sponge squishing and whistling as examples.

The first thing to notice is that there is little objectively in common between the modes of interaction constituted by sponge squishing and by whistling.

On the other hand there is clear structure WITHIN the gamut of variations of sponge-squishing: some things are easy to squish, and other things are hard to squish. There is a continuous dimension of softness. (resistance)

Furthermore, what we mean by softness is the opposite of what we mean by hardness. So one can establish a continuous linear dimension going from very soft to very hard.

So here we have examples that are very reminiscent of what we noticed about raw feels: sometimes comparisons are nonsensical, as between sponge squishing and whistling, and sometimes they are possible, with dimensions along which feels can be compared and contrasted.

Whereas we could not explain these differences through physiology, if we reason in terms of sensorimotor laws, these properties of feel fall out naturally.

Let’s look at some applications of these ideas to real raw feels.

If I’m right about the qualities of feels, then we can explain why they are the way they are, not in terms of different brain mechanisms that are excited, but in terms of the different laws that govern our interaction with the environment when we have the different feels.

So for example: where lies the difference between hearing and seeing? It does not lie in the fact that vision excites the visual cortex and hearing the auditory cortex.

It lies in the fact that when you see and you blink, there is a big change in sensory input, whereas nothing happens when you are hearing and you blink.

It lies in the fact that when you see and you move forward, there is an expanding flow field on your retinas, whereas the change obeys quite different laws in the auditory nerve.

Now if this is really the explanation for differences in the feel associated with different sensory modalities, then it makes a prediction: it predicts that you should be able to see, for example, through the auditory or through the tactile modality, provided things are arranged such that the appropriate sensorimotor dependencies are created.

This is the idea of Sensory Substitution. Paul Bach y Rita in the 1970s had already hooked up a video camera, worn by a blind person on their spectacles, through some electronics to a 20 by 20 array of vibrators that the blind person wore on their stomach or back. He had found that immediately on using the device, observers were able to navigate around the room, and had the impression of an outside world rather than feelings of vibration on the skin. With a bit more practice they were able to identify simple objects in the room. There are reports of blind people referring to the experience as “seeing”.
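As a sketch of the signal path in such a device (my own reconstruction, not Bach y Rita’s actual design; the camera resolution and threshold are illustrative assumptions), a camera frame can be reduced to a 20 by 20 grid of on/off vibrator commands by average-pooling and thresholding:

```python
import numpy as np

# Hypothetical sketch of a Bach-y-Rita-style pipeline: one grayscale camera
# frame (values in [0, 1]) becomes a 20 x 20 grid of vibrator commands.

def frame_to_tactors(frame, grid=20, threshold=0.5):
    """Average-pool a frame down to grid x grid cells, then threshold
    each cell to decide whether its vibrator switches on."""
    h, w = frame.shape
    h2, w2 = h - h % grid, w - w % grid          # crop to a multiple of the grid
    f = frame[:h2, :w2]
    pooled = f.reshape(grid, h2 // grid, grid, w2 // grid).mean(axis=(1, 3))
    return pooled > threshold                    # boolean vibrator commands

frame = np.zeros((240, 320))
frame[60:180, 100:220] = 1.0                     # a bright square in the scene
tactors = frame_to_tactors(frame)
print(tactors.shape, int(tactors.sum()))         # a block of active vibrators
```

The sensorimotor point is that what matters is not this mapping itself but the fact that, once the camera moves with the wearer, the pattern on the skin obeys vision-like laws.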

With modern electronics, sensory substitution is becoming easier to arrange and a variety of devices are being experimented with.

Bach y Rita and his collaborators have developed a tongue stimulation device which, though it has low resolution, has proven very useful in substituting for vestibular information.

There is work being done on Visual to Auditory substitution, where information from a webcam is translated into a kind of “soundscape” that can be used to navigate and identify objects. A link to a movie showing how a subject learns to use such a device is:–

There is even an application written in collaboration with Peter Meijer who invented this particular vision-to-sound system that works on some Nokia phones.–

Peter König and his group at Osnabrück have been experimenting with a belt that provides tactile vibrations corresponding to the direction of north. The device, when worn for several weeks, is, he says, made use of unconsciously in people’s navigation behavior, and becomes a kind of sixth sense!

In conclusion to this section concerning the structure of the qualities of feel, we see that the idea that qualities are constituted by the laws of sensorimotor dependency characterising the associated interactions with the world makes interesting predictions, predictions that have been verified in the case of sensory substitution.

I now come to another application of the idea, namely to the question of color.

Taking a break… more later. 

Back again!

Color is the philosopher’s prototype of a sensory quality. In order to test whether the sensorimotor approach has merit, the best way to proceed seemed to be to see whether we could apply it to color.

At first it seems counterintuitive to imagine that color sensation has something to do with sensorimotor dependencies: after all, the redness of red is apparent even when one stares at a red surface without moving at all. But given the benefit, as regards bridging the explanatory gap, of applying the theory to color, I tried to find a way of conceiving of color that was “sensorimotor”.

With my doctoral student David Philipona we realized that this could be done by considering not colored lights, but colored surfaces. Color scientists know that when you take a red surface, say, and you move it around under different lights, the light coming into your eyes can change dramatically. For example in an environment composed mainly of blue light, the reflected light coming off a red surface can only be blue. There is no red light coming off the surface, and yet you see it as red.

The explanation for this surprising fact is well known to color scientists, but not so well known to lay people, who often incorrectly believe that color has mainly to do with the wavelength of the light coming into the eyes. (Me too!) In fact what determines whether a surface appears red is the fact that it absorbs a lot of short-wavelength light and reflects a lot of long-wavelength light. But the actual amounts of short- and long-wavelength light coming into the eye at any moment are determined mainly by how much of each there is in the incoming illumination. (??)

Thus what really determines the perceived color of a surface is the law that links incoming light to outgoing light. Seeing color, then, involves the brain figuring out what that law is. The obvious way to do this would be by sampling the actual illumination, sampling the light coming into the eye, and then, based on a comparison of the two, deducing what the law linking them is. (I’m a bit lost!)

This is illustrated in this figure. The incoming light is sampled by the three types of photoreceptors in the eye, the L, M and S cones. Their response can be represented as a vector in a three dimensional space. When the incoming light bounces off the surface, the surface absorbs part of it, and reflects the rest. This rest can then be sampled again by the eye’s three photoreceptor cone types, giving rise to another three-vector.

It turns out that the transformation of the incoming three vector to the outgoing three vector can be very accurately described by a 3 x 3 matrix. This matrix is a property of the surface, and is the same for all light sources. It constitutes the law that we are looking for, namely the law that describes how incoming light is transformed by this surface.

It is very easy to calculate what the 3 x 3 matrices are for different surfaces. My mathematician student David Philipona did this simply by going onto the web, finding databases of measurements of surface reflectivity and databases of light spectra (like sunlight, lamp light, neon light, etc.), and computing what the matrices were.
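For readers who want to see the shape of this computation, here is a toy reconstruction (my own sketch with synthetic spectra, not Philipona’s code or the real databases): we build hypothetical cone sensitivities, a surface reflectance, and a low-dimensional family of illuminants, then fit the 3 x 3 matrix by least squares.

```python
import numpy as np

# Toy sketch of the surface-matrix idea. All spectra below are invented
# Gaussian "bumps", chosen only so the computation can run end to end.

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 60)                    # wavelengths (nm)

def bump(center, width):                          # smooth spectral bump
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical L, M, S cone sensitivity curves (3 x 60)
S = np.stack([bump(570, 50), bump(545, 45), bump(445, 35)])

# A "reddish" surface: reflects long wavelengths, absorbs short ones
R = 0.05 + 0.9 / (1 + np.exp(-(wl - 600) / 20))

# Illuminants drawn from a low-dimensional family of smooth spectra
B = np.stack([bump(450, 80), bump(550, 80), bump(650, 80)], axis=1)  # 60 x 3
E = B @ rng.uniform(0.2, 1.0, size=(3, 20))       # 20 illuminant spectra

U = S @ E                                         # cone responses to incoming light
V = S @ (R[:, None] * E)                          # responses to reflected light

# Least-squares fit of the 3 x 3 matrix M with V ≈ M U
M, *_ = np.linalg.lstsq(U.T, V.T, rcond=None)
M = M.T

print(np.max(np.abs(M @ U - V)))                  # tiny residual: the law is linear
```

The residual is near machine precision here because the toy illuminants span a three-dimensional family; with real measured spectra the fit is reported to be accurate rather than exact.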

Of course human observers, when they judge that a surface is red don’t do things this way. One way they could do it is to experiment around a little bit, moving the surface around under different lights, and ascertaining what the law is by comparing inputs to outputs. So in that respect the law can be seen as being a sensorimotor law. In many cases however humans don’t need to move the surface around to establish the law: this is probably because they know more or less already what the incoming illumination is. But in case of doubt, like when you’re in a shop under peculiar lighting, it’s sometimes necessary to go out of the shop with a clothing article to really know what color it is.

Here are a set of colored chips called Munsell chips which are often used in color experiments. Their reflectance spectra are available for download off the web, and we applied David Philipona’s method to calculate the 3 x 3 matrices for all these chips. What did this give? Lots of numbers, obviously.

But when we looked more closely at the matrices, we discovered something very interesting. Some of the matrices had a special property: they were what is called singular. What this means is that instead of taking input three-vectors and transforming them into output vectors spread all over the three-dimensional space of possibilities, these matrices transform the input three-vectors into a two-dimensional or a one-dimensional subspace of the possible three-dimensional output space. In other words these matrices represent input-output laws that are in some sense simpler than run-of-the-mill matrices.
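One simple way to quantify this (my guess at the spirit of the analysis, not necessarily the authors’ exact measure) is the ratio of a matrix’s smallest to largest singular value: near zero means the matrix squashes the three-dimensional space of lights into nearly a plane or a line.

```python
import numpy as np

# A possible "singularity index" for a surface's 3 x 3 matrix: the ratio of
# its smallest to largest singular value. ~0 means (near-)singular.

def singularity_index(M):
    s = np.linalg.svd(M, compute_uv=False)   # singular values, descending
    return s[-1] / s[0]

generic = np.array([[0.9, 0.2, 0.1],         # a run-of-the-mill matrix
                    [0.3, 0.8, 0.2],
                    [0.1, 0.2, 0.7]])
squashing = np.array([[1.0, 0.0, 0.0],       # nearly projects onto a 2-D subspace
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.001]])

print(singularity_index(generic))            # well away from zero
print(singularity_index(squashing))          # close to zero
```

Both example matrices are invented; the interesting empirical claim is that the matrices of a few particular Munsell chips score like the second one.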

Here is a graph showing the degree of singularity of the matrices corresponding to the different Munsell chips.

You see that there are essentially four peaks to the graph, and they correspond to four Munsell chips, namely those with colors red, yellow, green and blue.

And this reminds us of something.

In the 1970s, two anthropologists at Berkeley, Brent Berlin and Paul Kay, studied which colors people in different cultures have names for. They found that there were certain colors that were very frequently given names. Here is the graph showing the number of cultures in the so-called “World Color Survey” which had a name for each of the different Munsell chips.

And here we see something very surprising. The peaks in this graph, derived from anthropological data, correspond very closely to the peaks in the graph I just showed you of the singularity of the Munsell chips.

Here I have superimposed contour plots of the two previous graphs. You see that the peaks of the black contour plots of the singularity data correspond to within one chip of the anthropological data, shown as flat colored areas.

It is as though those colors which tend to be given names are precisely those simple colors that project incoming light into a smaller-dimensional subspace of the three-dimensional space of possible lights.

It’s worth mentioning that Berlin and Kay, and more recently Kay and Regier, have been seeking explanations of their anthropological findings. Though there are some current explanations, based on a combination of cultural and perceptual effects, which do a good job of explaining the boundaries between different color names, no one up to now has been able to explain the particular pattern of peaks of naming probability that we have here. And in particular, the red/green and blue/yellow opponent channels proposed on the basis of Hering’s findings do not provide an explanation.

On the other hand it does seem reasonable that names should most frequently be given to colors that are simple in the sense that when you move them around under different illuminations, their reflections remain particularly stable compared to other colors.

So in my opinion the finding that we are able to so accurately predict color naming from first principles, using only the idea of the sensorimotor approach, is a great victory for this approach.

There is another quite independent victory of the sensorimotor approach to color that concerns what are called unique hues. These are colors that are judged by people to be pure, in the sense that they contain no other colors. There is pure red, green, yellow and blue, and people have measured the wavelengths of monochromatic light which provide such pure sensations.

Unfortunately, the data are curiously variable, and seem to have been changing gradually over the last 50 years. Furthermore, the data have not been explained from neurophysiological red/green and yellow/blue opponent channels.

The dots in this graph show the empirical data on the channel activations observed to obtain unique red, yellow, green and blue. Instead of crossing at right angles, the data are somewhat skewed.

What this means is that, for example, in order to get the sensation of absolutely pure red, you cannot just have the red channel maximally active. You have to have a little bit of activation in the yellow channel as well.

Similarly, to get unique blue, you need not just activation in the blue channel, but also some activation in the green channel.

On the other hand it is easy, on the basis of the matrices that we have calculated for colored chips, to make predictions about what people will judge to be pure lights. And these predictions turn out to be spot on the empirical data on observed unique hues. In fact the black lines in the graph are the predictions from the sensorimotor approach.

Another fact about unique hues is their variability. The small colored triangles on the edge of the diagram here on the right show the wavelengths measured in a dozen or so different studies to correspond to unique red, yellow, blue and green. You see the data are quite variable. The colored lines are the predictions of variability proposed by the sensorimotor approach. Again, the agreement is striking.

Incidentally we can also account for the fact that the data on unique hues has been changing over recent years. We attribute this to the idea that in order to make the passage from surfaces to lights, people must have an idea of what they call natural white light. And this may have been changing because of the transition from incandescent lighting to neon lighting used more often today.

As a final point about color and the sensorimotor approach, I’d like to mention some experiments being done by my ex-PhD student Aline Bompas. Here she is wearing what look like trendy psychedelic spectacles.

The effect of these spectacles is to make it so that when she looks to the right, everything is tinged with yellow, and when she looks to the left, everything is tinged with blue. Now under the sensorimotor theory, this is a sensorimotor dependency which the brain should learn and grow accustomed to. After a while, we predict, people wearing such spectacles should come to no longer see the color changes. Furthermore, once people are adapted, if they then take off the spectacles, they should then see things tinged the opposite way. For example if they look at a grey spot on a computer screen, when they turn their head one way or the other they should have to adjust the spot to be more yellowish or more blueish for it to appear grey. And this should happen, despite the fact that they are not wearing any spectacles at all.

Aline Bompas has been doing interesting experiments which do indeed confirm these predictions.

Another application of the sensorimotor approach to the qualities of sensation concerns body sensation.

Why is it that when I touch you on the arm, you feel it on your arm? You might think that the answer has something to do with the fact that the sensory stimulation is relayed up to a somatosensory map in parietal cortex where different body parts are represented, as in the well-known Penfield homunculus.

But the fact is that this doesn’t explain anything. The fact that a certain brain area represents a certain body part doesn’t explain what it is about that brain area which gives you sensation in that particular body part. What is it about the arm location in the somatosensory map which gives you that “arm” feeling rather than, say, the foot feeling, or any other feeling?

The sensorimotor approach claims on the contrary that what constitutes the feel of touch on the arm is a set of potential changes that could occur: the fact that moving your arm while it is being touched changes the incoming tactile stimulation, whereas moving, say, your foot while you’re being touched on the arm changes nothing; the fact that if you look at your arm while it is being touched, you’re likely to see something touching it, whereas if you look at, say, your foot, you’re not likely to see anything touching it.

What constitutes that arm-feel is the set of all such potential sensorimotor dependencies.

Now if this is true, it makes an interesting prediction. It predicts that if we were to change the systematic dependencies, then we should be able to change the associated feel. This is exactly what is done in the Rubber Hand Illusion.

In the RHI a person watches a rubber hand being stroked while at the same time their own real hand is stroked simultaneously. Most people after a few minutes get the peculiar impression that the rubber hand belongs to them. This is measured by a questionnaire and by a behavioural response, namely indicating the felt position of the index finger.

This result is very much in keeping with the predictions of the sensorimotor approach. With my student Camila Valenzuela-Moguillansky we are working on this phenomenon, in particular with regard to pain. As shown in the lower figure, if we simultaneously stimulate the real hand and the rubber hand with a painful heat stimulus, when people have transferred ownership of their hand to the rubber hand, they feel less pain in the real hand.

It’s also possible to use different-size rubber hands, as here, and give people the impression that their real hands are bigger or smaller than they really are.

Yet another application of the sensorimotor approach I would like to mention concerns the perception of space. In this work, done with my mathematician student David Philipona, we showed how it’s possible for a brain to deduce the algebraic group structure of three-dimensional space by looking at the sensorimotor laws linking sensory input to motor output. We did this for the case of a simulated rat, which had eyes and a head that moved and pupils that contracted. The sensory input was multimodal and came from vision, audition, and tactile input from the whiskers. We have been looking further into how this approach might help in robotics for multimodal sensory fusion and calibration.

Up to now I’ve looked at how the sensorimotor approach can deal with the ineffability and the structure of the qualities of feel.

Now let’s look at the third mystery of feel, the question of presence, i.e. why people say “there’s something it’s like” to have a feel.

I had already mentioned the difficulty of really understanding what it means to say that there’s something it’s like to feel, and that to solve this problem we might use an operational definition and proceed by contrast. This could consist in noting that there are some processes in the brain and nervous system, namely autonomic functions on the one hand and thoughts on the other, which presumably do not possess the mysterious “something it’s like”.

The difficulty with the traditional way of thinking about feel as being generated by the brain is that there seems to be no way we could conceive of why autonomic and thought mechanisms could generate no feel, whereas sensory mechanisms would.

Under the sensorimotor view, on the other hand, we are no longer searching for physical mechanisms which generate feel. So instead of searching for physical or physiological mechanisms that do or do not generate the “something it’s like”, we can search for characteristics of our interaction with the environment of which one could say that they correspond to the notion of feeling like something.

If you ask yourself, let’s say in the case of feeling the softness of a sponge why there’s something it’s like to do this, I think you come to the conclusion that the reason there’s something it’s like is pretty obvious: you really are doing something, not just thinking about it… or letting your brain deal with it automatically.

But then what is it about a real interaction with the world that allows you to know that you really are having such a real interaction? How do you know, when you’re squishing a sponge, that you REALLY ARE squishing it, and not just thinking about it, hallucinating or dreaming about it? The answer I think lies with four aspects of real-world interactions: richness, bodiliness, insubordinateness and grabbiness.

First of all, the world is rich in details. There is so much information in the world that you cannot possibly imagine it all. If you’re just thinking about squishing a sponge, you cannot imagine all the different possible things that might happen when you press here or there. If you’re imagining a visual scene, you need to rely on your own inventiveness to imagine all the details. But if you really are looking at a scene, then wherever you look, the world provides infinite detail.

So richness is a first characteristic of real-world interactions that distinguishes them from imagining or thinking about them.

Bodiliness is the fact that voluntary motions of your body systematically affect sensory input. This is an aspect of sensory interactions which distinguishes them from autonomic processes in the nervous system and from thoughts.

Sensory input deriving from visceral autonomic pathways is not generally affected by your voluntary actions. Your digestion, your heartbeat, the glucose in your blood, although they do depend somewhat on your movements, are not as intimately linked to them as your sensory input from your visual, auditory and tactile senses. If you are looking at a red patch and you move your eyes, etc., then the sensory input changes dramatically. If you are listening to a sound, any small movement of your head immediately changes the sensory input to your ears in a systematic and lawful way. If you’re thinking about a red patch of color or about listening to a sound, then moving your eyes, your head, your body, does not alter the thought.

Note that the idea that bodiliness should be a test of real sensory interactions is related to the fact that people often say that a way of testing whether you are dreaming is to make a voluntary action that has an effect on the environment, like switching on a light.

But note now the interesting case of proprioception. Here is a case where we definitely have bodiliness, since voluntary limb movements do systematically affect incoming proprioception. On the other hand, I don’t think proprioception really is felt in the same way that other sensory feels are felt.

Bodiliness by itself seems therefore not to be a guarantee that a sensation will be felt.

Indeed the reason bodiliness is not a perfect guarantee of a sensation being real is that, for a sensation to be real, bodiliness must actually be incomplete. What characterises sensations coming from the world is precisely the fact that they are not completely determined by our body motions. The world has a life of its own, and things may happen: mice may move, bells may ring, without us doing anything to cause this. I call this insubordinateness. The world partially escapes our control.

And then there is grabbiness. This is the fact that sensory systems in humans and animals are hard-wired in such a way as to peremptorily interfere with cognitive processing. What I mean is that when there is a sudden flash or loud noise, we react automatically by orienting our attention towards the source of the interruption. This is an objective fact about the way some of our sensors (namely precisely those that we say we feel) are wired up. Visual, auditory, tactile, olfactory and gustatory systems possess sudden-change (or “transient”) detectors that are able to interrupt my ongoing cognitive activities and cause an automatic orienting response. On the other hand a sudden change in my blood sugar, or in other autonomic pathways, like a sudden vestibular or proprioceptive change, will not cause exogenous orienting. Of course such changes may make me fall over, or become weak, for example, but they do not directly prevent my cognitive processing from going on more or less as normal, although there may of course be indirect effects through the fact that I fall over or become weak.

My idea is that what we call our real sense modalities are precisely those that are genetically hard wired with transient detectors, so as to be able, in cases of sudden change, to interrupt our normal cognitive functioning and cause us to orient towards the change. Those other, visceral, autonomic sensing pathways, are not wired up this way. It is as though normal sense modalities can cause something like a cognitive “interrupt”, whereas other sensing in the nervous system cannot.

Note that grabbiness also allows us to understand why thoughts are not perceived as real sensations. If you are seeing or hearing something, any change in the environment immediately creates a signal in the transient detectors and alerts you that something has happened. But imagine that overnight, neurons die in your brain that code the third person of the Latin verb “amo”. Nothing wakes you up to tell you this has happened. To know it, you have to actually think about whether you still remember the third person of amo. In general, except in the case of obsessions, thoughts and memory do not by themselves interrupt your cognitive processing in the way that loud noises, sudden flashes or pungent smells cause automatic orienting.

Grabbiness is particularly important in providing sensory feel with its “presence” or “what it’s like”. I would like to illustrate this with the example of seeing.

When we see a visual scene, we have the impression of seeing everything, simultaneously, continuously, and in all its rich, detailed splendor. The visual scene imposes itself upon us as being “present”. Part of this presence comes from the richness, bodiliness and insubordinateness provided by vision. The outside world is very detailed, much more so than any imaginable scene. It has bodiliness because whenever we move our eyes or body, the input to our eyes changes drastically. And it is insubordinate because our own movements are not the only thing that can cause changes in input: all sorts of external changes can also happen.

But there is also grabbiness. Usually, if something suddenly changes in the visual scene, transient detectors in the visual system automatically register it and orient your attention to it, so you see the change, as in this movie:

But if you make the change so slow that the transient detectors don’t work, then an enormous change can happen in a scene without your attention being drawn to it, like in this movie: (I get a can’t play message) 

where almost a third of the picture changes without you noticing it.

Another way of preventing transient detectors from functioning normally is to flood them with additional transients, like here where the many white “mudsplashes” prevent you noticing the transient which corresponds to an important picture change. (Actually, I can perfectly well see the large tree appear and disappear at central right regardless of blocked areas appearing and disappearing)

These ‘slow change’ and mudsplash demonstrations are part of a whole literature on “change blindness”. Change blindness can also occur if the interruption between scenes that causes the transients to be drowned out is caused by flicker in the image, or by eye saccades, blinks, or film cuts, or even by real life interruptions.
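A toy transient detector makes the logic concrete (my own illustration, not a model from the change-blindness literature): flag a location only when luminance changes faster than a threshold between consecutive frames. An abrupt change trips it, while the same total change spread over many frames does not.

```python
import numpy as np

# Toy "transient detector": flag a frame transition only when luminance
# somewhere changes faster than a threshold between consecutive frames.

def transients(frames, threshold=0.2):
    diffs = np.abs(np.diff(frames, axis=0))      # frame-to-frame luminance change
    return (diffs > threshold).any(axis=(1, 2))  # per transition: anything grabbed?

blank = np.zeros((8, 8))
changed = np.full((8, 8), 1.0)

abrupt = np.stack([blank] * 5 + [changed] * 5)        # one sudden switch
slow = np.stack([blank + t / 9 for t in range(10)])   # same change, gradual

print(transients(abrupt).any())   # the sudden change is detected
print(transients(slow).any())     # the gradual one slips through
```

The frame sizes and threshold are arbitrary; the design point is that detection depends on the rate of change, which is why slow changes and transient-flooding mudsplashes both defeat it.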

So to summarize up to now:

I have shown how the new view of feel as a sensorimotor interaction with the environment can explain the three mysteries of feel: its ineffability, the structure of its qualities, its presence. These are all explicable in terms of objective aspects of the sensorimotor laws that are involved when we engage in a sensorimotor interaction with the environment.

But these are all aspects of the QUALITY of feel. You may note that I have not at all talked about how, when you have a feel, you can have the impression of consciously experiencing that feel.

But I think this poses no theoretical problem. I would like to claim that what we mean by consciously experiencing a feel is: cognitively accessing the quality of the sensorimotor interaction we are currently engaged in.

As an example, take the opposite case:

Take driving down the highway while you think of something else. When you do this you would not say you are in the process of experiencing the driving feeling. For you to actually experience something you have to be concentrating your attention on it; you have to be cognitively engaged with the fact that you are exercising the particular sensorimotor interaction involved.

Illustrations of the role of attention in perception are well known in psychology.

One very impressive, practical application of this is in the domain of traffic safety. There is a phenomenon known to researchers studying road accidents called “LBFTS”: “Looked but failed to see”. It turns out that LBFTS is the second most frequent cause of road accidents after drunken driving. The phenomenon consists in the fact that the driver is looking straight at something, but for some reason doesn’t see it.

Particularly striking cases of this occur at railway crossings. You might think that the most frequent accident at a railway crossing would be the driver trying to get across the track quickly right before the train comes through. But in fact it's found that the most frequent cause of accidents at railway crossings is exactly the opposite: the train is rolling quietly across the crossing and a driver comes up and, although he is presumably looking straight ahead of him at the moving train, simply doesn't see it, and crashes directly into it. If you do a search on the net for "car strikes train" you'll find hundreds of examples in local newspapers like this one.

This shows that what you look at does not determine what you see. Here's another example: you may think it says here "The illusion of seeing". Look again. There are actually two "of"s. Sometimes people take minutes before they discover this.


The reason is that seeing is not passively receiving information on your retina. It is interrogating what's on your retina and making use of it. If your interrogation assumes there's only one word "of", then you simply don't see that there are two. If you're driving across the railway crossing, even though your eyes are on the train, if you're thinking about something different, you simply don't see the train, and … bang.

Psychologists are of course very interested in attention, and do interesting experiments to test your ability to focus your attention on something when all sorts of other things are going on in the visual field.

Here's an example made by my former student Malika Auvray, where you have to follow the coin under the cup. It's a bit tricky because there are lots of hands and cups all moving around: so concentrate!

At the end of the sequence: did you see anything bizarre? It was the green pepper replacing one of the cups. Many people don't notice this at all, presumably because they're busy following the coin. And this is despite the fact that the green pepper is in full view and perfectly obvious.

The demo I just showed is a poor version of a truly wonderful demo made by Dan Simons, where a gorilla walks through a group of people playing a ball game, and where you simply don’t see the gorilla even though it’s in full view.

Transport for London has a reworked version of this that they use as an advertisement for people to drive carefully, and you can find it on youtube:

So in conclusion up to now:

Consciously experiencing a feel requires you first to be engaged in the skill implied by that feel. If the skill has the properties that sensory feels have, that is, if it has richness, bodiliness, insubordinateness and grabbiness, then it will have the sensory presence or "what it's like" that real sensory feels possess.

If then you are attending, or cognitively accessing the feel, you will be conscious that you are doing so.

But wait, there’s a problem: who is “you”?!

It doesn't make much sense to say that a person or an agent is consciously experiencing the feel, unless the person or agent exists as a person, that is, unless the agent has what we call a SELF.

Is this a problem for science? Philosophers have looked carefully at the problem posed by the notion of self and come to the conclusion that though the problem is tricky, it is not a “hard” problem in the same sense as the problem of feel was.

One aspect of the self is what could be called the cognitive self, which involves a hierarchy of cognitive capacities.

At the simplest level is “self-distinguishing”, that is the ability for a system or organism to distinguish its body from the outside world and from the bodies of other systems or organisms.

The next level is "self-knowledge". Self-knowledge in the very limited sense I mean here is something a bird or mouse displays as it goes about its daily activities. The animal exhibits cognitive capacities like purposive behavior, planning, and even a degree of reasoning. To do this its brain must distinguish its body from the world, and from other individuals. On the other hand, the bird or mouse as an individual presumably has no concept of the fact that it is doing these things, nor that it even exists as an individual.

Knowledge of self-knowledge is situated at the next level of my classification. It can lead to subtle strategies that an individual can employ to mislead another individual, strategies that are really seen only in primates.

Knowledge of self-knowledge is most typically human, and may have something to do with language. (It has everything to do with language!) It underlies what philosopher Daniel Dennett calls the “intentional stance” that humans adopt in their interactions with other humans. The individual can have a “Theory of Mind”, that is, it can empathize with others, and interpret other individuals’ acts in terms of beliefs, desires and motivations. This gives rise to finely graded social interactions ranging from selfishness to cooperation and involving notions like shame, embarrassment, pride, and contempt.

I have called all these forms of the cognitive self "cognitive" (not supernatural) because they involve computations that seem to be within the realm of symbol and concept manipulation. (Strange that although ASD/Asperger people are "granted" cognitive understanding, empathy and mind-reading, psychologists claim that this skill "doesn't count" – only "hooky-spooky" non-cognitive mind-reading counts.)

There seems to be no conceptual difficulty involved in building these capacities into a robot. It may be difficult today, particularly as we don't know very well how to make devices that can abstract concepts; furthermore, we don't currently have many robot societies where high-level meta-knowledge of this kind would be useful.

But ultimately I think the consensus is that there is no logical obstruction ahead of us.

So I think we can say that the cognitive self is…

Accessible to a robot

On the other hand there does still seem to be something missing. We as humans have the strong impression that there is someone, namely ourselves, "behind the commands". We are not just automata milling around doing intelligent things: there is a pilot in the system, so to speak, and that pilot is "I". (Asperger's are constantly accused of being robotic, with no interior awareness – a "lie" that demonstrates that NTs are not empathetic. Social empathy merely requires repeating scripted verbal expressions that are believed to be "signs and gestures" of genuine emotional experience, "sympathetic vibrations" of something "mystical within", but which may lack ANY authentic comprehension of another person's experience, either "mystical" or "cognitive".)

It is I doing the thinking, acting, deciding and feeling. How can the self seem so real to us, and who or what is the “I” that has this impression?

And here I want to appeal to current research in social and developmental psychology. Scientists in these fields agree that although we have the intimate conviction that we are an individual with a single unified self, the self is actually a construction with different, more or less compatible facets that each of us gradually builds as we grow up.

The idea is that the self is a useful abstraction that our brains use to describe, first to others and then later to ourselves, the mental states that “we” as individual entities in a social context have. It is what Dennett has called a narrative fiction.

But then how can the self seem to us to be so real? The reason is that seeming real is part of the narration that has been constructed. The cognitive construction our brains have developed is a self-validating construction whose primal characteristic is precisely that we should be individually and socially convinced that it is real.

It’s a bit like money: money is only bits of metal or paper. It seems real to us because we are all convinced that it should be real. By virtue of that self-validating fact, money actually becomes very real: indeed, society in its current form would fall apart without it.

The self is actually even more real than money because it has the additional property that it is self-referring: like some contemporary novels, the “I” in the story is a fiction the “I” is creating about itself.

In which case, shouldn’t we be able to change the story in mid course? If our selves are really just “narrative fictions” then we would expect them to be fairly easy to change, and by ourselves furthermore!

But actually this does not work. It is necessarily part of the very construction of the social notion of self, that we must be convinced that it is very difficult to change our selves. After all, society would fall apart if people could change their personalities from moment to moment.

But couldn't we by force of will just mentally overcome this taboo? If the self is really just a story, changing the self should surely in fact be very easy. It turns out that we can, under some circumstances, break the taboo of thinking that our selves are impossible to change, and flip into altered states where we become different, or even someone else. Such states can be obtained voluntarily through a variety of "culturally bound" techniques like possession trances, ecstasies, channeling provoked in religious cults, oracles, witchcraft, shamanism or other mystical experiences (latah, amok, koro), and hypnosis; or sometimes involuntarily under strong psychological stress (physical abuse, brainwashing by sects, in religious cults and in war – post-traumatic stress disorder, dissociative identity disorder…).

Hypnosis is interesting because it is so easy to induce (is it?), confirming the idea that the self is a story we could easily control if we only decided to break the taboo. Basic texts on hypnosis generally provide an induction technique that can be used by a complete novice to hypnotize someone else. This suggests that submitting to hypnosis is a matter of choosing to play out a role that society has familiarized us with, namely "the role of being hypnotized". It is a culturally accepted loophole in the taboo, a loophole which allows people to explore a different story of "I". An indication that it is truly cultural is that hypnosis only works in societies where the notion is known. You can't hypnotize people unless they've heard of hypnosis. (??!!)

This is not to say that the hypnotic state is a pretense. On the contrary, it is a convincing story to the hypnotized subject, just as convincing as the normal story of “I”. So convincing, in fact, that clinicians are using it more and more in their practices, for example in complementing or replacing anesthesia in painful surgical operations.

There is also the fascinating case of Dissociative Identity Disorder (formerly called Multiple Personality Disorder). A person with Dissociative Identity Disorder may hear the voices of different “alters”, and may flip from “being” one or other of these people at any moment.

The different alters may or may not know of each other's existence. The surprising rise in the incidence of Dissociative Identity Disorder/MPD over the past decades signals that it is indeed a cultural phenomenon. Under the view I am taking here, Dissociative Identity Disorder/MPD is a case where an individual resorts to a culturally accepted ploy of splitting their identity in order to cope with extreme psychological stress. Each of these identities is as real as the others, and as real as a normal person's identity – since all are stories.

In summary, the rather troubling idea that the sense of self is a social construction seems actually to be the mainstream view of the self in social psychology.

If this view is correct, then we can confirm that there really is logically no obstacle to our understanding the emergence of the self in brains. Like the cognitive aspect of the self, the sense of "I" is a kind of abstraction that we can envisage would emerge once a system has sufficient cognitive capacities and is immersed in a society where such a notion would be useful. The self is: Accessible to a robot

So we can now finally come to the conclusion. The idea is that I have a conscious phenomenal experience when this social construct of “I” engages cognitively in the exercise of a skill. If the skill is a purely mental skill like thinking or remembering it will have no sensory quality. But if it involves a sensorimotor interaction with the environment, then it will have richness, bodiliness, insubordinateness, and grabbiness. In that case it will have the “presence” or “what it is likeness” of a sensory experience.

Notice that there are two different mechanisms involved here. The outside part, the knowing part, is a cognitive thing: it involves cognitive processing, paying attention. There is nothing magical about this, however; it is simply a mechanism that brings cognitive processing to bear on something so that that thing becomes available to one's rational activities, to one's abilities to make decisions, judgments, possibly linguistic utterances about something. It is perhaps what Ned Block calls access consciousness.

The inside part is the skill involved in a particular experience. It is something that you do. Your brain knows how to do it, and has mastery of the skill in the sense that it is tuned to the possible things that might happen when it …

The outside, cognitive part determines WHETHER you sense the experience.

The inside, skill part, determines WHAT the experience is like.

In summary, the standard view of what experience is supposes that it is the brain that creates feel. This standard view leads to the “hard” problem of explaining how physico-chemical mechanisms in the brain might generate something psychological, out of the realm of physics and chemistry. This explanatory gap arises because the language of physics and chemistry is incommensurable with the language of psychology.

The sensorimotor view overcomes this problem by conceiving of feel as a way of interacting with the environment. The quality of feel is simply an objective quality of this way of interacting. The language with which we describe such laws objectively and the language we use to describe our feels are commensurable, because they are the same language. What we mean when we say there is something it's like to have a feel can be expressed in the objective terms of richness, bodiliness, insubordinateness and grabbiness. What we mean when we say we feel softness or redness can be expressed in terms of the objective properties of the sensorimotor interaction we engage in when we feel softness or redness.

For further information, here is the address of my web site.





How subtle sensory signals combine in the brain / Research

Looking for papers / articles about sensory processing that might provide clues as to ASD / Asperger sensory experience.

March 23, 2017, Brown University

New research explains how the developing brain learns to integrate and react to subtle but simultaneous sensory cues — sound, touch and visual — that would be ignored individually.

A new study describes a key mechanism in the brain that allows animals to recognize and react when subtle sensory signals that might not seem important on their own occur simultaneously. Such “multisensory integration” (MSI) is a vital skill for young brains to develop, said the authors of the paper in eLife, because it shapes how effectively animals can make sense of their surroundings.

For a mouse, that ability can make the difference between life and death. Neither a faint screech nor a tiny black speck in the sky might trigger any worry, but the two together strongly suggest a hawk is in the air. It matters in daily human life, too. An incoming call on a cell phone can be more noticeable when it is signaled visually and with sound, for example.

“It’s really important to understand how all of our senses interact to give us a whole picture of the world,” said study lead author Torrey Truszkowski, a neuroscience doctoral student at Brown University. “If something is super salient in the visual system — a bright flash of light — you don’t need the multisensory mechanism. If there is only a small change in light levels, you might ignore it — but if in the same area of visual space you also have a piece of auditory information coming in, then you are more likely to notice that and decide if you need to do something about that.”

To understand how that happens, Truszkowski and her team performed the new study in tadpoles. The juvenile frogs turn out to be a very convenient model of a developing MSI architecture that has a direct analog in the brains of mammals including humans.

Neuroscientists call the key property the tadpoles modeled in this study, the ability of brain cells and circuits to sometimes respond strongly to faint signals, "inverse effectiveness." Study senior author Carlos Aizenman, associate professor of neuroscience and member of the Brown Institute for Brain Science, said the new paper represents "the first cellular-level explanation of inverse effectiveness, a property of MSI that allows the brain to selectively amplify weak sensory inputs from single sources that represent multiple sensory modalities."

Tadpole trials

To achieve that explanation at the level of cells and proteins, the researchers started with behavior. Tadpoles swimming in a laboratory dish will speed up — as if startled — when they detect a strong and sudden sensory stimulus, such as a pattern of stripes projected from beneath or a loud clicking sound. In their first experiment, the researchers measured changes in swimming speed when they provided strong stimuli, then weaker stimuli, and finally weaker stimuli in combination.

What they found is that more subtle versions of the stimuli — for example, stripes with only 25 percent of maximum contrast — barely affected swim speed when presented alone. But when such subtle stripes were presented simultaneously with subtle clicks, they produced a startle response as great as when full-contrast stripes were projected on the dish.

To understand how that works in the brain, the researchers conducted further experiments where they made measurements in a region called the optic tectum where tadpoles process sensory information. In mammals such as humans, the same function is performed by cells in the superior colliculus. The tadpole optic tectum sits right at the top of the brain. Given that fortuitous position and the animals’ transparent skin, scientists can easily observe the activity of cells and networks in living, behaving tadpoles using biochemistry to make different cells light up when they are active.

In many individual cells and across networks in the optic tectum, the researchers found that neural activity barely budged when tadpoles saw, heard or felt a subtle stimulus individually, but it jumped tremendously when subtle stimuli were simultaneous. The “inverse effectiveness” apparent in the swim speed behavior had a clear correlate in the response of brain cells and networks that process the senses.

The key question was how that inverse effectiveness works. The team had two molecular suspects in mind: a receptor for the neurotransmitter GABA, or a specific type of glutamate receptor called NMDA. In experiments, they used chemicals to block each in turn. They found that blocking GABA receptors didn't affect inverse effectiveness, but that blocking NMDA receptors made a significant difference.

NMDA’s role makes sense because it is already known to matter in detecting coincidence, for instance when the spiny dendrites of a neuron receive simultaneous signals from other neurons. Truszkowski said the study shows that NMDA is crucial for inverse effectiveness in MSI, though it might not be the only receptor at work.
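A crude way to see how a coincidence-detecting nonlinearity could produce inverse effectiveness is a threshold model. This is my own toy sketch, not the model in the eLife paper, and every number in it is invented: each weak input alone stays below the response threshold, but together they cross it, so the combined response far exceeds the sum of the individual responses.

```python
# Toy threshold model of multisensory "inverse effectiveness"
# (illustrative only; loosely analogous to NMDA-style coincidence
# detection, with made-up stimulus strengths, threshold and gain).

def response(total_input, threshold=1.0, gain=10.0):
    """Silent below threshold, linearly amplified above it."""
    return max(0.0, (total_input - threshold) * gain)

weak_visual = 0.75   # hypothetical weak stimulus strengths
weak_audio = 0.75

print(response(weak_visual))               # 0.0 -> ignored alone
print(response(weak_audio))                # 0.0 -> ignored alone
print(response(weak_visual + weak_audio))  # 5.0 -> amplified together
```

The combined response (5.0) is far greater than the sum of the individual responses (0.0), mirroring how the tadpoles' tectal neurons barely budged for single subtle stimuli but jumped when the stimuli were simultaneous.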

Developing the senses

The research is part of a larger study of multisensory integration in Aizenman’s lab. Last year, as part of the same investigation, the researchers found that developing tadpole brains refine their judgment of whether stimuli are truly simultaneous as they progressively change the balance of excitation and inhibition among neurons in the optic tectum.

Aizenman’s lab seeks to understand how perception develops early in life, not only as a matter of basic science but also because it could provide insights into human disorders in which sensory processing develops abnormally (or differently!), as in some forms of autism.


Public Schools / Jr. Prisons?

American social institutions that “serve” children sanction child abuse. 

What is incredible is that we have to pass laws that make institutional child abuse illegal, otherwise it’s assumed by those that care for and teach children that abuse is okay.


In 2009, after completing its nationwide investigation into the use of restraint and seclusion in public schools, the U.S. Government Accountability Office (GAO) released its report on Selected Cases of Death and Abuse at Public and Private Schools and Treatment Centers. The investigation concluded that there were “no federal laws restricting the use of seclusion and restraints in public and private schools and widely divergent laws at the state level.” It also stated that there were “hundreds of cases of alleged abuse and death related to the use of these methods on school children during the past two decades.”

The report gave examples of these cases, including a seven-year-old purportedly dying after being held face down for hours by school staff, five-year-olds allegedly being tied to chairs with bungee cords and duct tape by their teacher and suffering broken arms and bloody noses, and a 13-year-old reportedly hanging himself in a seclusion room after prolonged confinement.

In terms of special needs children, the report found that those with disabilities are reportedly being restrained and secluded in public and private schools and other facilities, sometimes resulting in injury and death. The 10 closed cases examined by the GAO revealed that children with disabilities were sometimes restrained and secluded even when they did not appear to be physically aggressive and their parents did not give consent. They also were restrained facedown or with other methods of restraint that block air to the lungs, which can be deadly. The teachers and staff in these cases were often not trained in the use of restraint techniques, and they continue to be employed as educators. So if they are trained, it's okay?

Even worse, the reasoning behind the use of these practices makes every student vulnerable, not just those with special needs. Whether it is for convenience, discipline, or is, as some would claim, “therapy,” the misuse of restraint especially has become a dangerous standard of practice without any evidence to back it. According to the Alliance to Prevent Restraint, Aversive Interventions & Seclusion (APRAIS), research shows that aversive interventions, restraint, and seclusion carry no therapeutic value, and as we’ve seen, can compromise health and safety.

Let’s Just Call It Abuse


Who knew that torture subjects included American school children?

There’s nothing more frustrating in the advocacy world than opposing interpretations of the same word. Solutions are much more difficult to come by if one side feels there is no problem in the first place. We of course see this with the very label of our community—autism. Many in the community interpret autism, or being autistic, as a privilege, a gift; even something you are rather than something you have. Others see it as a “have” condition of pain, discomfort, limitation and risk.

Similar multiple meanings apply to restraint. Many educators believe restraints are used to maintain the safety and order of the classroom and students, while those who oppose their use believe they are dangerous to the physical and mental health of children, and may result in death. While restraint may be used for instances when immediate danger threatens any individual, its misuse for the purpose of controlling behavior, disciplining, or asserting authority should be called something else entirely.

Seclusion rooms—recently referred to as “scream rooms”—are not only harmful, they defeat the entire purpose of inclusivity. Unlike restraint, which has the “imminent danger” exception to the rule, forcing a child into an empty room, closet, stall or cage has no exception. “Aversive intervention,” which is a preferable and friendlier term over abuse or torture, encompasses restraint and seclusion, but also covers the use of random disciplinary actions ranging from force-feeding and forced exercise, to duct-taping and verbal assault. At a minimum, these practices cause trauma and regression in children with autism and are quite simply abusive.

The Dangers

It is estimated that more than 200 students, many with disabilities, have died due to seclusion and restraint practices being used in schools over the last five years. While restraining someone against their will is typically considered a crime, its continued allowable misuse in schools can cause postural asphyxia, unintended strangulation, death due to choking or vomiting and being unable to clear the airway, death due to inability to escape in the event of fire or other disaster, cutting off of blood circulation by restraints, and nerve damage by restraints. Other dangers include post-traumatic stress disorder, heart, gastrointestinal and pulmonary complications, decreased appetite and malnutrition, dehydration, urinary tract infections, incontinence, anxiety and agitation, depression, loss of dignity, sleeping problems, increased phobias and increased aggression, including SIB (self-injurious behavior).

As one advocate recently pointed out, the use of these practices could also increase a child’s tendency to run or elope, behaviors that have their own set of risks. According to my organization, the National Autism Association (NAA), at least 80 individuals with autism were reported missing between September 2011 and February 2012 following elopement. Of those, 25 percent were students who left school grounds.

A Long Overdue Federal Bill

The good news is that federal legislation has recently been introduced to protect students from these dangerous practices. The Keeping All Students Safe Act, introduced by Senator Tom Harkin (D-IA), would provide protections to students across the country by prohibiting interventions that compromise health and safety. It would require that schools conduct a debriefing with parents and staff after a restraint is used, as well as plan for positive behavioral interventions that will prevent the use of restraints with the student in the future. It also would prohibit:

  • Aversive behavioral interventions that compromise health and safety.
  • Physical restraint that is life threatening, including physical restraint that restricts breathing.
  • Physical restraint if contraindicated based on the student’s disability, healthcare needs, or medical or psychiatric condition.
  • The use of seclusions and/or restraints in a student's Individualized Education Program (IEP) or any other behavioral plan.
  • Seclusion in locked and unattended rooms or enclosures.

What an improvement: teachers and staff will be required to tell parents the methods that were used to abuse their child.

See related posts on notorious child abusers Dr. Bruno Bettelheim and Dr. Matthew Israel.


See also:

A Violent Education

Corporal Punishment of Children in US Public Schools


Intuition is a matter of trust, but in what? / Re-Post

My morning cognitive ritual is a matter of sorting through mail from the unconscious. What news is there from sleep, from the dream dimension? I wake abruptly – no gentle transition: I pop awake like one of those red and white fishing bobbers, which has been briefly pulled underwater by a snag – or was it a fish toying with the bait? Up it pops, righting itself on the surface. Rarely can I go back to sleep, but jump out of bed and head straight for the kitchen and coffee. I revive the computer and quickly begin writing: the fish are biting.

Words begin to arrive, almost telegraphically, and a message unfolds. Can an Asperger write interestingly, when we are supposedly unable to venture beyond stilted and boring language? One image likens intuition to mail or telegrams – words – but the other, deeper source is visual: the pond or lake. And fish as food, creative nourishment, and carriers of messages from the depths.


Painting by Balletstar

A visual that I used when I was a child, and into young adulthood, was a compass, or the Arrow. My Arrow. It didn't really have much to do with navigating the environment like a magnetic compass or GPS. The Arrow was a built-in "clue pointer" that led me on, as if life is a treasure hunt. The Arrow created boundaries that let me know what actions were proper to me; a restriction, yes – but simultaneously it served as a pathfinder that allowed for discovery and experimentation. And I can identify that the episodes in my life that have taken me "off the correct path" coincided with The Arrow having disappeared; it was terrifying. Did I somehow lose my connection to the Arrow, or had it been a silly conceit?

Of course, as an adult, I’ve had “reasoning functions” available in addition to intuition, but often reason has yielded poor results; reason doesn’t take one very far in modern social contexts, and indeed, seems to make matters worse.

Reason needs a reasonable partner – healthy human interaction requires a “reasonableness” that sets aside impulses such as control, anger, aggressiveness, and selfish intent, dressed up in insincere language.  

“Empathy” (so touted by psychologists as “missing” in Asperger individuals) is different in Asperger types, and has everything to do with reasonableness as “the doorway” into sympathetic interaction with another.

I think also that we “sense” and intuit human states of mind, which curiously is “painful” – this pain is energy depleting, disturbing, and drives withdrawal; in a secure and quiet place we may then begin to contemplate the “other person’s” particular circumstances and reactions (emotions) to events in his or her life. This can’t be rushed, and unfortunately, in the fleeting and superficial social world, immediate emotional gestures are required with claims of “developmental disability” thrown at those who need time for a considered response. Immediate “emotional” responses are dictated; social humans are not free to be themselves.

In the inevitable “What have I learned?” department peculiar to modern society, I would say that I trusted intuition blindly when young and thought it (the Arrow) had failed me when things went wrong, and blamed myself. Why didn’t it work anymore? Where had it gone and was it my “fault” that I’d lost it? What I didn’t understand was that I had “grown up” and was tasked with negotiating The Real World with the tools I had (logic, patterns, strategies and logistics; critical analysis) which simply had little to no application to surviving modern social contexts, except when applied directly to projects on my desk. Intuition was still “there” but my “gut feelings” also transgressed the Social Order. A large number of the posts in this blog address why – The Social Order is harmful to natural human beings. Activity driven by a hierarchy of privilege in wealth, dishonesty, exploitation, cruelty, injustice and the disposability of living beings goes against everything I intuit as necessary to human fulfillment.


Did Einstein originate all the quotes attributed to him?

Modern social humans have set up a false either / or relationship between intuition and reason – and in the extreme, have made the two into enemies.