Asperger Individuals Accused of being Old-Fashioned

Re-Post from 2015

To say that man is a social animal is to say that he never lives in a world of his own choosing.

Gresham Sykes


Not too long ago I was contacted by a friend from high school via a Facebook account I thought I had closed. Like some world-devouring alien monster, Facebook never lets you go. I faced a dilemma in answering her question: Where have you been all these years?

I actually had little to say: as a concrete visual thinker, a stream of pictures flew through my mind, but it’s impossible to verbalize these fragments; it would be like trying to restore an ancient mosaic floor in a ruined Roman villa, including all the human events that it had supported. As the restorer of my life, I know what the mosaic portrays – perhaps Poseidon sinking a ship, with playful dolphins rescuing the drowning crew – but an onlooker sees nothing but disconnected patches that may suggest waves or the eye of a gigantic octopus.

My friend remembered me as I was in high school, several life changes ago.

This “social” event brought on confusion: she remembered who I was better than I do. Anecdotes piled on anecdotes: I’d obviously spent a lot of time with her, but remembered none of it. Apparently I’d eaten her mother’s Norwegian cooking; we’d gone to parties, studied for the SAT, worn bizarre clothing, gone to the beach and the city, flirted with boys – all the normal things kids do. All I could do was respond periodically, “Oh, yeah, that was fun” or “I can’t believe we did that.”


Who the Hell was I?

What I remember of high school is quite different: high anxiety, panic attacks, phobias, and interludes of OCD and depression. Then, in late sophomore year, the person she remembers appeared: mild mania took over, as if “the gods” had taken pity on me and concocted a dazzling new persona to replace my previously unhappy existence. I suspect that emerging bipolar symptoms were the consequence of enduring high levels of stress during childhood, including a severe allergy to social indoctrination. Asperger’s disorder wasn’t on anyone’s radar back then. No one noticed me at all, except when my behavior made me socially visible. Attention was negative, whether it was for excellent grades at school or a bout of panic, anxiety or sensory overload. I learned to avoid attention; many Asperger children likely feel the same. In a backhanded way, this lack of help with my particular personality required that I learn to help myself, a big asset in an increasingly chaotic and dysfunctional America. Developing adult behavior, which psychologists deem “defective,” was actually an asset. (Wow! Is that an indictment of American child-rearing?)

Many social children want attention so badly that they will “behave badly” in order to have someone notice them. After all, who does our society pay attention to? Psychopaths, sociopaths, serial killers, the morally bankrupt, the illegally rich and narcissistic parasites. Liars. And these predators rarely suffer consequences unless they are poor or belong to a minority.

We live in a claustrophobic society that offers few options for creating an authentic individual. Children are competing for attention with the inhabitants of a supernatural reality billed as “entertainment” but which is pervasive: a secular religion with its roots in the Old Testament, whether monstrous male machines that murder everyone in sight or equally perverted predatory females who are perpetually “turned on” by sexual aggression. These scenarios are concocted by the vast resources of corporations: the media, social media, the film and fashion industries, the advertising industry and gaming. A destructive system has emerged in which the attention that every child needs from its parents, and from adults in its environment, has become a lure that inverts the normal process. Adults are thoroughly engaged by violent “entertainment,” and children are parented by supernatural characters on the digital screen, large or small. American children are taught “normal and acceptable” behavior and morality (or the utter lack of it) by imaginary beings that demonstrate the worst behavior imaginable.

The refusal on the part of adults to impart to growing children moral competence, life skills, self-security, and most of all, awareness of the difference between material objects and real, live human beings, is astounding. The treatment of human beings as objects that may be abused, tortured and murdered, merely to fulfill the sick pleasure that drives seekers of power, is rightly called pornography. Children are taught that the natural need for love and guidance can be fulfilled by violence, domination, cruelty and bullying, and that the pain and anger that result from this betrayal can be solved by mass murder.

Asperger children are “accused” of being “too adult and old-fashioned” – which reveals prejudice toward children who reject THE LIE that American culture is based on equal treatment, fair dealings, and truthful communication. After all, these are requirements for democratic government, which the United States abandoned long ago – if it ever was a democratic society, which would be difficult to demonstrate.

Things that Bug Me / Mostly Social Language

You might imagine this could be a long list, but this morning it’s a few things that people might think are ridiculous. Updated from last April.


1. FAKE CHEWING in TV commercials: An actor picks up a big fat burger, holds it in a position that hides his mouth, pretends to take a bite, lowers the burger and pretends to be chewing, but his facial motions look utterly wrong because the actor has nothing in his mouth. I know why the people who make commercials have the actors do this: they don’t want to ruin those perfect burgers concocted by a food stylist; they don’t want actors to have to swallow or spit out food during what may be dozens of “takes,” or to ruin the set or their makeup. It just LOOKS so fake.

2. FAKE EATING in TV shows: A group of actors is sitting around a table supposedly eating a meal. One character is doing most of the talking, and the others must look interested and pretend to be eating, so they play with their food, stirring or poking it with a fork. Some actors are really bad at this and repeatedly stab their food. It looks like they’re angry. Maybe they are.

3. EATING without the utensil touching the lips: This is a variation in which the actor pokes at her food and then inserts a fork or spoon into her mouth, but curls back her lips so they don’t touch the utensil. I guess it’s so her lipstick isn’t smudged, but it’s creepy.

4. DRINKING CUPS: This is a subset of fake eating. For good reason, the paper or foam cups used in TV commercials are empty; 16 oz. of water or other liquid would be a hazard on a set. But when an actor picks up the cup, he or she applies the force needed to lift a full cup; it jerks upward too fast; it’s so obvious. Why not put something non-liquid in the cup that weighs the same as a liquid would?

Strange language usage bugs a lot of people, but here’s one that really bugs me.

LIKE or SUCH AS: A pair of archaeologists are contemplating a clay pot they have dug from a Paleo-Indian site, stating that the only cooked foods for which they have found evidence are beans and corn. They go on and on about the pot and declare that it’s the perfect technology for cooking things LIKE beans and corn. What is like beans and corn? Shoes and socks? A Ford and a Chevy? This strange language plagues modern culture. Things that are what they are get constantly referred to as being ‘like’ themselves. If what is meant is that beans and corn can be cooked in this pot, but also other food, then use SUCH AS: It’s the perfect technology for cooking foods such as beans and corn. Otherwise, just say, “It’s the perfect technology for cooking beans and corn.”


AKA: “Something like a flood.”

TV news is rife with this stuff: What is something “like a war” “like a flood” “like an explosion” “like an epidemic” “like a Chanel handbag” – which only makes sense if you’re selling a fake.


Adjective abuse drives me batty.

I’m not rigid about casual language, but the use of adjectives is out of control:

INNOCENT VICTIMS: The rule of thumb is that unless you would use GUILTY VICTIMS, don’t use innocent. This applies to common exaggeration and the escalating misrepresentation of reality. The problem with escalation is this: words and phrases lose meaning when adjectives are used competitively. MY TRAGEDY is superlative to yours. This shows up in descriptions of loss that are absolute in “supernatural” terms. Whether the person who lost his or her life was 3 years old or 80; was a habitual criminal, an ordinary person, or exceptional in some way; whether the person died by neglect, directed violence, accident or natural cause – that individual was in life a paragon of virtue, love and generosity, spread joy to an entire community, practiced universal compassion and transformed people’s lives with a smile. Many of the people making this claim then admit that they didn’t know the deceased beyond saying “Hello” occasionally.

My reaction to this inflation of the “value” of the deceased is neither flippant nor dismissive: the point is that “death” has become a social competition for the family and friends of the person who died, especially if a reporter and cameraman appear. This is incomprehensible to an Asperger, for whom every person counts; being honest about the character of your loved one does not diminish the loss. Believe it or not, until recently one could say, “He was a drunk SOB, but I loved him, and will remember the bad times and the good.”

The use of “BACK” in conjunction with a word that already means back. Hint: “RE-” means back. “He returned back to his job as an editor” is redundant; “He returned to his job as an editor” says it all. I’m not complaining about average people: this is a mistake cultivated by writers, journalists, TV reporters and news readers.

GOING FORWARD: Unless you intend to “GO BACKWARD” please stop saying this!

The use of “O” – a letter of the alphabet – for “ZERO.” Is it that hard to remember that the former is a letter and the latter is a number?


Memoir / Impossible for an Asperger?

A photo that tells me of a time when my inner and outer experiences were undivided.


Rethinking a Life

A few thoughts on writing memoir:

How much talent is lost because society doesn’t like the package it comes in? The individual is a rare creature: she is self-made, not designated by a political system. A political statement of rights does not make one an individual: those rights are defined and held in check by the society that grants them. As a young woman who wanted to achieve financial equality and independence, I was told that the ticket to “getting in” to the system was to adopt the very structure that denied opportunity to women. I was also horrified to learn that I was expected to drop my gender at the door, as well as my personality, values, individual potential, and most surprisingly, talents that might benefit an employer. Another shock: I learned that this defacement is what men have been required to accept for centuries.

Individuality is a function of personal qualities that are cast against the vast historical canvas of culture, and in many ways the individual exists in opposition to that picture. Identity is a package prepared by generations of ancestors, as well as the living family, long before a child is born. Father and mother shape a child’s beliefs, behavior, and future. The larger society sets the rules of membership, which can be extremely harsh. The individual is born – and dies – when each of us is assigned a role dictated by ideas, prescriptions, and absolutes that the individual has no part in creating. The result is that when one looks into a mirror, a shadow feeling haunts the body: that is not just any face; it is my face, unique in all the world. Why, then, do I not know myself?

To write a memoir is to tear oneself loose from social conformity and to declare that one’s life is not the same as any other life, regardless of how similar human lives are. A modern trend in autobiography has led people to think that the writer has amazing secrets to reveal, and that he or she will do just that; why else would one write a book about a life that has yet to be concluded? A memoir is expected to erase the public person, to replace the mask with a livelier, racier, and more interesting person. Family members and close friends are expected to be shocked by revelations, and will claim that they did not know of secret individual choices on the part of someone they thought they knew. The public loves it when a willful individual goes bad. Confession and repentance, in the form of a serial memoir (the trek from talk show to talk show) or a best-selling book, return the stray reprobate to the group. In this sense, a memoir is a religious document.

What then, is an individual? The test is simple: Only an individual can care about the welfare of other individuals. The group, by definition, cannot. The group survives by enforcing conformity and does not recognize that each person has a valid interior life, only that inner lives are suspect. The group pressures and grooms the individual to vanish into a pre-assigned role: the American idea of an individual is someone who utterly conforms to social norms, but does something like skydive with a pet dog, or paint the bedroom orange, or pick a cartoon graphic for a credit card that claims, “I’m whacky! I’m crazy! I’m creative.”

From the point of view of a person with a mental illness, this is ludicrous: normal  people have no idea what crazy is. Any deviation from the suffocating religious-patriotic complex of American belief, including what have been called criminal acts, is increasingly regarded as mental deviation and not diversity. To be diagnosed as having a brain disorder automatically puts one outside of society, forcing one to embrace a strange, dangerous, and unsought individuality. The memoir of such a person cannot be separated from this predicament, which can be described as the terror and the mystery of the conventional.

A social view of life as a program to be fulfilled, and only completed at death.
“No one ought to be said to be happy until death and the last funeral rites.” – Ovid



Anthropomorphism in science / Julian Davies

10.1038/embor.2010.143 (click for article)

Anthropomorphism in science

Julian Davies


It is a common characteristic of our species to assign human emotion and behaviour to other creatures and even inanimate objects—just ask any car owner. Common examples of such anthropomorphisms involve animals and pets, especially dogs and cats. These domesticated species are sometimes considered to behave like us and think like us, becoming pseudo-human.

One might think that such sentimental anthropomorphism would be unlikely to spill over into the biological sciences, but this is not so. Microbiology seems particularly susceptible and the literature is littered with examples of bacteria having to ‘make a choice to use a particular substrate’ or a ‘decision to make a compound’ and even ‘needing something’. When bacterial conjugation was discovered in the 1950s, bacteria were even classified as males and females participating in sexual mating. I am sure that many of you will be able to come up with examples from other fields.

I would argue that in a number of instances anthropomorphic thinking has misdirected biological enquiry. It is often assumed that microbes in their natural environments are in a constant war of attrition for space and nutrients. Many publications speak of battlefields and the production of chemical weapons to permit one or more organisms to successfully exploit a particular environment. Does ascribing human militaristic means and ends to bacteria make sense? There is enormous diversity in microbial phyla and the biosphere is an extraordinarily complex collection of distinct organisms. A given soil sample might contain 10⁹ microbes per gram with a thousand or more species living happily together (an anthropomorphic statement if ever there was one). In the human gut, microbes number many trillions with upwards of 1,000 phylotypes; are they all engaged in lethal conflict with each other? Despite the fact that small molecules with antibiotic activity can be isolated from gut bacteria grown in the laboratory, there is no in situ evidence that they actually play such roles in the intestinal tract; it is equally likely that these molecules are mediating interactions with mucosal cells lining the gut. However, our ignorance of the workings of microbial communities in these environments is profound and remains tainted by our anthropomorphism.

Animism, Anthropomorphism, Zoomorphism, Therianthropy

General definitions:

noun: animism / The attribution of a soul to plants, inanimate objects, and natural phenomena. The belief in a supernatural power that organizes and animates the material universe.

I believe that the understanding of animism is incorrect, and that the common definition is itself an act of anthropomorphism. Modern conceptions such as “soul” reference human “specialness” and division from nature. “Supernatural power” references a mind-state created by humans (word concepts) that stands outside and above natural law and overrides nature’s ways. These are divisive concepts that do not reflect the profound continuity of early humans with the environment and everything in it. “Spirit” is existence  in all its permutations; humans are but one expression of interchangeable matter and energy. “Shape-shifting” is the belief in the ability (of certain people) to connect to other forms through this underlying energy. See Therianthropy.


noun: anthropomorphism / The attribution of human characteristics or behavior to a god, animal, or object.

Anthropomorphism is extremely popular and effective in advertising.


This “mistake” of attribution of human consciousness, will, and intent onto the entire content of the universe is all-encompassing, and prevents understanding of nonhuman reality.




noun: zoomorphism / Zoomorphism is the attribution of animal characteristics to humans. This is the opposite of anthropomorphism.

Zoomorphism characterizes much of mythology and persists today. It doesn’t get much notice, nor is it recognized for what it is and how ancient it is. Note that Batman doesn’t become a bat – he’s a man who wears a costume that endows him with “night-time” powers.


noun: therianthropy

Therianthropy is the ability of human beings to metamorphose into animals by means of shapeshifting; example – werewolves. It is also the depiction of gods or humans as part or all animal.

As if real wolves aren’t scary enough!


Romanized Egyptian god Anubis: jackal head.


Ancestor worship is a different and practical activity:

The Greek Pantheon: a highly volatile family, almost indistinguishable from “normal” humans.



Butchering the Human Body / A Social Activity


Mother-Daughter plastic surgery. Unfit mother? Mentally disturbed mother? Or “normal” neurotypical mother?

All the money in the world can’t make a person sane, but having money can make the fault lines in neurotypical human character painfully obvious. These “hatchet jobs” reveal a supernatural dimension in the brain created by psychological neoteny; a vision of human physical “perfection” that is the corollary of human behavioral distortion and moral rigidity. These faces are the hatred and fear of physical reality made visible. Birth, growth, aging and death are human destiny.

Self-butchery is pathological narcissism (childish self-centeredness) at the extreme. This is the horror of social reality. This is social distortion at the top of the pyramid, which “peasants” are urged to copy.



The British Psychological Society / One brain – two visual systems

I’m looking for graphics to add – this is just about incomprehensible to a “visual learner”


Left: New labeling, maybe?  Right: Old labeling (1960s) but still in use.

One brain – two visual systems / Mel Goodale and David Milner outline their research.

Terms: agnosia – inability to interpret sensations and hence to recognize things, typically as a result of brain damage; ataxia – the loss of full control of bodily movements; saccade (from the French for “jerk”) – a quick, simultaneous movement of both eyes between two points of fixation in the same direction.


Why would anyone think we have two visual systems? After all, we have only one pair of eyes – and clearly we have only one indivisible visual experience of the world. Surely it would be more sensible to assume, as most scientists throughout the history of visual science have assumed, that we have only one visual system. But of course, what seems obvious is not always correct.

The assumption of a single visual system began to be challenged in the late 1960s and early 1970s. According to one influential account (Schneider, 1967), the more ancient subcortical visual system (in particular, the superior colliculus) enables animals to localise objects whereas the newer cortical visual system allows them to identify those objects. The time was certainly right for such ideas and a number of other related schemes were put forward, coming from a variety of experimental traditions (Trevarthen, 1968; Held, 1970; Ingle, 1973). These ideas were revolutionary at the time and many investigators, including ourselves, were inspired by the notion of two distinct visual pathways, each with a different job description. Indeed, our first collaborative work together in the 1970s was an attempt to specify more precisely the role of the superior colliculus in guiding different kinds of motor behaviour in rodents (Goodale & Milner, 1982).

But the eye does not just send input to the superior colliculus and the visual cortex. In fact, messages are sent to at least 10 different target brain areas, each of which appears to be involved in the control of its own separate class of behaviours. For example, whereas the superior colliculus has been shown to be involved in guiding eye and head movements towards potentially important visual events or objects, another subcortical structure, the pretectum, plays a crucial role in guiding animals around potential obstacles as they move around their environment. Indeed, lower vertebrates, from frogs to gerbils, show good evidence for the existence of several independent and parallel visuomotor pathways (Goodale, 1983).
But what about primates?

The breakthrough arose from several studies in the monkey by Leslie Ungerleider and Mortimer Mishkin (1982), which led them to retain the earlier distinction between localisation and identification but to move it entirely into the cerebral cortex. From then on, the distinction between cortical and subcortical visual pathways that had been so much the rage during the previous decade fell out of fashion, and suddenly the phrase Ungerleider and Mishkin used to describe the division of labour between their two cortical systems – ‘what versus where’ – began to fall from every psychology student’s lips.

According to Ungerleider and Mishkin’s scheme, the ventral system, passing from primary visual cortex (V1) to the inferior temporal lobe, is concerned with object identification, while the dorsal system, passing from V1 to the posterior parietal lobe, is charged with object localisation. Thus, these two essential and complementary aspects of visual perception were allocated to separate processing ‘streams’.

Throughout the 1980s and 1990s, the existence of these two streams in the monkey brain was amply confirmed, and several new visual areas belonging to one or the other stream were discovered. Nobody now disputes the existence of the ventral and dorsal visual streams. By the early 1990s, however, we and others began to question the appropriateness of the ‘what versus where’ story in capturing the functional distinction between the two cortical streams.

In December 1990, two apparently unrelated discoveries, one made by us and one made by another research group, suddenly made it clear to us that a new formulation based on a distinction between ‘what versus how’ would do a much better job of characterising the division of labour between the ventral and dorsal streams.

We had two years earlier begun a series of neuropsychological studies of a patient with severe visual agnosia, and indeed by the end of 1990 had written two articles based on this work (Milner et al., 1991; Goodale et al., 1991). Our patient, D.F., closely resembled an earlier patient Mr S., described by Benson and Greenberg (1969) as having ‘visual form agnosia’, and who had been examined in admirable detail by Efron (1968). Mr S. was characterised by a profound problem not just in recognising objects, but more fundamentally in discriminating between even quite simple geometrical shapes, such as rectangles of different aspect ratio but identical surface area. D.F. not only shared this perceptual deficit, but also, just like Mr S., had incurred her disabling brain damage from carbon monoxide poisoning (from a faulty water heater) while taking a shower.

In our two papers, we documented not only what D.F. could not do, using a range of perceptual tasks, but also explored what she could do, which turned out to be far more interesting. Even though she was very poor at describing or demonstrating the orientation of a line or slot, she could still reach out and post a card into the same slot without error. Similarly, despite being unable to report (verbally or manually) the width of a rectangular block, she would still tailor her finger-thumb grip size perfectly in advance of picking it up. In short, she could guide her movements using visual cues of which she seemed completely unaware.

While these two papers were still in press, we came across a dramatic report published by a group of Japanese neurophysiologists led by Hideo Sakata. They were following up the classic work of Mountcastle, who had discovered several different classes of neurons in the monkey’s dorsal stream, each of which was activated when the monkey performed a particular kind of visually-guided act (Mountcastle et al., 1975). Some neurons were active when the monkey reached towards a target, others when it made a saccadic eye movement to the target, and others when it pursued a moving target. Reasonably, Ungerleider and Mishkin had interpreted these results as reflecting the ‘where’ function of the dorsal stream, since the activity of the neurons appeared to be related to where in space the targets were. But another class of neurons that Mountcastle had discovered did not fit so well with their account. These neurons, which responded whenever the monkey grasped a target object with its hand, were not concerned at all with where the object was. And perhaps worse for the ‘what versus where’ account, the work by Sakata and his colleagues showed that these grasp-related neurons did concern themselves with the shape and size of the objects the monkey was grasping. In fact, they found that there were many neurons of this kind, mostly clumped together at the front end of the dorsal stream (Taira et al., 1990). What these neurons had in common with the other neurons in the dorsal stream was not their spatial properties but rather their visuomotor ones.

Putting two and two together, we realised that D.F.’s perceptual problems could have arisen from severe damage to the ventral stream, while her spared visuomotor skills could perhaps be attributed to a functionally spared dorsal stream. Things were beginning to fall into place. We could now also begin to explain the striking deficits seen in another group of patients, those with so-called optic ataxia, which are in many ways the converse of those seen in D.F. The term optic ataxia was coined by the Hungarian neurologist Rudolph Bálint (1909) in his description of a patient with bilateral damage to the parietal lobes. This patient had difficulty pointing towards or grasping objects presented to him visually, even though he had no trouble reaching out and touching parts of his body touched by the examiner. Bálint argued that his patient suffered from a visuomotor, rather than a visuospatial, impairment: thus foreshadowing the debates that were to emerge much later in the century.

Bálint’s foresight was amply borne out by subsequent research, notably by Marc Jeannerod and his colleagues in the Lyon group (see Jeannerod, 1988, 1997). Optic ataxia patients not only have difficulty making spatially accurate reaches, but also are unable to rotate their wrist or pre-configure their hand posture when reaching out to grasp objects of different orientation or size. At the same time, many of them can still report where those targets are located relative to themselves and what they look like. In other words, these patients, in direct contrast to D.F., had presumably sustained damage to the neuronal hardware in the dorsal stream while still retaining an intact ventral stream.

We set out these ideas in two early theoretical papers (Goodale & Milner, 1992; Milner & Goodale, 1993). Marc Jeannerod and Yves Rossetti independently published closely similar ideas in 1993 (Jeannerod & Rossetti, 1993). We developed the model in much more detail in book form soon afterwards (Milner & Goodale, 1995).

Essentially, we see the ventral stream as supplying suitably abstract representations of the visual world, which can then not only serve to provide our immediate visual experience but also be stored for future reference. By this means the ventral stream enables the brain to create the mental furniture that allows us to think about the world, recognise and interpret subsequent visual inputs, and to plan our actions ‘off-line’.

In contrast, we see the dorsal stream as acting entirely in real time, guiding the programming and unfolding of our actions at the instant we make them; thus enabling the smooth and effective movements that allowed our primate ancestors to survive in a hostile and unpredictable world. In short, ours is a distinction between vision for perception and vision for action.

Recent developments

In the years since our 1995 book, the field of visual neuroscience has advanced rapidly. Part of the reason for publishing a new book 10 years later, Sight Unseen (Oxford University Press), was to give a retrospective view of our original ideas in the light of these more recent developments. At the same time, we wanted to bring our ideas to a wider audience.

In our own work, we had continued to study D.F. on and off over the intervening years, as well as patients with optic ataxia. Their complementary patterns of impairment and spared ability continued to impress us. One good example concerns the necessary role of the ventral stream in providing our visual memories – in other words its role in bridging a temporal gap, however short. We tested this by briefly presenting a visual object to the subject, which was then taken away. A few seconds later the subject was asked to pick up the object as if it were still there. Remarkably, D.F. failed completely in this task. Unlike her behaviour in the normal situation of reaching to grasp a visible object, she now showed no tailoring of her finger-thumb separation at all as she reached out and pretended to pick it up (Goodale et al., 1994). Evidently she had no working memory of the object – not because her memory wasn’t working properly, but because she hadn’t consciously perceived the dimensions of the object in the first place.

This is how my “Asperger” clumsiness occurs: objects within a foot or two of my body are a problem, like a coffee cup next to the computer. I should know where it is, but upon reaching for it, my hand goes too far and knocks it over. Or if I’m reaching for something near it, it’s as if I don’t see the cup and my hand collides with it. This doesn’t happen often, but when it does, it’s this type of location error. The odd thing is that I have really good aim when throwing an object toward a target. When I “lose” something like keys or my phone, the item is usually right under my nose. It’s as if my attention is rarely applied close to my body, but instead at a distance.

More recently, we have found that optic ataxic patients have exactly the converse problem. As expected, they show no sign of adapting their handgrip to the size of objects in ‘real time’, but when asked to perform the delayed pantomime task, they perform just like healthy subjects (Milner et al., 2001). In fact they even do this when the object is still present at the end of the delay. Evidently, once the healthy ventral stream has a chance to become involved, it tends to dominate their actions even when the patients are subsequently faced with a visible object. We verified this conjecture by secretly switching between different-sized objects during the delay on some trials – the patient’s hand opened according to the size of the previewed (remembered) object rather than the one actually present.

So if we are going to act on the basis of a visual memory, we need to use our ventral stream – the dorsal stream has no visual memory. In fact, this may be the most important job of the ventral stream; it allows us to use vision ‘off-line’, providing a bridge from the past to the present. But although the dorsal stream does not appear to have a visual memory, the visuomotor activities to which it contributes clearly do benefit from experience. In fact most of our visually guided actions are as skilled as they are precisely because of their being well-honed by practice. It seems likely that as our initially slow and painstaking efforts become more automatic, the contribution of the ventral stream retreats and is replaced by more streamlined circuitry involving the dorsal stream, related frontal cortical areas, and subcortical structures in the brainstem and cerebellum.

Seeing inside the head

The greatest advances in cognitive neuroscience over the years since we first published our ideas have come about through the development of functional MRI. This new technology has confirmed that the dorsal and ventral streams really do exist in humans – an implicit assumption that we made in our model. Functional MRI has also allowed us to test our hypothesis about what was no longer working – and what was still working well – in D.F.’s visual brain.

In collaboration with Tom James and Jody Culham at the University of Western Ontario, we started by carrying out an accurate anatomical scan of D.F.’s brain, and then used functional imaging to try to identify which visual areas were still working, and which were not. First we looked for a specific area in the ventral stream that is known to be activated when a person looks at objects, or even at line drawings or pictures of objects. To find this object-recognition area, we contrasted the pattern of brain activity that occurred when subjects looked at line drawings of real objects with the activity that occurred when they looked at scrambled versions of those same pictures. Not surprisingly, the brains of our healthy volunteers showed strong activity for the line drawings in this object-recognition area.
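The subtraction logic behind such a contrast can be sketched in a few lines. This is not the authors’ analysis pipeline – the data, region sizes, and threshold below are invented for illustration – but it shows the principle: flag voxels that respond more strongly to intact drawings than to scrambled versions of the same pictures.

```python
# Illustrative sketch (not the authors' pipeline) of a subtraction contrast:
# simulate per-voxel signal for two conditions; voxels responding more to
# intact drawings than scrambled ones mark a putative object-recognition area.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_trials = 1000, 40

# Baseline noise everywhere; the first 100 voxels stand in for a
# shape-selective region that responds extra to whole objects.
intact = rng.normal(0.0, 1.0, (n_voxels, n_trials))
scrambled = rng.normal(0.0, 1.0, (n_voxels, n_trials))
intact[:100] += 1.5  # extra response to intact drawings

# Voxelwise two-sample t-test: intact vs scrambled.
t, p = stats.ttest_ind(intact, scrambled, axis=1)
active = p < 0.001  # crude threshold, no multiple-comparison correction

print(f"{active[:100].sum()} of 100 'object area' voxels flagged")
print(f"{active[100:].sum()} of 900 control voxels flagged")
```

In a real scan the same comparison is run on measured BOLD signal rather than simulated numbers, with proper correction for testing thousands of voxels at once.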

When we looked at D.F.’s brain, however, we could see that not only was this area severely damaged, but the remaining areas in her ventral stream showed no more activity for the line drawings than they did for the scrambled versions (James et al., 2003). In other words, she had lost her shape-recognition system. Just as we had inferred from our original testing many years ago, her brain can process lines and edges at early levels of the visual system, but it cannot put these elements together to form perceived ‘wholes’, due to the damage in her ventral stream.

A completely different story emerged when we looked for brain activation in D.F.’s dorsal stream, which we had hypothesised must be working well. When we looked at activity in her brain during a scanning session in which she was asked to reach out and grasp objects placed in different orientations, we saw lots of activity in the front part of her dorsal stream, just as we did in healthy volunteers. This activity presumably reflected the normal operation of human ‘grasp’ neurons.

What you see is not always what you get

The most difficult aspect of our ideas for many people to accept has been the notion that what we are consciously seeing is not what is in direct control of our visually guided actions. The idea seems to fly in the face of common sense. After all, our actions are themselves (usually) voluntary, apparently under the direct control of the will; and the will seems intuitively to be governed by what we consciously experience. So when we claimed that a visual illusion of object size (the Ebbinghaus illusion) did not deceive the hand when people reached out to pick up objects that appeared to be larger or smaller than they really were (Aglioti et al., 1995), vision scientists around the world embarked on a series of experiments to prove that this could not possibly be true.

Of course, our model does not predict that actions are immune to all visual illusions. Some illusions, after all, arise so early in visual processing that they would be expected to affect both perception and action. Indeed, we have shown exactly that (Dyde & Milner, 2002). But our model does predict that some illusions, those that arise deep in the ventral stream, might well not affect visuomotor processing. Whenever this does happen, as it often does, it dramatically illustrates our claim that what we see is not necessarily what is in charge of our actions.

One recent example of this dissociation between perception and action is particularly striking. In collaboration with Richard Gregory, we used the powerful ‘hollow face’ illusion, in which knowledge of what faces look like impels observers to see the inside of a mask as if it were a normal protruding face (Kroliczak et al., 2006). Despite the fact that observers could not resist this compelling illusion, actions that they directed at the face were not fooled. Thus, when they were asked to flick off a small (‘bug-like’) target stuck on the face, they unhesitatingly reached out to the correct point in space (see picture). This striking dissociation between what you see and what you do provides a dramatic demonstration of the simultaneous engagement of two parallel visual systems – each constructing its own version of reality.

Where do we go from here?

Much of our work to date has focused on the differences between the two visual streams – establishing where they go, why they are there, and how they work. This side of the story has depended crucially on evidence from patients who have suffered damage to one or the other stream. But even though studying the visual deficits and spared visual abilities in these patients has told us a great deal about the systems working in isolation, it has told us nothing about how the two systems interact. The big unanswered question for the future is how the two streams work together in all aspects of our visual life.

Professor Mel Goodale is at the University of Western Ontario. Professor David Milner is at the University of Durham.


Getting Thoughtful at the End of the Year


14 degrees below zero this morning. Every molecule of moisture hangs frozen in the air and the sun is reflecting off the fog like slivers of fine glitter. Winter rituals, like trying to keep the plumbing pipes from freezing, have taken over from other, more vague anxieties. My house is a simple frame structure with a shallow crawl space, no insulation, and leaky door frames and windows. In other parts of the U.S. it might be classified as substandard housing, or even “blighted,” but here in small-town Wyoming, it’s a “real” house compared to the ubiquitous trailers and RVs that provide shelter. Actually, those two types of “dwellings” are far more modern than my house, and more comfortable.

Our one grocery store becomes the center of social activity in winter: everyone shows up looking like fat penguins regardless of size or shape; trucks and cars are left running in the parking lot, and calorie-rich foods dominate shopping carts. Quick exchanges serve to check up on how the winter is going (which means, Are you okay?) and those who say that escape to Arizona is coming, or a trip to the Caribbean or Mexico has been scheduled, are looked upon with awe and envy. We always experience bitter cold episodes, but it’s impossible to know when, because much of our winter is usually mild: sunny and without snow.

February: no snow.


If it’s a sunny day I’m fine. If the temperature rises above zero, the dogs, who get stir-crazy just like me, pester until we’re out of the house, in the truck, and headed out of town. Again, it’s a bit of a project: without preparation and ultra safety-consciousness, one could die a mile out of town. We stick to county roads where there will occasionally be other traffic and resist the overconfidence that comes with 4WD and big grabby tires. Often the countryside delivers incredible beauty, but sometimes it’s uniformly bleak.


My only contact with the “American scene” is cable TV, which I hope is not representative of the average person’s daily life. On the chance that it is, I watch in horror! It’s a source of research, in a way – how is American life being engineered by corporations, advertising and the media to generate ever more power and profit? How are social schemes being applied to regulate the behavior of peons, peasants and near-slave classes using the cruel joke of a “free” economy?

I can see why internet-delivered programming is growing (I’m headed that way) but believe it or not, many innovative technologies have not reached Wyoming: our population is too small to bother with. For this we are thankful.  It’s worth risking a broken pipe, frostbitten fingers and toes, and a skid off the road to be spared the effects of The Great American Scam. Don’t reduce stress; keep piling it on until as many people as possible are ill and depressed, and then offer “remedies” that rake in billions.

Brrrrrrrrrrrrrrr! / How the Western Colonies are Doing

Brilliant blue sky this morning, but the temp is below zero. Too cold to snow: both rain and snow storms often get deflected by the mountain ranges that surround our high-altitude basin at 6,000–7,000 feet, which is why this area is a desert. Our yearly precipitation is 6″–8″, similar to other “cold” deserts around the planet.

When my brain is on idle (not frequently) I try to imagine (realistically) living in this area pre-technology. Few people have ever made this place home, whether Native American or American settler; nomadic Shoshoni and Ute peoples used it as part of their resource base. Pre-1850, abundant game, including buffalo, elk and pronghorn, provided good hunting. Trappers “cleaned out” beaver and other fur from stream drainages, a winter industry that required almost superhuman physical endurance. And then the Mormons arrived by handcart – another almost unbelievable feat of migration. It’s a complex history fueled by the claims of European and American nations, who knew nothing about the region, and cared not at all what happened here, except as a tool for access to and control of lands to the west.

Fortunately, they all vanished, and the building of the Union Pacific Railroad in the 1860s confirmed southern Wyoming as “pass through” country – valuable only for the fossil fuels and minerals it could supply to eastern industry and its partner in crime (LOL), the federal government. Like many western states, Wyoming does not own much of its land, a source of ongoing irritation between “us” and “them.” The tug of war goes both ways: access to vast public land is a privilege that few Americans have ever experienced; this “non-ownership” also keeps the population at a minimum, which furthers exclusive enjoyment of the “great nothingness” of our desert. No Trespassing signs are rare. Personally, and perhaps selfishly, I come down on the side of “keeping people out,” because where humans invade, the land is trashed and whole species vanish, and enough of that has already occurred.


Few people live here today – about 40,000, concentrated in two towns, in a county that covers 10,000 square miles (the size of the state of Massachusetts). We are dependent on electricity, natural gas and food that is trucked in. We do pump our water from the river, which is fed by snowmelt from the Wind River Range to the north, between us and Yellowstone. This one vital resource makes life here possible.

I’m going to end my comments with this map, which makes obvious an important liberal-conservative divide in American politics, one that results in part from REAL “on the ground” differences in historical relations with the Federal-Corporate-Military Complex. Western states are not so much states as COLONIES subject to Federal control of land use and resource exploitation. This encourages what the “coastal” liberal press calls “right wing wackos” to become ever more radicalized. A quick search on the internet demonstrates the extreme comments and attitudes on both sides of the fence, and yet this serious divide gets little notice compared to Washington’s obsessions with war and foreign policy. It could be argued that attention needs to be paid to “U.S. Foreign Policy” toward the western colonies.



Dienekes’ / European Migration “story” since Carleton Coon

This was posted on Dienekes’ blog in 2009.

Once again, the human compulsion to invent “stories” gets in the way of factual analysis; how much progress in scientific understanding is delayed by a lack of objectivity and a refusal to abandon novel-writing in the face of technical advances? I still say that the “story-telling” side of archaeology, anthropology, and now genetics, ought to be reclassified as ART.

Migrationism strikes back

In 1939, Carleton Coon wrote The Races of Europe. In it, he used the “skulls and pots as migrations” paradigm of his time to infer a number of Neolithic and post-Neolithic migrations into Europe. A map from the chapter on the Neolithic Invasions captures his conception of prehistory well:
This map was drawn before carbon dating had been invented. We now know much more about both the anthropology and archaeology of Europe. But the main thrust of Coon’s prehistorical narrative can be summarized as arrows on a map – or, prehistory as a series of invasions. The closing paragraph from the Neolithic Invasions chapter sums up this view admirably:

Five invasions, then, converging on Europe from the south and east, brought a new population to Europe during the third millennium B.C., and furnished the racial material from which living European populations are to a large extent descended.

Today, carbon dating has pushed the arrival of the Neolithic to Europe into the 7th millennium BC, but, disregarding that detail, we can see that Coon thought that modern Europeans are primarily descended from Neolithic and post-Neolithic populations: farmers, seafarers and pastoralists from the south and east.

He did think that the Upper Paleolithic population had not disappeared completely, but the name he often used to describe them was survivors, which denoted quite clearly their limited contribution to the present-day population.

Acculturation & Demic Diffusion
After WWII, the arrows on a map paradigm was no longer in fashion. The transition from the old to the new prehistory did not happen overnight, but two new intellectual fashions gained ground: acculturation and demic diffusion.
The proponents of acculturation were motivated by a reaction to the pots and skulls paradigm. To the idea that the spread of a new pottery type, or a new type of skull morphology, indicated the spread of a people across the map, they countered (i) that pottery could be exchanged, copied, and traded without the movement of people, and (ii) that conclusions based on typological old-style anthropology were unsupportable, affirming instead the limitless malleability of the human skull.
In some respects, the acculturation hypothesis represented a valid response to the excesses of the pots and skulls tradition. But, they went a bit too far in presenting a picture of complete stasis, in which European people, seemingly fixed to the ground, participated only in “networks of exchange”, only ideas and goods flowed, and all differences in physical type across long time spans were ascribed invariably to responses (genetic or plastic) to new technologies, but almost never to the introduction of a new population element.
Demic diffusion is not as extreme as the pure acculturation hypothesis, but it replaces the model of invasions and migrations represented by arrows with a purposeless random walk. Demic diffusion has been argued on both archaeological and genetic grounds.

When Cavalli-Sforza and colleagues collected genetic data on modern Europeans and subjected them to principal components analysis, made possible by modern computers, they discovered that the first principal component of genetic variation was oriented on a southeast-northwest axis.
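The kind of analysis described here can be illustrated with a toy example. Everything below is invented (population positions, locus counts, cline strength), but it shows the principle: principal components analysis of allele frequencies across sampled populations, with the leading component recovering a geographic cline.

```python
# Toy sketch (invented numbers) of PCA on population allele frequencies,
# in the spirit of Cavalli-Sforza's synthetic maps.
import numpy as np

rng = np.random.default_rng(1)
n_pops, n_loci = 30, 50

# Each population sits along a SE->NW axis (0 = SE, 1 = NW); a subset
# of loci carry a frequency gradient along that axis.
position = np.linspace(0.0, 1.0, n_pops)
freqs = rng.uniform(0.1, 0.5, (n_pops, n_loci))
freqs[:, :20] += 0.5 * position[:, None]  # clinal loci

# Classic PCA: center, then take eigenvectors of the covariance matrix.
centered = freqs - freqs.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
pc1 = centered @ eigvecs[:, -1]  # scores on the leading component

r = np.corrcoef(pc1, position)[0, 1]
print(f"|correlation of PC1 with SE-NW position| = {abs(r):.2f}")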
At roughly the same time, the widespread dating of Neolithic sites across Europe proved that there was a fairly regular advent of farming, with the earlier sites found in Greece, and the latest ones in the Atlantic fringe and northern Europe.
Demic diffusion was summoned to explain these phenomena. Neolithic farmers, the story goes, did not particularly want to colonize Europe. Europe was colonized as a side-effect of a random process in which farmers moved away from their parents’ homes, while their population numbers grew due to the increased productivity of the farming economy.
The process was not seen as one of population replacement, however. Rather, it was seen as a slow movement of a wave of advance, in which farmers mixed with hunter-gatherers, and some of them moved on to populate new lands beyond the farmer-hunter frontier. The model predicted that the technology would spread without large-scale population replacement, as the hunters’ genes would make a substantial contribution to farmers’ gene pools at the furthest end of their expansion.
The Paleolithic Europeans make a comeback
Bryan Sykes’ The Seven Daughters of Eve was a popular treatment of a new wave of acculturation-minded scholarship whose more formal expression was the masterful Tracing European Founder Lineages in the Near Eastern mtDNA Pool by Martin Richards et al.
Cavalli-Sforza and his colleagues had looked at dozens of polymorphisms, but their synthetic PC maps of Europe didn’t come with dates or easy explanations. The observed clines in Europe may have been due to Paleolithic, Neolithic, or even recent historical events. While they were consistent with the Neolithic demic diffusion hypothesis, the possibility existed that they may have been formed either earlier or later than the Neolithic.
The new approach by Sykes, Richards, and their colleagues looked at just mtDNA, but because it is inherited from mother to daughter without recombination, they could (i) estimate the age of the common ancestors of the “European mothers”, and (ii) study the patterns of geographical distribution of their descendants to infer when and where they may have lived. Hence the various stories about Katrine, Ursula, Helena, etc. in Sykes’s book.
The conclusions of the new methodology were clear (at least to the authors’ satisfaction):

This robustness to differing criteria for the exclusion of back-migration and recurrent mutation suggests that the Neolithic contribution to the extant mtDNA pool is probably on the order of 10%–20% overall. Our regional analyses support this, with values of 20% for southeastern, central, northwestern, and northeastern Europe. The principal clusters involved seem to have been most of J, T1, and U3, with a possible H component. This would suggest that the early-Neolithic LBK expansions through central Europe did indeed include a substantial demic component, as has been proposed both by archaeologists and by geneticists (Ammerman and Cavalli-Sforza 1984; Sokal et al. 1991). Incoming lineages, at least on the maternal side, were nevertheless in the minority, in comparison with indigenous Mesolithic lineages whose bearers adopted the new way of life.

The picture of continuity since the Paleolithic was further supported in the much briefer article by Semino et al. on The genetic legacy of Paleolithic Homo sapiens sapiens in extant Europeans: a Y chromosome perspective. This study, based mostly on the observation of rough congruences between the map of Europe and some Y-chromosome markers, set the stage for most Y-chromosome work in Europe for the next decade.

In today’s terminology, this paper suggested that, like mtDNA, most European Y-chromosomes were Paleolithic in origin, and belonged in haplogroups R1b, R1a, and I which repopulated Europe from refugia in Iberia, the Ukraine, and the Balkans, after the last glaciation. To this set were added Neolithic immigrants from the Middle East bearing haplogroups J, G, and E1b1b, and Northern Asian immigrants from the east bearing haplogroup N1c.
Unfortunately, we do not have Y-chromosome data of Paleolithic age to determine the veracity of this scenario. Given present-day distributions, we can be fairly certain of a European origin (but when?) of haplogroup I, of a non-European origin of haplogroup E1b1b (via North Africa or the Middle East), and of N1c. A non-European origin of the entire haplogroups J and G in West Asia also seems quite probable.
The house of cards collapses
The beauty of science is that new data can always falsify cozy and plausible scientific theories. In the case of European prehistory, this occurred due to a combination of craniometric, archaeological, and mtDNA data.
Pinhasi and von Cramon-Taubadel (2009) examined skulls from the early Central European Neolithic (Linearbandkeramik) and found them to be closer to Neolithic skulls from the Balkans and West Asia than to the pre-farming Mesolithic populations.

Our results demonstrate that the craniometric data fit a model of continuous dispersal of people (and their genes) from Southwest Asia to Europe significantly better than a null model of cultural diffusion.

The authors correctly identified their data as rejecting cultural diffusion, but their conclusion that they supported demic diffusion was not warranted as there was really no evidence that Neolithic groups were “transformed” by gradual slow admixture with hunter-gatherers in their march into Europe. Their data could just as easily be explained by plain migration.
Archaeologists also made a strong case for a rapid diffusion of the Neolithic in the Mediterranean. Neolithic settlements appeared suddenly, fully-formed, occupied regions abandoned by Mesolithic peoples, and spread not slowly, in a wave of advance, but rapidly, as a full-fledged colonization:

Thus it appears that none of the earlier models for Neolithic emergence in the Mediterranean accurately or adequately frame the transition. Clearly there was a movement of people westward out of the Near East all of the way to the Atlantic shores of the Iberian Peninsula. But this demic expansion did not follow the slow and steady, all encompassing pace of expansion predicted by the wave and advance model. Instead the rate of dispersal varied, with Neolithic colonists taking 2,000 years to move from Cyprus to the Aegean, another 500 to reach Italy, and then only 500–600 years to travel the much greater distance from Italy to the Atlantic (52).

In a different study Vanmontfort et al. studied the geographical distribution of farmers and hunter-gatherers during first contact in Central Europe. This contact did not involve either adoption of farming by hunter-gatherers (as in the acculturation hypothesis), or admixture with hunter-gatherers (as in the demic diffusion/wave of advance model). Rather, agriculturalists and hunter-gatherers tended to avoid each other for 1,000 years after first contact!

To conclude, the following model can be put forward. During the 6th Millennium cal BC, major parts of the loess region are exploited by a low density of hunter–gatherers. The LBK communities settle at arrival in locations fitting their preferred physical characteristics, but void of hunter–gatherer activity. Evidently, multiple processes and contact situations may have occurred simultaneously, but in general the arrival of the LBK did not attract hunter–gatherer hunting activity. Their presence rather restrained native activity to regions located farther away from the newly constructed settlements or triggered fundamental changes in the socio-economic organisation and activity of local hunter–gatherers. Evidence for the subsequent step in the transition dates to approximately one millennium later (Crombé and Vanmontfort, 2007; Vanmontfort, 2007).

The “Paleolithic” case won a short-lived victory when Haak et al. tested mtDNA from early Central European farmers, discovering that they had a high frequency of haplogroup N1a, which is rare in modern Europeans. This finding was interpreted as evidence that the incoming Neolithic farmers were few in number and were absorbed with barely a trace by the surrounding Mesolithic populations who adopted agriculture. Acculturation seemed to have won the day! The case was, however, tentative, and hinged on the assumption that the Paleolithic Europeans – who had not been tested yet – would have a gene pool similar to that of modern Europeans.

When hunter-gatherer mtDNA was tested in both Scandinavia (by Malmström et al.) and Central/Eastern Europe (by Bramanti et al.), it turned out that continuity from the Paleolithic was rejected. Hunter-gatherers were dominated by mtDNA haplogroup U, and subgroups U4/U5 in particular. None of the other lineages postulated by Sykes et al. as being “Paleolithic” in origin were found in them. Moreover, there was substantial temporal overlap between hunter-gatherer and farmer cultures, but farmers seemed to lack mtDNA typical of hunter-gatherers and vice versa. Confirming the archaeological picture of the two groups avoiding each other, it now seemed that there was little genetic contact between the two, at least early on. The Neolithic was spread by newcomers; there was no acculturation of Mesolithic people; there was no slow process of admixture between farmer and hunter along a wave of advance.
The gap between contemporaneous farmer and hunter mtDNA gene pools was as large as that found between modern Europeans and native Australians! The whole controversy about the relative contributions of the Neolithic and Paleolithic in the modern European gene pool was found to be beside the point. The modern European gene pool did not seem to be particularly similar to either Paleolithic hunter or Neolithic farmer: it possessed haplogroups completely absent in pre-Neolithic Europe. And, it did not have a high frequency of the N1a “signature” haplogroup of the Neolithic. Selection, migration, or a combination of both had reshaped the European gene pool from the Neolithic onwards.
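A gap like this can be stated in simple statistical terms. The counts below are invented, chosen only to echo the qualitative pattern reported (hunters dominated by U4/U5, early farmers carrying N1a and other lineages, little overlap); a chi-square test then asks whether the two samples could plausibly be draws from a single gene pool.

```python
# Hypothetical haplogroup counts (NOT real data) illustrating how two
# mtDNA pools are compared with a chi-square test of homogeneity.
import numpy as np
from scipy import stats

haplogroups = ["U4/U5", "N1a", "H", "J", "T", "other"]
hunters = np.array([18, 0, 1, 0, 0, 3])   # dominated by U4/U5
farmers = np.array([1, 6, 8, 5, 3, 2])    # N1a and Near Eastern lineages

chi2, p, dof, expected = stats.chi2_contingency(np.vstack([hunters, farmers]))
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```

A tiny p-value, as these made-up counts produce, is the formal version of the claim that the two groups possessed distinctive gene pools.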
Where things stand
We have come full circle. Once again, Paleolithic Europeans assume the status of survivors, as their typical lineages are observed in a small minority of modern Europeans. The evidence for widespread acculturation of European hunter-gatherers or their significant genetic contribution to incoming farmers along a wave of advance is just not there. Hunters and farmers possessed distinctive gene pools, and farmers expanded with barely a trace of absorption of hunter gene pools.
Clearly many details remain to be filled out. What does seem certain, however, is that dramatic events took place starting at the Neolithic, and that modern Europeans trace their ancestry principally to Neolithic and post-Neolithic migrants, and not to the post-glacial foragers who inhabited the continent.