Athletic Coach / What about teaching and learning styles?

Successful athletic coaches are excellent teachers. Why? They are results-based people.

Years ago, when I worked as a substitute teacher in high schools, I quickly learned to ask the district to call me whenever a coach needed a sub. Their classrooms were organized for efficient and calm learning. The students were well-behaved, loved their teacher, and were ready to participate. Essential teaching materials were available: you’d be surprised how many teachers are utter clutter-bugs, unprepared, and couldn’t care less about what goes on in a classroom.

I’m “looking into” alternative areas of teaching and learning, outside the rigid psychology-based nightmare of public non-education. Note how this martial arts instructor focuses on HIS STUDENTS: paying close attention to their needs and individual receptivity to learning styles.

This results-based approach is common sense, but it does require being flexible, paying attention to detail, and taking a sincere, active interest in who the students are as individuals.

AND knowing your subject thoroughly… “If you can’t explain it to a six-year-old, you don’t understand it yourself.”

_____________________________________________________________________________________

Visual, Auditory and Kinesthetic Learning Styles in Grappling

by Charles Smith, whitebelt.org

Overview 

People learn in many different ways and no two people learn in exactly the same way. As a coach you can help your players train more efficiently if you teach in a way that takes into account the various differences in their learning styles.

In this article I cover three basic styles: visual, auditory, and kinesthetic.

Visual learners want to see how something is done. Auditory learners prefer to hear explanations and like to talk their way through things. Kinesthetically oriented people want to get lots of hands-on experience so they can feel how something is done. I’ve covered each of these sensory learning styles in their own article, linked at the bottom of this page.

As you read the articles keep in mind that everyone uses a mix of learning styles. Some people have one dominant style, and use the others only as supplements, while other people use different styles in different circumstances. There is no right mix. People’s learning styles are also quite flexible. Everyone can develop ability in their less dominant styles, as well as increase their skill with styles they already use well.

Note to Coaches:
The key for you as a coach is to present information in a multi-layered mixture of styles. Don’t get stuck teaching in just one mode. Make sure you’re doing all you can for each style and pay particular attention to how you can blend the styles together. Finally, and perhaps most importantly, you should help your students discover their own learning styles and how to make the most of them.

Visual Learning Style

First we’ll look at the visual learning style and how best to teach people who use it.

The visual style of learning is one of the three sensory learning styles along with auditory and kinesthetic. Like the other two, visual learning relates to the fundamental ways in which people take in information. As you can guess, visual learners learn predominantly with their eyes. They prefer to see how to do things rather than just talk about them. It’s the old monkey see, monkey do kind of thing. (So be careful what you do in front of your children!) Since about 60% of people are visual learners, you can count on working with them in every class you teach.

Visual learners prefer to watch demonstrations and will often get a lot out of videotaped instruction as well. You can sometimes tell you’re dealing with a visual learner when they ask, “Can I see that again?” Other types of learners would ask if you could do it again, or explain it again, but visual learners will often say they want to see it. It’s just a little sign that the person you’re coaching may be a visual learner.

There are two important guidelines to follow in coaching for visual learners. The first is to make sure you are showing the movements as completely and clearly as possible. If you’re demonstrating a technique and part of the movement is hidden from view, you’ll want to find a way to rearrange things. You may have to get pretty creative, but the main thing is to position yourself so that everything you’re doing is available for viewing.

You also don’t want to rush or cut corners during a demonstration. Players need to see exactly how things should look from beginning to end. Coaches will frequently cover the key part of a technique with precision, but then get sloppy with the rest. Remember, monkey see, monkey do. Visual learners are going to do what they see you doing. They’ll subconsciously pick up on the sloppy movements and begin copying them – often even if you tell them not to.

Those are the two main guidelines for visual coaching: Show everything clearly and show everything exactly as you want it to be done.

Based on those ideas, here are a few things you can do, and not do, to improve your coaching for visual learners.

  • Always take the time to show the technique from a number of different angles and encourage your students to move around and find the best viewing angles.
  • Do not force your students to stay in fixed lines while you demonstrate. This always results in some people blocking the view of others.
  • Give your demonstrations toward the middle of the floor, not near a wall. That way people can get all the way around you.
  • Every now and then throw out a banana. Monkeys like bananas.

Auditory Learning Style

Auditory learners pick up new ideas and concepts better when they hear the information. In this article we’ll look at the auditory learning style and how best to present information to people who favor it.

Recognizing the Auditory Style

Auditory people can often follow directions very precisely after being told only once or twice what to do. Some auditory learners concentrate better when they have music or white noise in the background, or retain new information better when they talk it out.

Since hearing and speaking are so closely related, you’ll often find auditory learners using their voice as well as their ears. They’ll often repeat what you’ve said right back to you. (Of course, psychologists label this natural auditory learning behavior as “pathological echolalia” in ASD Asperger children.) It helps them process the information. They may also remember complex sets of information by putting them to song or rhythm. Singers, for example, are usually skilled auditory learners. That’s why they can memorize a song after hearing it just a few times. Auditory people may also ask, “Could you explain that again?” Other types of learners would ask you to do it again, or show it again, but auditory learners want to hear it.

Once you start watching for the signs you’ll see just how many people prefer the auditory style. I believe the experts say that about 30% of Americans are auditory learners. That makes it a good bet you’ll be working with them in any decent-sized class.

Organization Techniques

As with the other styles of learning it’s best to let people arrange themselves around you for instruction. Don’t force your students to stay in fixed lines while you demonstrate. Lines always result in some people not being able to hear as well as others – or feeling that they’ve been pushed to the back and can’t ask questions.

I’d suggest giving your demonstrations toward the middle of the floor and not near a wall (as in the typical “lecture style” American classroom). That way people can get all the way around you to find the best place to listen from. You may have to encourage people to move around you since so many of us are conditioned to being in neat little lines.

Likewise, it’s also a good idea to let people ask questions as soon as they have them. Requiring people to raise their hands or otherwise wait for permission to speak usually squanders the moment when a student is really hot to learn. You’ll just end up backtracking to answer the question anyway, so let people speak up when they want to and rely on informal means to keep things under control.

Expository Techniques

Auditory learners will try to do what you say – exactly what you say. You need to speak clearly and completely or they’re going to head off in the wrong direction for sure. Assuming you’ve got decent speaking skills, the thing to pay most attention to is giving a detailed verbal description of what you’re doing. In other words, you’ve got to put everything into words.

Saying “do it like this” is not enough. It’s talking, sure, but it’s not saying anything. “Do it like this” means: ignore what I’m saying and watch instead. Instead of saying “put your hand here,” say “put your hand on the inside of the knee.” Instead of saying “push hard,” say “push hard enough to pin their leg down.” Instead of saying “move over here,” say “move over next to the far leg.” See the pattern? Avoid saying things that assume the player can see what you’re talking about.

Questioning Techniques

Getting verbal helps a lot of auditory learners. When they can both hear something and then say it out loud for themselves it helps them process the information. Most auditory learners like to ask questions too, if given the chance. You can get things started, and give everyone confidence that you like questions, by asking some questions of your own.

I would caution one thing though. Don’t make people feel like they’re being tested by putting them on the spot. Address your question to the group as a whole and don’t slight anyone who answers incorrectly.

One of my favorite ways to tell someone they’ve got it wrong is to use a melodramatic voice and body language to say:

Good answer! Good answer!

Then pause a moment and say:

It’s not the right answer, but it’s a good answer!

Good answer.

If you ham it up people get the idea that the answer is wrong but there’s no reason to be embarrassed.

Echoing

Verbal interaction is probably one of the weakest areas most coaches have. Perhaps it’s because most of us grew up being told to keep quiet in school. Now that we’re the teacher we subconsciously induce our students to do the same. Bad, bad us.

If you’re really having trouble with asking questions, one of the simplest ways to start is a technique called echoing. It works like this:

Coach states: “Grab the near collar.”

Coach immediately asks: “What do you grab?”

Athletes echo: “The near collar.”

Coach echoes: “The near collar.”

Echoing is crude, but it works to get people’s jaws moving and that’s a start. Keep it lighthearted and try it for a few months. (No, it doesn’t work overnight.) After everyone’s mouth is used to moving, start branching out into real questions.

Like I said, echoing is crude, but it’s a start.

By the way, echoing can also be used as a motivational technique. People have to pay more attention to what you’re saying if they know they have to echo what you say.

Meta-Learning Techniques

Finally, and perhaps most importantly, you should help your students become aware of their own auditory style and give them suggestions for putting it to use. What I call rapping is a simple way to start.

Rapping

Rapping is a simple procedure auditory learners can use to help themselves learn a new technique. Using short phrases, students quietly talk their way through the new movements they’re learning. Each step has its own little key-word description that acts to jog the memory. The player should be able to put together the key words for themselves from the description given by the coach. Once the student starts to get the movements down, they can say the words in rhythm to help smooth out their timing and pace.

Coaches can encourage rapping by asking students if they’ve got the rap down and “let’s hear it.” And hey, maybe you can beat-box for ‘em too! Or not.

Close

Now that you’ve got a grasp of the auditory learning style I think you’ll find you can more precisely target your coaching for a number of your students.

If you haven’t already, I’d recommend taking a look at the other two sensory learning styles, visual and kinesthetic, to round out your knowledge.

Kinesthetic Learning Style

About 10% of the general population are kinesthetic learners. They prefer to learn by getting their body into action and moving around. They are “hands-on” types who prefer doing to talking. (Many ASD Asperger children, despite being labeled clumsy, “do” kinesthetic learning. It’s all that “hands on, let me do it myself, my way” behavior: handling objects, studying them, using them, arranging them – Oh no! That’s defective: punish that child.) In this article we’ll look at the kinesthetic learning style and how best to present information to people who favor it.

Recognizing the Kinesthetic Style

While only about 10% of the general population are kinesthetic learners, it’s a good bet a lot more people in a grappling class are. Only people who enjoy lots of hands-on work tend to keep coming back to something so physical.

As a coach you can count on all of your players to engage in kinesthetic learning. They may not be kinesthetic-oriented by nature, but grappling will eventually shape them into skilled kinesthetic learners. (So let’s get rid of outdoor recess and PE classes, and punish kids when they can’t remain frozen like statues for hours and hours.)

Let me point out a few indicators of the kinesthetic style.

When you’re giving a demonstration, the people who always ask you to demonstrate on them so they can feel the technique are very likely kinesthetic learners (and masochists). (Successful athletes do tolerate an extreme amount of pain, injury, discomfort and failure in order to fulfill goals. The corollary perils of intellectual success are ignored.)

You’ll also see the kinesthetic types following along as you demonstrate – moving their arms and legs in imitation of what you’re doing. Moving is so fundamental to kinesthetic learners that they often just fidget if nothing else. It helps them concentrate better.

Organization Techniques

If you talk for more than ten minutes during a technical demonstration you’ve gone way too long. Kinesthetic learners need to get to the action as soon as possible. Even visual and auditory learners can’t keep track of 10 minutes’ worth of non-stop details. Three minutes is my rule. If I can’t demonstrate something in under three minutes I usually break it down into smaller chunks. Say what you need to say, don’t say anything else and then get to work.

This is a very important point that relates not just to kinesthetic learners but to everyone in general. It has to do with the relationship between short-term memory and learning. Check out the article entitled Chunking to find out more.

Meta-Learning Techniques

One of the most important things you can do regarding learning styles is help your students become aware of their own preferences. Be sure to talk to your students about kinesthetics.

Kinesthetics simply refers to an awareness of changes in pressure, momentum, balance and body position in general. It’s all about feeling what you’re doing as you do it. Kinesthetic learning is not particularly difficult to understand but because so many people regard learning as something you do by reading books or listening to lectures, they often haven’t given a great deal of thought to physical movement as a means of study. (Could it be that many ASD, Asperger and ADHD children are acutely aware of “changes in pressure, momentum, balance and body position” and other sensory information, but need to utilize kinesthetic learning as an asset, instead of being labeled “defective” and FORCED to mimic “socially acceptable behavior” as a “solution” to social psychology’s conformist agenda?)

For some people, taking a grappling class may actually be the very first time they become consciously aware of kinesthetics, so make sure all of your students know what it is and that they will need to make extensive use of kinesthetic learning methods to succeed. Even predominantly visual and auditory learners need to make use of all the kinesthetic techniques they can. (How radical!)

Teaching Technique

Essentially, kinesthetic learners need to feel the particular details of what’s happening during a technique. As a coach you want to give your player a very tactile sense of what to do. Provide them with precisely targeted physical contact by setting up situations where the player feels one thing if they move correctly and something else if they move poorly.

For example, if a player is incorrectly leaving his arm out where it might get pulled into an arm bar, have the player tuck in his arm and point out that he should feel his elbow tight up against his own ribs. Then emphasize the way the position feels by pulling on his arm so he is forced to engage his muscles. Tell him to pay attention to his own muscles working away inside his body. (Psycho-social teaching labels “the body” as a dangerous object that must be “controlled” like an enemy, as opposed to being the essential vehicle for “being” in the world.)

Once he’s got a feel for the proper position, do some repetitions. As the player works on his technique, stop and check the arm to make sure it’s in tight. Tug on it a few times to reinforce the correct feeling and then continue on. After several reps, stop checking the arm but keep an eye on it as the player keeps going. If that arm goes slack again, slap the piss out of the guy (get his attention) and repeat the whole arm-pulling exercise. After a few training sessions the player should be keeping his arm in on his own. (Note that the teacher must pay attention to the student! Not just dump poorly delivered information into the “air” and assume it’s the child’s fault for “not getting it.”)

Finding a way to physically check a player’s body position is the key. Push, pull, lift, press, whatever – do something that the player must physically react to – then get them to pay attention to the kinesthetic sensations.

Close

Understanding kinesthetic learning is an absolute necessity for grappling coaches. I typically base my own coaching style on the requirements for good kinesthetic learning and then supplement it with the other two sensory learning styles: visual and auditory.

Thanks to Charles Smith and whitebelt.org for allowing us to post this article on Grapplearts.

______________________________________________________________________________________________

See also: Posts on Hunter Gatherers

https://aspergerhuman.wordpress.com/2017/11/10/jared-diamond-hunter-gatherer-parenting/

https://aspergerhuman.wordpress.com/2016/02/08/more-on-hunter-gatherer-child-education/


Simple Breakdown / How the Brain Processes Information

https://www.labs.hpe.com/next-next/brain

In 2008, the U.S. Defense Advanced Research Projects Agency issued a challenge to researchers: Create a sophisticated, shoebox-size system that incorporates billions of transistors, weighs about three pounds, and requires a fraction of the energy needed by current computers. Basically, a brain in a box.

Although neuroscience has made important strides in recent years, the inner workings of the brain are still largely a mystery. “So little is really understood about the hardware of the brain—the neurons and their interconnections, and the algorithms that run on top of them—that today, anyone who claims to have built ‘a brain-like computer’ is laughable,” says Stan Williams, a research fellow at Hewlett Packard Labs.

“Programs mirror human logic, but they don’t mirror intuitive thought.”

Rich Friedrich, Hewlett Packard Labs

A caveat from HP Labs (super website) regarding the analogy that the human brain is like a computer processor.

________________________________

We have to start somewhere!

eLearning Design and Development

By Christopher Pappas, November 11, 2016

The brain is often likened to a processor. A complex computing machine that takes raw data and turns it into thoughts, memories, and cognitions. However, it has its limits, and Instructional Designers must know the boundaries before they can create meaningful eLearning courses. In this article, I’ll explore how the brain works, from its basic biological and memory functions to its ability to process information. I’ll also share 3 tips to help you create an eLearning course design that facilitates knowledge absorption and assimilation.

Information Processing Basics: A Guide For Instructional Designers

The brain is a wondrous thing. It transforms letters, numbers, and images into meaningful data that governs every aspect of our lives. Neural pathways spark and new ideas meet with the old to form complex schematic structures. But one of the most miraculous tasks it tackles is learning. As eLearning professionals, we must understand how information processing takes place in order to create effective eLearning experiences.

Brain Biology / The brain consists of many different structures, and the cortex encases all of them. The cortex is the outermost shell of the brain that takes care of complex thinking abilities: for example, memory, language, spatial awareness, and even personality traits. The inner regions of the brain control the most primitive aspects of human nature, such as our base impulses, fears, emotions, and our subconscious. The brain also houses a “subcortex,” which connects directly to the cortex. As such, it’s able to transmit and process information. (A cliché description of the “primitive, subconscious.”)

The Human Memory

Now that we’ve briefly explored the physical makeup of the brain, let’s delve into one of its most vital functions: memory. After all, memory is crucial for eLearning. If online learners aren’t able to remember the information, then all is for naught. We usually don’t give memory much attention, as it’s an automatic process. Every event, no matter how small, passes through the gates of our memory without us even noticing. However, most of the occurrences are just passing through and never take up permanent residence. There are three types of memory that Instructional Designers should be aware of:

1. Sensory Memory 

When our senses are triggered by a stimulus, our brains briefly store the information. For example, we smell freshly baked bread and can only remember its scent for a few seconds before it vanishes. Even though the bread is no longer in front of us, our minds still hold onto its impression for a short period. The brain then has the option to process it through the memory banks or forget about it. In eLearning, sensory memory is triggered by a visually compelling image, background music, or any other element that utilizes the senses.

2. Short-Term Memory

Short-term memory falls under the purview of working memory; it temporarily stores information when triggered by stimuli. It can only hold a maximum of about 7 items at one time. It also has a time limit, which is usually between 10 seconds and a minute. (A rough sketch of these limits follows the three memory types below.)

3. Long-Term Memory

After passing through the short-term memory, relevant information is moved to long-term storage. At this stage, the brain is less likely to forget important details. However, even the long-term memory can diminish over time if we don’t refresh our knowledge.
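To make those capacity and duration limits concrete, here is a minimal toy model in Python. It is purely illustrative and not from Pappas’s article; the class name and the default values (7 items, a 30-second lifetime) are assumptions taken from the figures quoted above.

import time
from collections import OrderedDict

class ShortTermMemory:
    """Toy model of short-term memory: at most `capacity` items are held,
    and each item is forgotten after `ttl` seconds (hypothetical defaults
    based on the '7 items, 10 seconds to a minute' figures above)."""

    def __init__(self, capacity=7, ttl=30.0):
        self.capacity = capacity
        self.ttl = ttl
        self._items = OrderedDict()  # item -> time it was last stored

    def _expire(self):
        now = time.time()
        self._items = OrderedDict(
            (item, t) for item, t in self._items.items() if now - t < self.ttl
        )

    def store(self, item):
        self._expire()
        if item in self._items:
            self._items.move_to_end(item)    # rehearsal refreshes the trace
        self._items[item] = time.time()
        while len(self._items) > self.capacity:
            self._items.popitem(last=False)  # the oldest item is displaced

    def recall(self):
        self._expire()
        return list(self._items)

stm = ShortTermMemory()
for digit in "0123456789":   # ten items: more than the buffer can hold
    stm.store(digit)
print(stm.recall())          # only the most recent seven digits survive

Run it and the first three digits are already gone; wait longer than the lifetime and recall() comes back empty, which is exactly why relevant information has to be moved into long-term storage.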

Information Processing Stages

There are a number of Information Processing theories and models. However, many suggest that the learning process involves three key stages:

Stage 1: Input / The brain is exposed to a stimulus, at which point it analyzes and evaluates the information. For example, the online learner reads a passage and determines whether it’s worth remembering.

Stage 2: Storage / The brain stores the information for later use, adds it to our mental schema, and encodes it. If the information is not reinforced, the brain may simply forget it over time.

Stage 3: Output / The brain decides what it’s going to do with the information and how it will react to the stimulus. For example, after reading the passage, the individual uses the information they learned to overcome a challenge.
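To tie the three stages together, here is a short, hypothetical Python walk-through. The function names and the example sentence are mine, not Pappas’s; it is only a sketch of the input, storage, and output flow described above.

# Toy walk-through of the three stages above (illustrative only).

def input_stage(stimulus, is_relevant):
    """Stage 1: analyze the stimulus and decide whether it is worth remembering."""
    return stimulus if is_relevant(stimulus) else None

def storage_stage(schema, fact, reinforced=True):
    """Stage 2: encode the fact into the learner's mental schema.
    Facts that are never reinforced are dropped, mimicking forgetting."""
    if fact is not None and reinforced:
        schema.add(fact)
    return schema

def output_stage(schema, challenge):
    """Stage 3: react to a new challenge using whatever was stored."""
    return "solved" if challenge in schema else "stuck"

schema = set()
fact = input_stage("pin the near leg before you pass", lambda s: "pin" in s)
schema = storage_stage(schema, fact)
print(output_stage(schema, "pin the near leg before you pass"))  # -> solved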

Simple! The question is, how do specific human brains handle these processing tasks? Psychologists would have us believe that there is only ONE way this ought to be accomplished: their way. Bullshit.

 

Philosophy of Childhood / Stanford

I’m presenting this as a review of where many of our ideas about children, childhood, and “who has rights and who doesn’t” originate: in human thought and ideas (brains); that is, in consequence of poor reasoning, prejudice, personal bias, and thoughtful consideration; by means of accurate and faulty observation, careless assumptions, and even (rarely) clever insight; and not in universal law, in a pre-existing supernatural realm, or in a realm of magical authority.

What we see again is the lack of coherence between modern Western social-psychological-cultural theory and biological reality.

https://plato.stanford.edu/entries/childhood/

From: Stanford Encyclopedia of Philosophy

The Philosophy of Childhood

The philosophy of childhood has recently come to be recognized as an area of inquiry analogous to the philosophy of science, the philosophy of history, the philosophy of religion, and the many other “philosophy of” subjects that are already considered legitimate areas of philosophical study. In addition, philosophical study of related topics (such as parental rights, duties and responsibilities) has flourished in recent years. The philosophy of childhood takes up philosophically interesting questions about childhood, changing conceptions over time about childhood and attitudes toward children; theories of cognitive and moral development; children’s interests and children’s rights, the goods of childhood; children and autonomy; the moral status of children and the place of children in society. As an academic subject, the philosophy of childhood has sometimes been included within the philosophy of education (e.g., Siegel, 2009). Recently, however, philosophers have begun to offer college and university courses specifically in the philosophy of childhood. And philosophical literature on childhood, parenting and families is increasing in both quantity and quality.

 1. What is a Child?

Almost single-handedly, Philippe Ariès, in his influential book, Centuries of Childhood (Ariès, 1962), made the reading public aware that conceptions of childhood have varied across the centuries. The very notion of a child, we now realize, is both historically and culturally conditioned. But exactly how the conception of childhood has changed historically and how conceptions differ across cultures is a matter of scholarly controversy and philosophical interest (see Kennedy, 2006). Thus Ariès argued, partly on the evidence of depictions of infants in medieval art, that the medievals thought of children as simply “little adults.” Shulamith Shahar (1990), by contrast, finds evidence that some medieval thinkers understood childhood to be divided into fairly well-defined stages. And, whereas Piaget claims that his subjects, Swiss children in the first half of the 20th Century, were animistic in their thinking (Piaget, 1929), Margaret Mead (1967) presents evidence that Pacific island children were not.

One reason for being skeptical about any claim of radical discontinuity—at least in Western conceptions of childhood—arises from the fact that, even today, the dominant view of children embodies what we might call a broadly “Aristotelian conception” of childhood. According to Aristotle, there are four sorts of causality, one of which is Final causality and another is Formal Causality. Aristotle thinks of the Final Cause of a living organism as the function that organism normally performs when it reaches maturity. He thinks of the Formal Cause of the organism as the form or structure it normally has in maturity, where that form or structure is thought to enable the organism to perform its functions well. According to this conception, a human child is an immature specimen of the organism type, human, which, by nature, has the potentiality to develop into a mature specimen with the structure, form, and function of a normal or standard adult. 

Many adults today have this broadly Aristotelian conception of childhood without having actually read any of Aristotle. It informs their understanding of their own relationship toward the children around them. Thus they consider the fundamental responsibility they bear toward their children to be the obligation to provide the kind of supportive environment those children need to develop into normal adults, with the biological and psychological structures in place needed to perform the functions we assume that normal, standard adults can perform.

Two modifications of this Aristotelian conception have been particularly influential in the last century and a half. One is the 19th century idea that ontogeny recapitulates phylogeny (Gould, 1977), that is, that the development of an individual recapitulates the history and evolutionary development of the race, or species (Spock, 1968, 229). This idea is prominent in Freud (1950) and in the early writings of Jean Piaget (see, e.g. Piaget, 1933). Piaget, however, sought in his later writings to explain the phenomenon of recapitulation by appeal to general principles of structural change in cognitive development (see, e.g., Piaget, 1968, 27).

The other modification is the idea that development takes place in age-related stages of clearly identifiable structural change. This idea can be traced back to ancient thinkers, for example the Stoics (Turner and Matthews, 1998, 49). Stage theory is to be found in various medieval writers (Shahar, 1990, 21–31) and, in the modern period, most prominently in Jean-Jacques Rousseau’s highly influential work, Emile (1979). But it is Piaget who first developed a highly sophisticated version of stage theory and made it the dominant paradigm for conceiving childhood in the latter part of the 20th Century (see, e.g., Piaget, 1971).

Matthews (2008, 2009) argues that a Piagetian-type stage theory of development tends to support a “deficit conception” of childhood, according to which the nature of the child is understood primarily as a configuration of deficits—missing capacities that normal adults have but children lack. This conception, he argues, ignores or undervalues the fact that children are, for example, better able to learn a second language, or paint an aesthetically worthwhile picture, or conceive a philosophically interesting question, than those same children will likely be able to do as adults. Moreover, it restricts the range and value of relationships adults think they can have with their children.

Broadly Aristotelian conceptions of childhood can have two further problematic features. They may deflect attention away from thinking about children with disabilities in favour of theorizing solely about normally developing children (see Carlson 2010), and they may distract philosophers from attending to the goods of childhood when they think about the responsibilities adults have towards the children in their care, encouraging focus only on care required to ensure that children develop adult capacities.

How childhood is conceived is crucial for almost all the philosophically interesting questions about children. It is also crucial for questions about what should be the legal status of children in society, as well as for the study of children in psychology, anthropology, sociology, and many other fields.

2. Theories of Cognitive Development

Any well-worked out epistemology will provide at least the materials for a theory of cognitive development in childhood. Thus according to René Descartes a clear and distinct knowledge of the world can be constructed from resources innate to the human mind (Descartes, PW, 131). John Locke, by contrast, maintains that the human mind begins as a “white paper, void of all characters, without any ideas” (Locke, EHC, 121). On this view all the “materials of reason and knowledge” come from experience. Locke’s denial of the doctrine of innate ideas was, no doubt, directed specifically at Descartes and the Cartesians. But it also implies a rejection of the Platonic doctrine that learning is a recollection of previously known Forms. Few theorists of cognitive development today find either the extreme empiricism of Locke or the strong innatism of Plato or Descartes completely acceptable.

Behaviorism has offered recent theorists of cognitive development a way to be strongly empiricist without appealing to Locke’s inner theater of the mind. The behaviorist program was, however, dealt a major setback when Noam Chomsky, in his review (1959) of Skinner’s Verbal Behavior (1957), argued successfully that no purely behaviorist account of language-learning is possible. Chomsky’s alternative, a theory of Universal Grammar, which owes some of its inspiration to Plato and Descartes, has made the idea of innate language structures, and perhaps other cognitive structures as well, seem a viable alternative to a more purely empiricist conception of cognitive development.

It is, however, the work of Jean Piaget that has been most influential on the way psychologists, educators, and even philosophers have come to think about the cognitive development of children. Piaget’s early work, The Child’s Conception of the World (1929), makes especially clear how philosophically challenging the work of a developmental psychologist can be. Although his project is always to lay out identifiable stages in which children come to understand what, say, causality or thinking or whatever is, the intelligibility of his account presupposes that there are satisfactory responses to the philosophical quandaries that topics like causality, thinking, and life raise.

Take the concept of life. According to Piaget this concept is acquired in four stages (Piaget, 1929, Chapter 6):

  • First Stage: Life is assimilated to activity in general

  • Second Stage: Life is assimilated to movement

  • Third Stage: Life is assimilated to spontaneous movement

  • Fourth Stage: Life is restricted to animals and plants

These distinctions are suggestive, but they invite much more discussion than Piaget elicits from his child subjects. What is required for movement to be spontaneous? Is a bear alive during hibernation? We may suppose the Venus flytrap moves spontaneously. But does it really? What about other plants? And then there is the question of what Piaget can mean by calling the thinking of young children “animistic,” if, at their stage of cognitive development, their idea of life is simply “assimilated to activity in general.”

Donaldson (1978) offers a psychological critique of Piaget on cognitive development. A philosophical critique of Piaget’s work on cognitive development is to be found in Chapters 3 and 4 of Matthews (1994). Interesting post-Piagetian work in cognitive development includes Carey (1985), Wellman (1990), Flavell (1995), Subbotsky (1996), and Gelman (2003).

Recent psychological research on concept formation has suggested that children do not generally form concepts by learning necessary and sufficient conditions for their application, but rather by coming to use prototypical examples as reference guides. Thus a robin (rather, of course, than a penguin) might be the child’s prototype for ‘bird’. The child, like the adult, might then be credited with having the concept, bird, without the child’s ever being able to specify, successfully, necessary and sufficient conditions for something to count as a bird. This finding seems to have implications for the proper role and importance of conceptual analysis in philosophy. It is also a case in which we should let what we come to know about cognitive development in children help shape our epistemology, rather than counting on our antecedently formulated epistemology to shape our conception of cognitive development in children (see Rosch and Lloyd, 1978, and Gelman, 2003).

Some developmental psychologists have recently moved away from the idea that children are to be understood primarily as human beings who lack the capacities adults of their species normally have. This change is striking in, for example, the work of Alison Gopnik, who writes: “Children aren’t just defective adults, primitive grownups gradually attaining our perfection and complexity. Instead, children and adults are different forms of homo sapiens. They have very different, though equally complex and powerful, minds, brains, and forms of consciousness, designed to serve different evolutionary functions” (Gopnik, 2009, 9). Part of this new respect for the capacities of children rests on neuroscience and an increased appreciation for the complexity of the brains of infants and young children. Thus Gopnik writes: “Babies’ brains are actually more highly connected than adult brains; more neural pathways are available to babies than adults.” (11)

3. Theories of Moral Development

Many philosophers in the history of ethics have devoted serious attention to the issue of moral development. Thus Plato, for example, offers a model curriculum in his dialogue, Republic, aimed at developing virtue in rulers. Aristotle’s account of the logical structure of the virtues in his Nicomachean Ethics provides a scaffolding for understanding how moral development takes place. And the Stoics (Turner and Matthews, 1998, 45–64) devoted special attention to dynamics of moral development.

Among modern philosophers, it is again Rousseau (1979) who devotes the most attention to issues of development. He offers a sequence of five age-related stages through which a person must pass to reach moral maturity: (i) infancy (birth to age 2); (ii) the age of sensation (3 to 12); (iii) the age of ideas (13 to puberty); (iv) the age of sentiment (puberty to age 20); and (v) the age of marriage and social responsibility (age 21 on). Although he allows that an adult may effectively modify the behavior of children by explaining that bad actions are those that will bring punishment (90), he insists that genuinely moral reasoning will not be appreciated until the age of ideas, at 13 and older. In keeping with his stage theory of moral development he explicitly rejects Locke’s maxim, ‘Reason with children,’ (Locke, 1971) on the ground that attempting to reason with a child younger than thirteen years of age is developmentally inappropriate.

However, the cognitive theory of moral development formulated by Piaget in The Moral Judgment of the Child (1965) and the somewhat later theory of Lawrence Kohlberg (1981, 1984) are the ones that have had most influence on psychologists, educators, and even philosophers. Thus, for example, what John Rawls has to say about children in his classic work, A Theory of Justice (1971) rests heavily on the work of Piaget and Kohlberg.

Kohlberg presents a theory according to which morality develops in approximately six stages, though according to his research, few adults actually reach the fifth or sixth stages. In this respect Kohlberg’s theory departs from classic stage theory, as in Piaget, since the sequence of stages does not culminate in the capacity shared by normal adults. However, Kohlberg maintained that no one skips a stage or regresses to an earlier stage. Although Kohlberg sometimes considered the possibility of a seventh or eighth stage, these are his basic six:

  • Level A. Premoral

    • Stage 1—Punishment and obedience orientation

    • Stage 2—Naive instrumental hedonism

  • Level B. Morality of conventional role conformity

    • Stage 3—Good-boy morality of maintaining good relations, approval by others

    • Stage 4—Authority-maintaining morality

  • Level C. Morality of accepted moral principles

    • Stage 5—Morality of contract, of individual rights and democratically accepted law

    • Stage 6—Morality of individual principles of conscience

Kohlberg developed a test, which has been widely used, to determine the stage of any individual at any given time. The test requires responses to ethical dilemmas and is to be scored by consulting an elaborate manual.

One of the most influential critiques of the Kohlberg theory is to be found in Carol Gilligan’s In a Different Voice (1982). Gilligan argues that Kohlberg’s rule-oriented conception of morality has an orientation toward justice, which she associates with stereotypically male thinking, whereas women and girls are perhaps more likely to approach moral dilemmas with a “care” orientation. One important issue in moral theory that the Kohlberg-Gilligan debate raises is that of the role and importance of moral feelings in the moral life (see the entry on feminist ethics).

Another line of approach to moral development is to be found in the work of Martin Hoffman (1982). Hoffman describes the development of empathetic feelings and responses in four stages. Hoffman’s approach allows one to appreciate the possibility of genuine moral feelings, and so of genuine moral agency, in a very small child. By contrast, Kohlberg’s moral-dilemma tests will assign pre-schoolers and even early elementary-school children to a pre-moral level.

A philosophically astute and balanced assessment of the Kohlberg-Gilligan debate, with appropriate attention to the work of Martin Hoffman, can be found in Pritchard (1991). See also Friedman (1987), Lickona (1976), Kagan and Lamb (1987), and Pritchard (1996).

4. Children’s Rights

For a full discussion of children’s interests and children’s rights see the entry on the rights of children.

5. Childhood Agency and Autonomy

Clearly children are capable of goal-directed behavior while still relatively young, and are agents in this minimal sense. Respect for children’s agency is provided in legal and medical contexts, in that children who are capable of expressing their preferences are frequently consulted, even if their views are not regarded as decisive for determining outcomes.

The exercise of childhood agency will obviously be constrained by social and political factors, including various dependency relations, some of them imposed by family structures. Whether there are special ethical rules and considerations that pertain to the family in particular, and, if so, what these rules or considerations are, is the subject of an emerging field we can call ‘family ethics’ (Baylis and Mcleod 2014, Blustein, 1982, Brighouse and Swift 2014, Houlgate, 1980, 1999).

The idea that, in child-custody cases, the preferences of a child should be given consideration, and not just the “best interest” of the child, is beginning to gain acceptance in the U.S., Canada and Europe. “Gregory K,” who at age 12 was able to speak rationally and persuasively to support his petition for new adoptive parents, made a good case for recognizing childhood agency in a family court. (See “Gregory Kingsley” in the Other Internet Resources.) Less dramatically, in divorce proceedings, older children are routinely consulted for their views about proposed arrangements for their custody.

Perhaps the most wrenching cases in which adults have come to let children play a significant role in deciding their own future are those that involve treatment decisions for children with terminal illnesses. (Kopelman and Moskop, 1989) The pioneering work of Myra Bluebond-Langner shows how young children can come to terms with their own imminent death and even conspire, mercifully, to help their parents and caregivers avoid having to discuss this awful truth with them (Bluebond-Langner, 1980).

While family law and medical ethics are domains in which children capable of expressing preferences are increasingly encouraged to do so, there remains considerable controversy within philosophy as to the kind of authority that should be given to children’s preferences. There is widespread agreement that most children’s capacity to eventually become autonomous is morally important and that adults who interact with them have significant responsibility to ensure that this capacity is nurtured (Feinberg 1980). At the same time it is typical for philosophers to be skeptical about the capacity for children under the age of ten to have any capacity for autonomy, either because they are judged not to care stably about anything (Oshana 2005, Schapiro 1999), lack information, experience and cognitive maturity (Levinson 1999, Ross 1998), or are too poor at critical reflection (Levinson 1999).

Mullin (2007, 2014) argues that consideration of children’s capacity for autonomy should operate with a relatively minimal understanding of autonomy as self-governance in the service of what the person cares about (with the objects of care conceived broadly to include principles, relationships, activities and things). Children’s attachment to those they love (including their parents) can therefore be a source of autonomy. When a person, adult or child, acts autonomously, he or she finds the activity meaningful and embraces the goal of the action. This contrasts both with a lack of motivation and with feeling pressured by others to achieve outcomes desired by them. Autonomy in this sense requires capacities for impulse control, caring stably about some things, connecting one’s goals to one’s actions, and confidence that one can achieve at least some of one’s goals by directing one’s actions. It does not require extensive ability to engage in critical self-reflection, or substantive independence. The ability to act autonomously in a particular domain will depend, however, on whether one’s relationships with others are autonomy supporting. This is in keeping with feminist work on relational autonomy. See the entry on Feminist Perspectives on Autonomy.

Children’s autonomy is supported when adults give them relevant information, reasons for their requests, demonstrate interest in children’s feelings and perspectives, and offer children structured choices that reflect those thoughts and feelings. Support for children’s autonomy in particular domains of action is perfectly consistent with adults behaving paternalistically toward them at other times and in other domains, when children are ill-informed, extremely impulsive, do not appreciate the long-term consequences of their actions, cannot recognize what is in their interest, cannot direct their actions to accord with their interests, or are at risk of significant harm (Mullin 2014).

6. The Goods of Childhood

“Refrigerator art,” that is, the paintings and drawings of young children that parents display on the family’s refrigerator, is emblematic of adult ambivalence toward the productions of childhood. Typically, parents are pleased with, and proud of, the art their children produce. But equally typically, parents do not consider the artwork of their children to be good without qualification. Yet, as Jonathan Fineberg has pointed out (Fineberg, 1997, 2006), several of the most celebrated artists of the 20th century collected child art and were inspired by it. It may be that children are more likely as children to produce art, the aesthetic value of which a famous artist or an art historian can appreciate, than they will be able to later as adults.

According to what we have called the “Aristotelian conception”, childhood is an essentially prospective state. On such a view, the value of what a child produces cannot be expected to be good in itself, but only good for helping the child to develop into a good adult. Perhaps some child art is a counterexample to this expectation. Of course, one could argue that adults who, as children, were encouraged to produce art, as well as make music and excel at games, are more likely to be flourishing adults than those who are not encouraged to give such “outlets” to their energy and creativity. But the example of child art should at least make one suspicious of Michael Slote’s claim that “just as dreams are discounted except as they affect (the waking portions of) our lives, what happens in childhood principally affects our view of total lives through the effects that childhood success or failure are supposed to have on mature individuals” (Slote, 1983, 14).

Recent philosophical work on the goods of childhood (Brennan 2014, Macleod 2010) stresses that childhood should not be evaluated solely insofar as it prepares the child to be a fully functioning adult. Instead, a good childhood is of intrinsic and not merely instrumental value. Different childhoods that equally prepare children to be capable adults may be better or worse, depending on how children fare qua children. Goods potentially specific to childhood (or, more likely, of greatest importance during childhood) include opportunities for joyful and unstructured play and social interactions, lack of significant responsibility, considerable free time, and innocence, particularly sexual innocence. Play, for instance, can be of considerable value not only as a means for children to acquire skills and capacities they will need as adults, but also for itself, during childhood.

7. Philosophical Thinking in Children

For a full discussion of this topic see the entry on Philosophy for Children.

8. Moral Status of Children

It is uncontroversial to judge that what Mary Anne Warren terms paradigmatic humans have moral status (Warren 1992). Paradigmatic humans are adults with relatively standard cognitive capacities for self-control, self-criticism, self-direction, and rational thought, and are capable of moral thought and action. However, the grounds for this status are controversial, and different grounds for moral status have direct implications for the moral status of children. Jan Narveson (1988), for instance, argues that children do not have moral status in their own right because only free rational beings, capable of entering into reciprocal relations with one another, have fundamental rights. While Narveson uses the language of rights in his discussion of moral status (people have direct moral duties only to rights holders on his account), moral status need not be discussed in the language of rights. Many other philosophers recognize children as having moral status because of their potential to become paradigmatic humans without committing to children having rights. For instance, Allen Wood writes: “it would show contempt for rational nature to be indifferent to its potentiality in children.” (Wood 1998, 198)

When children are judged to have moral status because of their potential to develop the capacities of paradigmatic adults (we might call these paradigmatic children), this leaves questions about the moral status of those children who are not expected to live to adulthood, and those children whose significant intellectual disabilities compromise their ability to acquire the capacities of paradigmatic adults. There are then three common approaches that grant moral status to non-paradigmatic children (and other non-paradigmatic humans). The first approach deems moral consideration to track species membership. On this approach all human children have moral status simply because they are human (Kittay 2005). This approach has been criticized as being inappropriately speciesist, especially by animal rights activists. The second approach gives moral status to children because of their capacity to fare well or badly, either on straightforwardly utilitarian grounds or because they have subjective experiences (Dombrowski 1997). It has been criticized by some for failing to distinguish between capacities all or almost all human children have that are not also possessed by other creatures who feel pleasure and pain. The third approach gives moral status to non-paradigmatic children because of the interests others with moral status take in them (Sapontzis 1987), or the relationships they have with them (Kittay 2005)

Sometimes the approaches may be combined. For instance Warren writes that young children and other non-paradigmatic humans have moral status for two sorts of reasons: “their rights are based not only on the value which they themselves place upon their lives and well-being, but also on the value which other human beings place on them.” (1992, 197) In addition to these three most common approaches, Mullin (2011) develops a fourth: some non-paradigmatic children (and adults) have moral status not simply because others value them but because they are themselves capable of being active participants in morally valuable relationships with others. These relationships express care for others beyond their serving as means for one’s own satisfaction. Approaches to moral status that emphasize children’s capacity to care for others in morally valuable relationships also raise interesting questions about children’s moral responsibilities within those relationships (see Mullin 2010).

For more on this topic see the entry on the grounds of moral status.

9. Other Issues

The topics discussed above hardly exhaust the philosophy of childhood. Thus we have said nothing about, for example, philosophical literature on personhood as it bears on questions about the morality of abortion, or bioethical discussions about when it is appropriate for parents to consent to children’s participation in medical research or refuse medical treatment of their children. There has been increasing attention in recent years to questions about the appropriate limits of parental authority over children, about the source and extent of parents and the state’s responsibilities for children, and about the moral permissibility of parents devoting substantial resources to advancing the life prospects of their children. These and many other topics concerning children may be familiar to philosophers as they get discussed in other contexts. Discussing them under the rubric, ‘philosophy of childhood,’ as well in the other contexts, may help us see connections between them and other philosophical issues concerning children.

Self-mythologizing / Homo sapiens NT strikes again

Every once in a while, I like to check in with neurotypical “pop science” versions of WHO WE ARE – narcissism knows no limits.

From SLATE.com

Science / The state of the universe. (Not too pompous!)
Jan. 29, 2013

Why Are We the Last Apes Standing?

There’s a misconception among a lot of us Homo sapiens that we and our direct ancestors are the only humans ever to have walked the planet. It turns out that the emergence of our kind isn’t nearly that simple. The whole story of human evolution is messy, and the more we look into the matter, the messier it becomes.

)))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))

Before we go into this “messy” NT mythology, a note on the author, from his website, www.chipwalter.com:

Welcome!

At last you have made your way to the website of Chip Walter. (Try to control your excitement.) If you’re a curious person – and your discovery of this site attests that you are – then you’ve arrived at the right place. Go ahead, browse…

Chip is a journalist, author, filmmaker and former CNN Bureau Chief. He has written four books, all of them, one way or another, explorations of human creativity, human nature and human curiosity. (That should be a warning: shameless BS to follow.)

(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((

Paleoanthropologists have discovered as many as 27 different human species (the experts tend to debate where to draw the line between groups). These hominids diverged after our lineage split from a common ancestor we shared with chimpanzees 7 million years ago, give or take a few hundred millennia.

Many of these species crossed paths, competed, and mated. Populations ebbed and flowed in tight little tribes, at first on the expanding savannahs of Africa, later throughout Europe, Asia, and all the way to Indonesia. Just 100,000 years ago, there were several human species sharing the planet, possibly more: Neanderthals in Europe and West Asia, the mysterious Denisovan people of Siberia, the recently discovered Red Deer Cave people living in southern China, Homo floresiensis (the Hobbits of Indonesia), and other yet unknown descendants of Homo erectus who left indications that they were around (the DNA of specialized body lice, to be specific). And, of course, there was our kind, Homo sapiens sapiens (the wise, wise ones), still living in Africa, not yet having departed the mother continent. At most, each species consisted of a few tens of thousands of people hanging on by their battered fingernails. Somehow, out of all of these struggles, our particular brand of human emerged as the sole survivor and then went on, rather rapidly, to materially rearrange the world.

If there once were so many other human species wandering the planet, why are we alone still standing? After all, couldn’t another version or two have survived and coexisted with us on a world as large as ours? Lions and tigers coexist; so do jaguars and cheetahs. Gorillas, orangutans, bonobos, and chimpanzees do as well (though barely). Two kinds of elephants and multiple versions of dolphins, sharks, bears, birds, and beetles—countless beetles—inhabit the planet. Yet only one kind of human? Why?

More than once, one variety may have done in another either by murdering its rivals outright or outcompeting them for limited resources. But the answer isn’t as simple or dramatic as a war of extermination with one species turning on the other in some prehistoric version of Planet of the Apes. The reason we are still here to ruminate on why we are still here is because, of all those other human species, only we evolved a long childhood.

Over the course of the past 1.5 million years, the forces of evolution inserted an extra six years between infancy and pre-adolescence—a childhood—into the life of our species. And that changed everything.

Why should adding a childhood help us escape extinction’s pitiless scythe? Looked at logically, it shouldn’t. All it would seem to do is lengthen the time between birth and mating, which would slow down the clamoring business of the species’ own continuance. But there was one game-changing side effect of a long childhood. Those six years of life between ages 1 and 7 are the time when we lay the groundwork for the people we grow up to become. Without childhood you and I would never have the opportunity to step away from the dictates of our genes and develop the talents, quirks, and foibles that make us all the devastatingly charming, adaptable, and distinctive individuals we are.

Childhood came into existence as the result of a peculiar evolutionary phenomenon known generally as neoteny. (More about this sweeping misinterpretation later) The term comes from two Greek words, neos meaning “new” (in the sense of “juvenile”) and teinein meaning to “extend,” and it means the retention of youthful traits. In the case of humans, it meant that our ancestors passed along to us a way to stretch youth farther into life.

More than a million years ago, our direct ancestors found themselves in a real evolutionary pickle. On the one hand, their brains were growing larger than those of their rain forest cousins, and on the other, they had taken to walking upright because they spent most of their time in Africa’s expanding savannas. Both features would seem to have substantially increased the likelihood of their survival, and they did, except for one problem: Standing upright favors the evolution of narrow hips and therefore narrows the birth canal. And that made bringing larger-headed infants to full term before birth increasingly difficult.

If we were born as physically mature as, say, an infant gorilla, our mothers would be forced to carry us for 20 months! But if they did carry us that long, our larger heads wouldn’t make it through the birth canal. We would be, literally, unbearable. The solution: Our forerunners, as their brains expanded, began to arrive in the world sooner, essentially as fetuses, far less developed than other newborn primates, and considerably more helpless.

The Dutch anatomist Louis Bolk enumerated 25 specific fetal or juvenile features that disappear entirely in apes as they grow to adulthood but persist in humans. Flatter faces and high foreheads, for example, and a lack of body hair. The shape of our ears, the absence of large brow ridges over our eyes, a skull that sits facing forward on our necks, a straight rather than thumblike big toe, and the large size of our heads compared with the rest of our bodies. You can find every one of these traits in fetal, infant, or toddling apes, and in modern human adults.

In the nasty and brutish prehistoric world our ancestors inhabited, arriving prematurely could have been a very bad thing. But to see the advantages of being born helpless and fetal, all you have to do is watch a 2-year-old. Human children are the most voracious learners planet Earth has ever seen, and they are that way because their brains are still rapidly developing after birth. Neoteny, and the childhood it spawned, not only extended the time during which we grow up but ensured that we spent it developing not inside the safety of the womb but outside in the wide, convoluted, and unpredictable world.

The same neuronal networks that in other animals are largely set before or shortly after birth remain open and flexible in us. Other primates also exhibit “sensitive periods” for learning as their brains develop, but they pass quickly, and their brain circuitry is mostly established by their first birthday, leaving them far less touched by the experiences of their youth.

The major problem with all this NT self-congratulatory aggrandizement is the equally possible scenario: that this “open, externalized brain development” leaves human fetuses, infants and children highly vulnerable to disastrous consequences: death in infancy from neglect, disease and predation; maternal death; brain and nervous system damage due to not-so-healthy human environments; insufficient care and nutrition during critical post-birth growth; plus the usual demands and perils of nature. And in “modern” societies, it necessitates a tremendous amount of medical-technological intervention in problem pregnancies: extreme premature birth, caesarean section delivery, long periods of ICU support, and a growing incidence of life-long impairment.

“Inattentional Blindness” to any negative consequences of human evolution is a true failure in NT perception of the human condition.

Based on the current fossil evidence, this was true to a lesser extent of the 26 other savanna apes and humans. Homo habilis, H. ergaster, H. erectus, even H. heidelbergensis (which is likely the common ancestor of Neanderthals, Denisovans, and us), all had prolonged childhoods compared with chimpanzees and gorillas, but none as long as ours. In fact, Harvard paleoanthropologist Tanya Smith and her colleagues have found that Neanderthals reversed the trend. By the time they met their end around 30,000 years ago, they were reaching childbearing age at about the age of 11 or 12, which is three to five years earlier than their Homo sapiens cousins. Was this in response to evolutionary pressure to accelerate childbearing to replenish the dwindling species? Maybe. But in the bargain, they traded away the flexibility that childhood delivers, and that may have ultimately led to their demise.

Aye, yai, yai! This string of NT echolalia, copied and pieced together from pop-science interpretations of “science projects”, is worthy of Biblical mythology… a montage, a disordered mosaic; a collage of key words that condenses millions of years of evolutionary change into a “slightly longer” (call it 6 million years instead of 6 thousand – sounds more scientific) history of Creation… this is for neurotypical consumption: It’s okay… Evolution is really just magic, after all!

We are different. During those six critical years, our brains furiously wire and rewire themselves, capturing experience, encoding and applying it to the needs of our particular life. Our extended childhood essentially enables our brains to better match our experience and environment. (Whatever that is supposed to mean – like wearing Bermuda shorts to the beach?) It is the foundation of the thing we call our personalities, the attributes that make you you and me me. Without it, you would be far more similar to everyone else, far less quirky and creative and less, well … you. Our childhood also helps explain how chimpanzees, remarkable as they are, can have 99 percent of our DNA but nothing like the same level of diversity, complexity, or inventiveness.

You are creative and quirky (dull and conformist) – and even if that’s a shameless lie (it is), AT LEAST you’re smarter than a chimpanzee!  

Our long childhood has allowed us to collectively engage in ever broadening conversations as we keep finding new ways to communicate; we jabber and bristle with invention and pool together waves of fresh ideas, good and bad, into that elaborate, rambling edifice we call human civilization. Without all of this variety, all of these interlocked notions and accomplishments, the world, for better or worse, would not be as it is, brimming with this species of self-aware conflicted apes, ingenious enough to rocket rovers off to Mars and construct the Internet, wage wars on international scales, invent both WMDs and symphonies. If not for our long childhoods, we would not be here at all, the last apes standing. Can we remain standing? Possibly. I’m counting on the child in us, the part that loves to meander and play, go down blind alleys, wonder why and fancy the impossible.

How shockingly stupid (and awful) writing. 

 

Child Abandonment / An Asperger Experience

I find the subject of abandonment to be an unrecognized and serious consequence of being diagnosed / labeled Autistic or Asperger’s. To live in a household, as a member of a human community, and to be singled out as a mistake, by one’s own kind, out of the millions of diverse and exotic life forms that evolution has produced – How is this not abandonment? 

Some children are literally abandoned; for others it’s a mysterious social-emotional estrangement that no one ever talks about. A “wall” that is indescribable; suffocating, and overwhelming; a dark anxiety that threatens the very biological expectation of every helpless newborn to be loved. Love is survival; acceptance is love.

_____________________________________

Child Abandonment Stories in Folklore and Fairy Tales

http://austinhackney.co.uk/2016/04/07/child-abandonment-stories-in-folklore-and-fairy-tales/

by Austin Hackney

The Contemporary Relevance of Folkloric Traditions

I’d like to write about child abandonment stories. It’s something that’s been on my mind recently. Partly for personal reasons I won’t go into here, and partly reflecting on the fate of thousands of solitary refugee children who find themselves separated from their families; alone and abandoned in lands as foreign, alien and threatening to them as any dark forest in a fairy tale.

In contemporary folkloric literature and analyses of fairy tales a great deal is often written about images of abuse, incest, mutilation and other darker aspects which we find as common themes in such contexts. Not so much is written about the theme of child abandonment.

However, I do think it’s one worth examining. It has a rather stark and contemporary relevance. It’s certainly a theme which touches all of our lives at some point. In childhood, in intimate relationships, in bereavement, a sense of being abandoned, however temporarily, is something we all know. Little wonder, then, that it plays such a large part in many fairy tales, mythologies, and folkloric traditions.

Child Abandonment Stories Around the World

Child abandonment stories appear in many different forms in folklore, legends, and tales all over the globe. Perhaps the most commonly known manifestation of this motif, at least in the European West, is that of the abandonment of children by their parents.

One may think of Hansel and Gretel, whose parents abandon them in the dark forest because they can no longer afford to clothe and feed them. It would be possible to make any number of symbolic interpretations, of course, but there’s little doubt in my mind the root of such tales is embedded deeply in historical facts.

In other stories there are darker suggestions yet. Children are abandoned because they are born as a consequence of an incestuous union, or they have themselves been used in such fashion.

In some tales we find the child is cursed, marked, or in some other way ill-omened. In the Celtic fairy traditions, children who are considered to have been parented by supernatural beings or the devil, or to be changelings swapped by the fairy folk, or prophesied to destroy one or other of the parents, are frequently abandoned even as newborn babies.

Again, this practice most likely finds its origin in history rather than fantasy. Many contemporary and near-contemporary anthropological studies show how religious or folk beliefs, and more often economic and social pressures, lead to situations in which children who cannot be supported, or for other reasons cannot be accepted by their social group, are abandoned or even slain.

In many primary, patriarchal societies where males are highly valued, a female child may risk being “exposed” in the wild and left to die. This is commonly the case in cultures with a dowry system in which the parents simply cannot afford to “marry off” a daughter.

The Fairy Tale Perspective

In these historical and contemporary social contexts the likelihood of survival for these children is, of course, very low. It’s intended they should die. In folklore, as in fairy tale, things turn out very differently.

Most fairy tale children will survive. Most will overcome their abandonment, achieving personal transformation and success or, ultimately, reunion and reintegration with parents and society as powerful and independent adults. (Not a likely outcome for ASD, Asperger children) 

In this respect it may not be wholly fantastical to speculate that aspects of abandonment stories may incorporate elements of ritual initiation into adulthood; whereby the child is ritually abandoned or slain, in order to return or be reborn as an adult.

Adoption by Animal Guardians

Frequently in the folkloric child abandonment stories, the abandoned child or children are adopted by animal guardians. They may also fall into the hands of friendly or wicked witches, childless peasants or other guides and carers. (Asperger children often seek these mentors on their own) In some circumstances they find themselves adopted into royal households. One need only think of Moses, abandoned in his wicker basket and adopted by the pharaoh himself. Or, in Greek legend, Oedipus (whose name, interestingly, refers to the binding of his feet when he was abandoned in the mountains). Turning to the North European tradition, there are the tales of Havelock the Dane, who was unjustly displaced from his royal heritage as a child. Of course, such children live and grow not only to survive but typically, against all striving of the antagonists, to overcome their fate, or fulfill prophecy, or achieve the endowment of supernatural powers.

The Inuit tales are particularly interesting in this regard. Frequently, children abandoned in their story lines not only become great hunters or warriors, but return triumphantly from their exile and use their new skills to feed or protect those who rejected them. There’s also a minor tradition in some Native American folklore in which abandoned children return and use their power to avenge themselves, often by the murder of their parents or elders.

It is not, however, only the young who are at risk of abandonment.

Abandonment of the Elders

Exposure of the elderly and infirm has also proved common among nomadic peoples. When a person is no longer seen as viable (think of our own term invalid, meaning no longer valid); no longer able to make a supportive contribution to the group as a whole, and merely a drain on the resources of others, she is frequently abandoned in the wild to meet her last days, usually dying of cold or starvation. (Much more common in the U.S. than Americans will admit; substandard nursing homes and shelters serve as “the wild” today) 

It would be a mistake to associate this custom exclusively with nomadic tribal peoples of ancient times. It was common enough among the white European invaders of Turtle Island (now more popularly known as North America). The settlers driving West in their pioneer wagons left thousands of old people to die on the plains, in the mountains, and in the deserts along the way.

In a certain sense, and it’s a very real one, we continue this practice today in modern Britain. I’m not thinking only of the increasing numbers of elderly homeless persons we see begging on the streets. I’m also thinking of the literally millions of our Elders effectively abandoned in impersonal institutions whose primary motive is the profit margin, and whose appalling standards of care have been the subject of recent government inquiry. This is in stark contrast to my experience in my adopted home land of Italy, where it is fully expected that the family will rally together to care for the Elders, who play a full part in the family home until they breathe their last.

Abandonment of the Spouse

Another aspect of this motif, perhaps best exemplified in the Eurasian folktales of the “handless maiden”, is the abandonment of the wife. The cutting off of the hands by the husband before expulsion from the home is no mere invention of the dark folkloric imagination. Actual mutilation of this type was a common punishment in times gone by, intended to disable the rejected person from feeding and fending for themselves. There are still countries today, Saudi Arabia springs to mind, in which various forms of mutilation, including cutting off hands, remain as punishments enshrined in law for female adulterers and the victims of rape.

Not all instances of abandonment in folklore and fairy tales are directly linked to such dark origins. Sometimes it’s simply a storytelling device which enables the heroine or hero to be liberated from the constraints of domesticity, and to participate fully in the adventure which will transform their lives.

Child Abandonment in Contemporary Children’s Literature

This more positive use of child abandonment stories is commonly reflected in adventure and fantasy stories of our own time written for children and teenagers. The protagonists in such tales are frequently abandoned, either through mischance, bereavement, or just for the holidays, so they can be the prime movers in their own stories without the guidance and restriction of parental authority. (An Asperger child may choose this path of development, upon recognizing the futility of relying on parents or authority figures for aid.)  

While written in the steampunk genre, my own recently published novel, Beyond the Starline, uses this time-honored technique to pitch the protagonist forward into adventure. It also relies heavily on imagery drawn from folklore – the maiden in the tower, the animal guide, the girl disguised as a boy, and so on. It also deals with the attempt to reconcile the desire for belonging with the desire for freedom, (a perpetual Asperger dilemma) and explores the childhood experience of abandonment by parents and society. In that sense I consider it a fairy tale, even though its setting and technologies are “retrofuturistic.”

In any case, whether in terms of giving us an insight into the past, a lens through which to examine and critique our own society, or as inspiration for new forms of storytelling, the ancient motifs of folklore and fairy story continue to exercise their power in profoundly relevant and contemporary ways. If we abandon them, then we abandon ourselves.

 

Physical Education and Sport / Ancient Times to Enlightenment

EUROPEAN JOURNAL OF EDUCATIONAL RESEARCH / Vol. 2, No. 4, 191-202 / ISSN 2165-8714 Copyright © 2013 EUJER

“Bikini Girls” exercising, Sicily, 4th C. AD

https://files.eric.ed.gov/fulltext/EJ1086323.pdf

Harmandar Demirel & Yıldıran / Dumlupinar University, Gazi University, Turkey

(I’ve broken the text into shorter paragraphs for easier reading and omitted some introductory material. The complete pdf is about 8 pages. I’ve highlighted a few main ideas and vocabulary.)

My general comment is that American Public Education is essentially less “sophisticated” than even Ancient Greece and Rome; a disgrace and “Medieval”…

An Overview from the Ancient Age to the Renaissance

The Greek educational ideal which emerged during the 8th – 6th centuries B.C. aimed at developing general fitness via “gymnastics” and the “music” of the body; that is, the development of body and spirit in a harmonic body and, in this way, providing a beautiful body, mental development and spiritual and moral hygiene. These are expressed by the word Kalokagathia, meaning both beautiful and good, based on the words “Kalos” and “Agathos” (Aytaç, 1980; Alpman, 1972). Thus, the use of physical training and sport as the most suitable means to this ideal was first discussed in Ancient Greece (Yildiran, 2005). To achieve the ideal of kalokagathia, three conditions were required: nobility, correct behaviour and careful teaching (Yildiran, 2011). Physical beauty (kalos) did not refer just to external appearance; it also referred to mental health. Humans who had these qualifications were considered ideal humans (kalokagathos) (Bohus, 1986). The Kalokagathia ideal, which was developed during the early classical age, saw the archaic-aristocratic high value of “arete” refined and deepened (Popplow, 1972).

The vital point of aristocratic culture was physical training; in a sense, it was sport. The children were prepared for various sport competitions under the supervision of a paidotribes (a physical education teacher) and learned horse riding, discus and javelin throwing, long jumping, wrestling and boxing. The aim of the sport was to develop and strengthen the body, and hence, the character (Duruskken, 2001). In Ancient Greece, boys attended wrestling schools because it was believed that playing sports beautified the human spirit as well as the body (Balcı, 2008). The palaestra was a special building within ancient gymnasiums where wrestling and physical training were practiced (Saltuk, 1990). The education practiced in this era covered gymnastic training and music education, and its aim was to develop a heroic mentality, but only for royalty. With this goal in mind, education aimed to discipline the body, raising an agile warrior by developing a cheerful and brave spirit (Aytac, 1980).

The feasts which were held to worship the gods in Ancient Greece began for the purpose of ending civil wars. All sport-centred activities were of religious character. As the ancient Olympic Games were of religious origin, they were conducted in Olympia. (Home of the gods) Over time, running distances increased, new and different games were added to the schedule, soldiers began to use armour in warfare, art and philosophy were understood better and great interest was shown in the Olympic Games; therefore, the program was enriched and changed, and the competitions were increased from one to five days (Er et al., 2005). However, the active or passive attendance of married women was banned at the ancient Olympic Games for religious reasons (Memis and Yıldıran, 2011). The Olympic Games had an important function as one of the elements aimed at uniting the ancient Greeks culturally, but this ended when the games were banned by Emperor Theodosius 1st in 393-4 A.D. (Balci, 2008).

Sparta, which is located in the present-day Mora peninsula, was an agricultural state that had been formed by the immigration of Dorians from the 8th century B.C. Spartan education provided an extremely paternalistic education, which sought the complete submergence of the individual in the citizen and provided him with the attributes of courage, complete obedience and physical perfection (Cordasco, 1976). In Sparta, where iron discipline, military proficiency, strictness and absolute obedience constituted the foundations of social order, the peaceful stages of life had the character of a “preparation for the war school” (Aytac, 1980). The essential thing that made Hellenic culture important was its gaining new dimensions with distinctive creative power regarding cultural factors that this culture had adopted from the ancient east, and its revealing of the concept of the “perfect human” (Iplikcioglu, 1997).

Children stayed with their family until they were seven years old; from this age, they were assigned to the state-operated training institutes where they were trained strictly in war and state tasks. Strengthening the body and preparing for war took a foremost place in accordance with the military character of the state. Girls were also given a strict military training (Aytac, 1980). The same training given to the boys was also given to the girls. The most prominent example of this is the girls and boys doing gymnastics together (Russel, 1969). Although physical training and music education were included, reading, writing and arithmetic were barely included in Spartan education (Binbasioglu, 1982).

Unlike Sparta, the classical period of Athenian democracy (Athens had advanced trade and industry) included the Persian Wars and Peloponnese Wars, and Cleisthenes’ democratic reforms and the ending of sea domination in domestic policy. As this democracy covered “the independent layer”, it took the form of an “aristocratic democracy” (Aytaç, 1980). Learning was given great importance in the Athenian democracy. The sons of independent citizens received education in grammar at home or at private school. Music education and gymnastic training were carried out in “Gymnasiums” and “Palestrae”, which were built and controlled by the state; running areas were called “Dromos”, and chariot race areas were termed “Hippodromes” (Aytac, 1980). Children older than 12 years started receiving sports training and music education in Athens, where military training was barely included.

Athenians insisted on the aesthetical and emotional aspects of education. Therefore, the best art works of the ancient world were created in this country (Binbasioglu, 1982). As Greek education in the 5th century B.C. was unable to respond appropriately to new developments, the Sophists emphasised the development of traditional education in terms of language and rhetoric in an attempt to overcome the crisis. Sophists provided education in morals, law, and the natural sciences in addition to the trivium (grammar, rhetoric, dialectic) (Aytac, 1980).

Greeks considered physical training prudent and important because it developed the body and organised games conducive to the gathering of large crowds; in these games, all regions of Greece were represented (Balci, 2008). Rome constitutes the second most important civilisation of the Ancient age. In Rome, the family played the strongest role in education, and the state did not have much say or importance. While exercise constituted the means of education in Ancient Rome, the purpose of this education was “to raise a good citizen”, such that each person had a skilled, righteous and steady character. Physical training was provided in addition to courses such as mythology, history, geography, jurisprudence, arithmetic, geometry and philosophy; this training was provided in Grammar schools, where basic teaching covered the “Seven free arts” (Aytac, 1980).

Due to the Scholastic structure of the Middle Ages, values respecting the human were forgotten. However, the “Renaissance” movement, which started in Europe and whose ideas inform the modern world, developed many theories related to education and physical training and attempted to apply this in various ways; the development of these ideas was continued in “The Age of Enlightenment”.

The Renaissance / General Aspects of the Renaissance

The word renaissance means “rebirth”; in this period, artists and philosophers tried to discover and learn the standards of Ancient Rome and Athens (Perry et al., 1989). In the main, the Renaissance represented a protest of individualism against authority in the intellectual and social aspects of life (Singer, 1960). The Renaissance reminded lovers of “Beauty” of the development of a new art and imagination. From the perspective of a scientist, the Renaissance represented innovation in ancient sciences, and from the perspective of a jurist, it was a light shining over the shambles of old traditions.

Human beings found their individuality again during this era, in which they tried to understand the basics of nature and developed a sense of justice and logic. However, the real meaning of “renaissance” was to be decent and kind to nature (Michelet, 1996). The Renaissance was shaped in Italy beginning from the 1350s as a modern idea contradicting the Middle Ages. The creation of a movement for returning to the old age with the formidable memories of Rome naturally seemed plausible (Mcneill, 1985). New ideas that flourished in the world of Middle Age art and developed via various factors did not just arise by accident; incidents and thoughts that developed in a social context supported them strongly (Turani, 2003). Having reached its climax approximately in the 1500s, the Italian Renaissance constituted the peak of the Renaissance; Leonardo da Vinci observed the outside world, people and objects captiously via his art, and Niccolo Machiavelli drastically analysed the nature and use of politics through his personal experiences and a survey of classical writers (Mcneill, 1985).

The Concept of Education and Approaches to Physical Training during the Renaissance

The humanist education model, which was concordant with the epitomes of the Renaissance, was a miscellaneous, creative idea. Its goal was to create an all-round advanced human being, “homo universale”. At the same time, such an educational epitome necessarily gained an aristocratic character. This educational epitome no longer provided education to students at school (Aytac, 1980).

In the 14th century, the “humanist life epitome” was claimed. The humanism movement was gradually developing and spreading; however, in this phase, humanism-based formation or practice was not in question. In the history of humanity, the humanism period has been acknowledged as a ‘transitional period’. Modern civilisation and education are based on this period. Philosophers such as Erasmus, Rabelais, Montaigne and Luther flourished during this period. Universities began to multiply, and latitudinarianism was created. Scholastic thought was shaken from its foundations at the beginning of this period via the influence of Roger Bacon (scientist), who lived during the 13th Century.

Original forms of works constituting the culture of Ancient Athens and Rome were found, read, and recreated concordantly; moreover, the ideas of latitudinarian, old educators such as Quintilianus were practiced. In teaching methods, formulae enabling pupils to improve their skills and abilities were adopted. Students started to learn outdoors, in touch with nature. Strict disciplinary methods gave way to rather tolerant methods. The importance and value of professional education were acknowledged (Binbasioglu, 1982). Positive sciences, such as history, geography and natural history were not given a place in the classroom for a long time, but Latin preserved its place until recent times (Aytac, 1980).

With Desiderius von Erasmus, who was alive during the height of European humanism, humanism adopted its first scientific principle: “Return to sources!”; for this reason, the works of ancient writers were published. Erasmus’ educational epitome consists of a humanist-scientific formulation; however, it does not externalise the moral-religious lifestyle. Having worked to expand humanity into higher levels, Erasmus summarises the conditions for this quest as follows: good teachers, a useful curriculum, good pedagogical methods, and paying attention to personal differences among pupils. With these ideas, Erasmus represents the height of German humanist pedagogy (Aytaç, 1980).

Notice the antagonistic setup between faith and science that we still experience today in the U.S.?

On the other hand, Martin Luther considered universities as institutions where “all kinds of iniquity took place, there was little faith in sacred values, and the profane master Aristotle was taught imprudently”, and he demanded that schools and especially universities be inspected. Luther thought that schools and universities should teach religiously inclined youth in a manner heavily dependent on the Christian religion (Aytac, 1980). Alongside these ideas, Luther made statements about the benefits of chivalric games and training, and of wrestling and jumping, to health, which, in his opinion, could make the body more fit (Alpman, 1972).

The French philosopher Michel de Montaigne, known for his “Essays”, was a lover of literature who avoided any kind of extreme and was determined, careful and balanced. In his opinion, the aim of education was to transfer “ethical and scientific knowledge via experiments’’ to pupils. De Montaigne believed that a person’s skills and abilities in education, which can be called natural powers, are more important than or even superior to logic and society (Binbasioglu, 1982). The Humanist movement has played a very significant role in educational issues. This movement flourished in order to resurrect the art and culture of ancient Athens and Rome with their formidable aspects, thereby enabling body and soul to improve concordantly with the education of humans (Alpman, 1972). Humanism was not a philosophical system but a cultural and educational program (Kristeller, 1961).

Note that in the United States, current public education is obsessed with “social engineering” based on two religious ideologies: (1) liberal / puritanical – social and psychological theory-based; conformity to prescriptive “absolutes” of human behavior; and (2) evangelical – anti-science, faith-based denial of reality; socio-emotional fervor. These competing religious systems have replaced a brief period of “humanist” academic emphasis; the arts and physical education have been jettisoned, supposedly due to “budget” limitations… but this elimination of “expressions of individual human value” is a choice made by parents and educators to “ban” secular ideals from education.

The necessity of physical training along with education of soul and mind has been emphasised; for this reason, physical practices and games have been suggested for young people. It is possible to see how the humanists formed the foundations of the Renaissance, beginning from the 14th century to the 18th century and working from Italy to Spain, Germany, France and England. Almost all of the humanists stated the significance of physical training in their written works on education (Alpman, 1972).

One of the humanists, Vittorino da Feltre, may have viewed it as the most pleasant goal of his life to raise a group of teenagers; he fed and educated poor but talented children at his home (Burckhardt, 1974). Feltre practiced a classical education in his school called “Joyful Residence”. In accord with Ancient Greek education concepts, he claimed that benefits were provided by the education of body and soul through daily exercises such as swimming, riding and swordplay, and generating love towards nature via hiking; he also emphasised the importance of games and tournaments (Alpman, 1972; Aytac, 1980). Enea Silvio de Piccolomini is also worthy of attention; alongside his religious character, he thought that physical training should be emphasised and that beauty and power should be improved in this way (Alpman, 1972). de Piccolomini drew attention to the importance of education as a basis for body and soul while stressing the importance of avoiding things that cause laxity, games and resting (Aytac, 1980). Juan Ludwig Vives, a systematic philosopher who had multiple influences, in one of his most significant works, “De Tradendis Disciplinis”, published in 1531, advised such practices as competitive ball playing, hiking, jogging, wrestling and braggartism, beginning from the age of 15 (Alpman, 1972).

The German humanist Joachim Camerarius, who managed the academic gymnasium in the city of Nürnberg, is also very important in relation to this subject. Having practiced systematic physical training at the school in which he worked, Camerarius wrote his work, “Dialogus de Cymnasis”, which refers to the pedagogical and ethical values of Greek gymnastics. In this work, he stressed such practices as climbing, jogging, wrestling, swordplay, jumping, stone throwing and games that were practiced by specially selected children according to their ages and physical abilities, all under the supervision of experienced teachers (Alpman, 1972). The Italian Hieronymus Mercurialis’ De Arte Gymnastica, first published in Latin in Venice in 1569, contained very little on the Olympic Games. Indeed, the author was hostile to the idea of competitive athletics. The Frenchman Petrus Faber’s Agonisticon (1592), in its 360 pages of Latin text, brought together in one place many ancient texts concerning the Olympics but was disorganised, repetitive and often unclear (Lee, 2003). The first part of the De Arte Gymnastica included the definition of Ancient Greek gymnastics and an explanation of actual terminology, whereas the second part contained precautions about the potential harms of exercises practiced in the absence of a doctor. Moreover, he separated gymnastics practised for health reasons from military gymnastics (Alpman, 1972).

Note the military requirement for its personnel to be “physically fit” compared to the general U.S. population (including children), which is chronically obese, sedentary and unhealthy. “Being physically fit” (or at least the appearance of it) is now a status symbol of the wealthy classes and social celebrities, requiring personal trainers, expensive spa and gym facilities, and high-tech gadgets and equipment.

The Transition to the Age of Enlightenment: Reformation, Counter-reformation and the Age of Method

The Age of Reformation: The most significant feature of European cultural life during this age was the dominant role played by religious issues, unlike the Renaissance in Italy (Mcneill, 1985). This age symbolises the uprising of less civilised societies against logic-dominated Italy (Russell, 2002). Bearing a different character from Renaissance and Humanism, the Reformation did not stress improvements in modern art or science, but rather improvements in politics and the Church; consonant with this, its education epitome emphasised being religious and dependent on the Church. Nevertheless, both Humanism and the Reformation struggled against Middle Ages scholasticism, and both appreciated the value of human beings (Aytac, 1980).

The Counter-reformation Movement: In this period, which includes the movement of the Catholic church to retake privileges that it had lost due to the Reformation, the “Jesuit Sect” was founded to preach, confess and collect “perverted minds” once again under the roof of the Catholic church via teaching activities (Aytac, 1980).

The Age of Method: Also known as the Age of Practice, this period saw efforts to save people from prejudice, and principles for religion, ethics, law and state were sought to provide systematic knowledge in a logic-based construction. Aesthetic educational approaches, which were ignored by religion and the Church because of the attitudes prevailing during the Reformation and Counterreformation, were given fresh emphasis. Bacon, Locke, Ratke, Komensky, Descartes and Comenius are among the famous philosophers who lived during this period (Aytac, 1980).

The Age of Enlightenment / General Features and Educational Concepts of the Enlightenment

The Enlightenment Period made itself felt approximately between 1680 and 1770, or even 1780. Science developed into separate disciplines, literature became an independent subject, and it was demanded that history also become independent (Chaunu, 2000). During this period, educators transformed the concept of education from preparing students for the afterlife into preparing them for the world around them, so that they could be free and enlightened.

Moreover, educators of the period were usually optimistic and stressed the importance of study and work. At school, students were educated in such a way as to engrain a love of nature and human beings. Based on these ideas, learning was undertaken by experiment and experience (Binbasioglu, 1982). William Shakespeare mentioned the concept of “Fair Play” and the ideas of “maintain equality of opportunity” and “show the cavalier style of thinking” at the end of the 16th century; by the 18th century, these ideas were included in sport (Gillmeister, 1988). Systematic changes in the foundations of the principles of fair play that occurred in the 19th century were directly related to the socio-cultural structure of Victorian England (Yildiran, 1992).

The Concept of Physical Training during the Enlightenment and Its Pioneers

Ideas and epitomes produced prior to this period were ultimately practiced in this period. Respected educators of the period stressed the significance of physical training, which appealed only to the aristocracy during the Renaissance; simulating the education system of the Ancient Age, educators started to address everyone from all classes, and their views spread concordantly in this period.

John Locke: The Enlightenment reached maturity during the mid-to-late eighteenth century. John Locke, a lead player in this new intellectual movement (Faiella, 2006), was likely the most popular political philosopher during the first part of the 18th century, and he stressed the necessity of education (Perry et al., 1989). Locke’s “Essay on Human Intellect” is acknowledged as his most prominent and popular work (Russell, 2002). His work “Notions of Education” stressed the importance of child health and advised children to learn swimming and to maintain their fitness. Moreover, Locke noted that such activities as dance, swordplay and riding were essential for a gentleman (Alpman, 1972) and that education should be infused with game play (Binbaşıoğlu, 1982).

Jean Jacques Rousseau: In his work Emile, the philosopher from Geneva discussed educational matters in regard to the principles of nature (Russell, 2002). In this work, which he wrote in 1762, Rousseau argued that individuals should learn from nature, human beings or objects (Perry et al., 1989), and expressed his notions concerning the education of children and teenagers (Binbasioglu, 1982). Rousseau held that children should be allowed to develop and learn according to their natural inclinations, but in Emile, this goal was achieved by a tutor who cunningly manipulated his pupil’s responses (Damrosch, 2007). The aforesaid education was termed “Natural education” of the public, or “education which will create natural human beings” (Aytaç, 1980). Emile exercised early in the morning because he needed strength, and because a strong body was the basic requirement for a healthy soul. Running with bare feet, high jumping, and climbing walls and trees, Emile mastered such skills as jogging, swimming, stone throwing, archery and ball games. Rousseau demanded that every school should have a gymnasium or an area for training (Alpman, 1972).

Continued next post. Time to watch the Olympics!

The human word brain vs. the visual bird brain / Aye, yai, yai

A Blog by Robert Krulwich, 12/03/15

How a 5-Ounce Bird Stores 10,000 Maps in Its Head

Around now, as we begin December, the Clark’s nutcracker has, conservatively, 5,000 (and up to 20,000) treasure maps in its head. They’re accurate, detailed, and instantly retrievable.

It’s been burying seeds since August. It’s hidden so many (one study says almost 100,000 seeds) in the forest, meadows, and tree nooks that it can now fly up, look down, and see little x’s marking those spots —here, here, not there, but here—and do this for maybe a couple of miles around. It will remember these x’s for the next nine months.

This is an assumption based on how humans make maps. Is the bird using an aerial map (that covers several square miles) that it has composed from “little maps” that are based on the arrangement of a few objects on the ground, in 5,000-20,000 separate locations? Does it then transform this complex projection (from ground points to an aerial view) into a “graphic map” with x’s on it? This is where BEING LITERAL counts, if one is to understand how the bird thinks; that is, how it collects and processes information from the environment, and then arranges it in a useable form. Does the bird rely on a built-in Google Map app? 

Humans are not very good at “imagining” how other life forms function in relation to the environment. Are these maps at all, or simply images? In visual thinking, the image IS the content. Does the bird brain compare what it sees while searching for caches with an image that is embedded in its visual records (and is always available and “updated” as the environment changes) or is it calculating distance and location mathematically, using “trigonometry software” like a computer? Either way, it still needs an accurate “memory” of locations… which for the bird must be acquired through its senses – perhaps several senses are involved?  

How does it do it? / 32 Seeds a Minute

It starts in high summer, when whitebark pine trees produce seeds in their cones—ripe for plucking. Nutcrackers dash from tree to tree, inspect, and, with their sharp beaks, tear into the cones, pulling seeds out one by one. They work fast. One study clocked a nutcracker harvesting “32 seeds per minute.”

These seeds are not for eating. They’re for hiding. Like a squirrel or chipmunk, the nutcracker clumps them into pouches located, in the bird’s case, under the tongue. It’s very expandable … The pouch “can hold an average of 92.7 plus or minus 8.9 seeds,” wrote Stephen Vander Wall and Russell Balda. (Aye, yai yai!) Biologist Diana Tomback thinks it’s less, but one time she saw a (bigger than usual) nutcracker haul 150 seeds in its mouth. “He was a champ,” she told me.

Next, they land. Sometimes they peck little holes in the topsoil or under the leaf litter. Sometimes they leave seeds in nooks high up on trees. Most deposits have two or three seeds, so that by the time November comes around, a single bird has created 5,000 to 20,000 hiding places. They don’t stop until it gets too cold. “They are cache-aholics,” says Tomback.

When December comes—like right around now—the trees go bare and it’s time to switch from hide to seek mode. Nobody knows exactly how the birds manage this, but the best guess is that when a nutcracker digs its hole, it will notice two or three permanent objects at the site: an irregular rock, a bush, a tree stump. The objects, or markers, will be at different angles from the hiding place. (???)

Next, they measure. (How are they measuring? Do they use feet and inches or the metric system? A tape measure? A laser scanner? LOL) This seed cache, they note, “is a certain distance from object one, a certain distance from object two, a certain distance from object three,” says Tomback. “What they’re doing is triangulating. They’re kind of taking a photograph with their minds (brain) to find these objects” using (3) reference points.

You can see from the video that “triangulation” is not what the researchers think it is!  

Psychologist Alan Kamil has a different view. He thinks the birds note the landmarks and remember not so much the distances, but the angles: where one object is in relation to the others. (“The tree stump’s 80 degrees south of the rock.”) Aye, yai, yai! These nutcrackers are doing geometry more than measuring. (OMG!)

Yes, birds think in words; measure distances and angles, take notes, and identify “trees” as trees, “rocks” as rocks, and “do the math” (wrongly) just like psychologists. Another huge Asperger sigh…

Imagine that the bird is positioned where the theodolite sits on the survey table (a tree branch), and (according to the researchers) is trying to remember a point where it dug a hole and buried seeds.

Note that TWO points are needed for triangulation: point A and point B. This requires that the bird records data from two different positions in the landscape at a known distance from each other. But, even then, it’s not the “point” where a cache can be found that can be calculated, but the DISTANCE TO THE TREE (along dashed line) from the baseline. If the cache is in or below the tree, the bird can SEE where it is… 

If we assume that what the authors really mean is that the marker objects exist at points A, B, and C, then why is there any need to “do the trig” or even make a map? The cache simply exists within the area defined by points A, B, C. On the ground these markers (an irregular rock, a bush, a tree stump) are not going to be more than a few inches to feet apart… a small area to search. And if the bird has an existing image of the area that includes the position of the buried seeds – easy, peasy!

Does the bird actually need an accurate map based on distances and angles to find seeds, when it has established an enormous number (5,000 – 20,000) caches, or will a few visual landmarks get it “close enough” to rediscover a sufficient percentage of them to provide for survival? Does it actually “find” each and every one of the 100,000 seeds? (I’d like to see proof!) What about the ones that other animals discover and eat? What about those displaced by rain, snow, wind, erosion, tree limbs or whole trees falling down; leaf litter is hardly a “permanent” material! What happens when one or more markers and the seed location are buried under snow? How is that explained? 

To see what is involved in mapping go to: http://www.icsm.gov.au/mapping/surveying2.html

However they do it, when the snow falls and it’s time to eat, (they don’t eat during the rest of the year?) they’ll land at a site. “They will perch on a tree,” says Tomback, “on a low branch, [then light onto the ground, where] they pause, look around a bit, and they start digging, and in a few cases I’ll see them move slightly to the right or to the left and then come up again (??)”

She’s convinced that they’re remembering markers from summer or fall and using them to point to the X spot—and, “Lo and behold, these birds come up with their cracked seeds,” she says. “And it’s really pretty astounding.”

In the 1970s, Stephen Vander Wall ran a tricky little experiment. He shifted the markers at certain sites, so that instead of pointing to where the seeds actually were, they now pointed to where the seeds were not. OMG!

And the birds, as you’d expect if they were triangulating, went to the wrong place. Note that this “experiment” was not conducted in the wild, but in artificially constructed conditions controlled by the “researchers”… who don’t understand triangulation…

I think what they are thinking of is trisecting a triangle.

But at sites where he left the markers untouched, the birds got it right. That’s a clue that each of these birds has thousands of marker-specific snapshots in their heads that they use for months and months. When the spring comes and the birds have their babies, they continue to visit old sites to gather seeds until their chicks fledge. A “photographic” image (like the images recorded by the brain) includes the details needed for identification of what is within the frame of capture; the relationships between content details are “fixed” in the pattern. The bird does not need to “abstract” markers from the environment; everything is included in the image.

The mystery here, the deep mystery, is how do they manage to store so much data in their heads? I couldn’t possibly do what they do (I can’t even remember all ten digits in a phone number, so I’d be one very dead nutcracker in no time). Is their brain organized in some unique way? (!!!!!) 

Neurotypicals are perpetually amazed that other living things, which have been produced by the rigors of evolutionary selection over millions of years, could possibly possess functions and skills beyond those of an infantile domesticated social human.  

Is their brain plastic? Can it grow more neurons or more connections when it needs to? Chickadees are also food hiders, and they do grow bushier brains when they need to, expanding in the “remember this” season and contracting afterward. Do Clark’s nutcrackers do that? We don’t know.

Whatever it is they do, I want what they’ve got.

_____________________________________________________

 

 

 

When you finally realize “you’ve been had”

I remember exactly the moment when I realized that “I’d been had” – meaning, that the “story” I’d been told all my life about who I was and where I fit in the human universe was a lie.


A young man I was dating had driven us to a resort in the mountains and we were having lunch outside on a deck overlooking a stream; we were in the “getting to know you” stage and he was easy to talk to. Very bright – a medical student – open, warm and chatty (and extremely OCD I later discovered.) He came from a medical family and seemed to be happy following his father into medicine. I was undiagnosed (Asperger) at the time (mid-twenties) and very much enjoying my life (I thought).

As he revealed his “story” I began to think out loud about my family, my school experiences, and my relationships. The official story went like this:

My grades at school were great; teachers liked me, my parents were happy, and I got all kinds of special treatment. But… and then the truth was suddenly apparent.

The special treatment I supposedly got was actually punishment: I was not allowed to participate at my intellectual level in class and I was excluded from extracurricular activities: my social awkwardness was used to hold participation hostage until I somehow “reformed” myself. I was effectively “benched” for being ahead of the class; not allowed to answer questions in class, to talk about anything, really. I sat with my books and papers open to exercises that we wouldn’t get to for weeks, filling in answers and erasing them, just for something to do. I gradually drifted off to my own, far more interesting world.

Of course the other kids saw this “reward-punishment” charade as special attention, and as kids do, followed the example set for them and piled on the hostility, which I didn’t even recognize as bullying. 

At home a different dynamic played out, but effectively with the same result. My brother was six years older, my mother’s “baby” and he was allowed to avoid anything he didn’t want to do by “being ill.” He needed help and attention; I didn’t. The excuse for “abandoning” me became a broken record in my mother’s mouth: “You have everything; you’re smart and pretty and life is so easy for you. My childhood was horrible: you don’t know what suffering is. You’re strong – you don’t need anything from other people, especially me. Don’t be a greedy cry-baby.”

“Go away” was the message, year in and year out. So, I did. I didn’t start out strong, but I became strong, because it was necessary. Being strong has its own perils. So does being pretty – I was excluded from social circles and events because the other girls wanted the boys to themselves.

On that lovely summer day in the mountains, I finally “got it.” My mother and brother and teachers and bosses, and even a few friends, had tricked my honest and trusting Asperger brain by offering compliments that were meant to express criticism and rejection. I was naïve and stupid in social terms, and had no clue that this was “how the human universe works.”

You’re wonderful; we hate you.

I wonder how many Asperger females identify with this experience?

On the other hand, this hurtful realization was utterly necessary to creating a life outside the prison of social expectations. It wasn’t easy to say “no” to the established status quo, especially for a young woman seeking meaningful work and an independent life and being criticized constantly for outlandish “unladylike” ambition.

It’s not women’s lack of abilities that keeps us down; a social agenda remains today that undermines intellect by defining femininity itself as being a hyper-sexualized little girl; dumb, vulnerable and forever infantile. It’s very sad to see adult women, who are 30, 40, 50 years old, spending precious time and scarce money on diets, phony rejuvenation products, fashion fads, hideous make-up, false hair, false breasts, false butts, and childish obsessions with novel “social media trends” while competing with their own daughters, like pitiful clowns hoping for a scrap of attention.

Anti-female policies are built into the social system at all levels. It’s time for women to understand that “You’ve been had.”

How America Lost Its Mind / Atlantic Magazine – Rant by a Neurotypical Person

Sept. 2017, by Kurt Andersen

https://www.theatlantic.com/magazine/archive/2017/09/how-america-lost-its-mind/534231/

When did America become untethered from reality?

I first noticed our national lurch toward fantasy in 2004, after President George W. Bush’s political mastermind, Karl Rove, came up with the remarkable phrase reality-based community. People in “the reality-based community,” he told a reporter, “believe that solutions emerge from your judicious study of discernible reality … That’s not the way the world really works anymore.”

A year later, The Colbert Report went on the air. In the first few minutes of the first episode, Stephen Colbert, playing his right-wing-populist commentator character, performed a feature called “The Word.” His first selection: truthiness. “Now, I’m sure some of the ‘word police,’ the ‘wordinistas’ over at Webster’s, are gonna say, ‘Hey, that’s not a word!’ Well, anybody who knows me knows that I’m no fan of dictionaries or reference books. They’re elitist. Constantly telling us what is or isn’t true. Or what did or didn’t happen. Who’s Britannica to tell me the Panama Canal was finished in 1914? If I wanna say it happened in 1941, that’s my right. I don’t trust books—they’re all fact, no heart … Face it, folks, we are a divided nation … divided between those who think with their head and those who know with their heart … Because that’s where the truth comes from, ladies and gentlemen—the gut.”

Whoa, yes, I thought: exactly. America had changed since I was young, when truthiness and reality-based community wouldn’t have made any sense as jokes. For all the fun, and all the many salutary effects of the 1960s—the main decade of my childhood—I saw that those years had also been the big-bang moment for truthiness. And if the ’60s amounted to a national nervous breakdown, we are probably mistaken to consider ourselves over it. The 1960s were Hell for an Asperger: I was constantly berated and attacked for being a “Fact Nazi” by people who were truly manifesting a “neoteny psychosis”. 

Each of us is on a spectrum somewhere between the poles of rational and irrational.

OMG! This guy is nuts; guilty of the neurotypical nonsense he’s complaining about! Can we PLEASE stop using “spectrum” to “mush together” mental processes (and everything else) into an undifferentiated wad of goo that somehow spans the gulf between the imaginary “polarized, black and white” poles of neurotypical stupidity?

We all have hunches we can’t prove and superstitions that make no sense. Some of my best friends are very religious, and others believe in dubious conspiracy theories. What’s problematic is going overboard—letting the subjective entirely override the objective; thinking and acting as if opinions and feelings are just as true as facts. The American experiment, the original embodiment of the great Enlightenment idea of intellectual freedom, whereby every individual is welcome to believe anything she wishes, has metastasized out of control. OMG! What an idiotic neurotypical “interpretation” of “Enlightened” intellectual freedom. 

From the start, our ultra-individualism (this did not exist in foundational colonies, which were the opposite: conformist to narrow religious dogma to the extreme) was attached to epic dreams, sometimes epic fantasies—every American one of God’s chosen people building a custom-made utopia, all of us free to reinvent ourselves by imagination and will. In America nowadays, those more exciting parts of the Enlightenment idea have swamped the sober, rational, empirical parts. Little by little for centuries, then more and more and faster and faster during the past half century, we Americans have given ourselves over to all kinds of magical thinking, anything-goes relativism, and belief in fanciful explanation—small and large fantasies that console or thrill or terrify us. And most of us haven’t realized how far-reaching our strange new normal has become. (OMG! What a garbled string of “factoids” strung together as nonsense. America was founded by “magical thinkers” – highly religious crackpots drummed out of Europe by people fed up with their insane hatred of happiness as a worthy experience. The “rational” element was always a finite minority of self-interested “gentlemen” who wanted the riches and rights reserved for the Aristocracy to be available to THEIR CLASS.)

Much more than the other billion or so people in the developed world, we Americans believe—really believe—in the supernatural and the miraculous, in Satan on Earth, in reports of recent trips to and from heaven, and in a story of life’s instantaneous creation several thousand years ago.


We believe that the government and its co-conspirators are hiding all sorts of monstrous and shocking truths from us, concerning assassinations, extraterrestrials, the genesis of AIDS, the 9/11 attacks, the dangers of vaccines, and so much more. And this was all true before we became familiar with the terms post-factual and post-truth, before we elected a president with an astoundingly open mind about conspiracy theories, what’s true and what’s false, the nature of reality. We have passed through the looking glass and down the rabbit hole. America has mutated into Fantasyland.

How widespread is this promiscuous devotion to the untrue? How many Americans now inhabit alternate realities? Any given survey of beliefs is only a sketch of what people in general really think. But reams of survey research from the past 20 years reveal a rough, useful census of American credulity and delusion. By my reckoning, the solidly reality-based are a minority, maybe a third of us but almost certainly fewer than half. (Wildly optimistic; and PLEASE don’t include yourself in the reality-based minority. LOL)

Only a third of us, for instance, don’t believe that the tale of creation in Genesis is the word of God. Only a third strongly disbelieve in telepathy and ghosts. Two-thirds of Americans believe that “angels and demons are active in the world.” More than half say they’re absolutely certain heaven exists, and just as many are sure of the existence of a personal God—not a vague force or universal spirit or higher power, but some guy. A third of us believe not only that global warming is no big deal but that it’s a hoax perpetrated by scientists, the government, and journalists. A third believe that our earliest ancestors were humans just like us; that the government has, in league with the pharmaceutical industry, hidden evidence of natural cancer cures; that extraterrestrials have visited or are visiting Earth. Almost a quarter believe that vaccines cause autism, and that Donald Trump won the popular vote in 2016. A quarter believe that our previous president maybe or definitely was (or is?) the anti-Christ. According to a survey by Public Policy Polling, 15 percent believe that the “media or the government adds secret mind-controlling technology to television broadcast signals,” and another 15 percent think that’s possible. A quarter of Americans believe in witches. Remarkably, the same fraction, or maybe less, believes that the Bible consists mainly of legends and fables—the same proportion that believes U.S. officials were complicit in the 9/11 attacks.

When I say that a third believe X and a quarter believe Y, it’s important to understand that those are different thirds and quarters of the population. Of course, various fantasy constituencies overlap and feed one another—for instance, belief in extraterrestrial visitation and abduction can lead to belief in vast government cover-ups, which can lead to belief in still more wide-ranging plots and cabals, which can jibe with a belief in an impending Armageddon.

None of this “listing of crazy beliefs” cancels out (by the neurotypical “matter-antimatter” principle of magic word opposition) or precludes ACTUAL conspiracies, predation, cover-ups, exploitation or misuse of power by corporations and lobbyists, government agencies, special interests, the “Religion Industry” and political parties.

Why are we like this?

The short answer is because we’re Americans—because being American means we can believe anything we want; that our beliefs are equal or superior to anyone else’s, experts be damned. Once people commit to that approach, the world turns inside out, and no cause-and-effect connection is fixed. The credible becomes incredible and the incredible credible. Typical neurotypical defeatism when faced with a tough question, because “word magic” is the only option for problem-solving, and word magic fails when confronting fact. 

The word mainstream has recently become a pejorative, shorthand for bias, lies, oppression by the elites. Yet the institutions and forces that once kept us from indulging the flagrantly untrue or absurd—media, academia, government, corporate America, professional associations, respectable opinion in the aggregate—have enabled and encouraged every species of fantasy over the past few decades. How naive! It was these very institutions that “lied about reality” (everything is perfect; trust us) while specializing in unethical and immoral behavior at all levels of power, within American government, and in foreign policy.   

A senior physician at one of America’s most prestigious university hospitals promotes “miracle cures” on his daily TV show. (The medical industry has always done this) Cable channels air documentaries treating mermaids, monsters, ghosts, and angels as real. When a political-science professor attacks the idea “that there is some ‘public’ that shares a notion of reality, a concept of reason, and a set of criteria by which claims to reason and rationality are judged,” colleagues just nod and grant tenure. The old fringes have been folded into the new center. The irrational has become respectable and often unstoppable. This is the normal neurotypical condition, and has been, for thousands of years. 

Our whole social environment and each of its overlapping parts (the delusion of “parts” again, instead of integrated systems of activity)—cultural, religious, political, intellectual, psychological—have become conducive to spectacular fallacy and truthiness and make-believe. There are many slippery slopes, leading in various directions to other exciting nonsense. During the past several decades, those naturally slippery slopes have been turned into a colossal and permanent complex of interconnected, crisscrossing bobsled tracks, which Donald Trump slid down right into the White House. Oh please! How naïve: this is what passes for analysis? Americans traditionally resort to the knee-jerk superstition that “evil” is an eruption of “chaos” into a perfectly organized neurotypical universe, the existence of which is a fantastical irrational construction; a pathetic denial of insanity within.

American moxie has always come in two types. We have our wilder, faster, looser side: We’re overexcited gamblers with a weakness for stories too good to be true. But we also have the virtues embodied by the Puritans and their secular descendants: steadiness, hard work, frugality, sobriety, and common sense. (And an arrogant, ugly do-gooder, busybody, know-it-all obsession with abusing other humans.)  A propensity to dream impossible dreams is like other powerful tendencies—okay when kept in check. For most of our history, the impulses existed in a rough balance, a dynamic equilibrium between fantasy and reality, mania and moderation, credulity and skepticism. Total fantasy: and note the continuing limitation of neurotypical addiction to polarized thinking: Either / or behavior; a tug of war between the devil and angel in your soul; black or white; one extreme or the other; yatta yatta! It’s just plain infantile… 

The great unbalancing and descent into full Fantasyland was the product of two momentous changes. The first was a profound shift in thinking that swelled up in the ’60s; since then, Americans have had a new rule written into their mental operating systems: Do your own thing, find your own reality, it’s all relative. A blossoming of neoteny in Americans.

The second change was the onset of the new era of information. Digital technology empowers real-seeming fictions of the ideological and religious and scientific kinds. Among the web’s 1 billion sites, believers in anything and everything can find thousands of fellow fantasists, with collages of facts and “facts” to support them. Before the internet, crackpots were mostly isolated, and surely had a harder time remaining convinced of their alternate realities. (Single crackpots are rarely effective; it’s those who gather together in the thousands – or millions – who are dangerous). Opinions are all over the airwaves and the web, just like actual news. Now all of the fantasies look real.

Our shocking Trump moment is just the ultimate expression of mind-sets that have made America exceptional for its entire history. Hmmm… magical thinking; attributing an election outcome to some “disturbance in the ether” caused by “ghostly persons” that reach out from the past to “f— things up”. It couldn’t be that elections are simply a mirage? A fool’s drama of people casting meaningless ballots in a charade of democracy, which is, in real terms, a “slug fest” for power and control by opposing elites?

Today, each of us is freer than ever to custom-make reality, to believe whatever and pretend to be whoever we wish. (From what external entity does this mysterious permission arise?) Which makes all the lines between actual and fictional blur and disappear more easily. Truth in general becomes flexible, personal, subjective. And we like this new ultra-freedom, insist on it, even as we fear and loathe the ways so many of our wrongheaded fellow Americans use it. Us and them duality again; I’m right-headed, you are wrong-headed.

Treating real life as fantasy and vice versa, and taking preposterous ideas seriously, is not unique to Americans. But we are the global crucible and epicenter. (Rather arrogant assumption. We always have to be the Best!) We invented the fantasy-industrial complex; almost nowhere outside poor or otherwise miserable countries are flamboyant supernatural beliefs so central to the identities of so many people. This is American exceptionalism in the 21st century. The country has always been a one-of-a-kind place. But our singularity is different now. We’re still rich and free, still more influential and powerful than any other nation, practically a synonym for developed country. But our drift toward credulity, toward doing our own thing, toward denying facts and having an altogether uncertain grip on reality, has overwhelmed our other exceptional national traits and turned us into a less developed country. (Neurotypical Blah, blah, blah! This guy is certainly in love with meaningless verbiage!) 

People see our shocking Trump moment—this post-truth, “alternative facts” moment—as some inexplicable and crazy new American phenomenon. (No, only deluded control freaks, who think that their version of how reality “ought to be” matches the supernatural template of “absolute best version” of reality, that they thoroughly believe exists, but has never existed, except in their imagination, would think this way.) But what’s happening is just the ultimate extrapolation and expression of mind-sets that have made America exceptional for its entire history.

America was created by true believers and passionate dreamers, and by hucksters and their suckers, which made America successful—but also by a people uniquely susceptible to fantasy, as epitomized by everything from Salem’s hunting witches to Joseph Smith’s creating Mormonism, from P. T. Barnum to speaking in tongues, from Hollywood to Scientology to conspiracy theories, from Walt Disney to Billy Graham to Ronald Reagan to Oprah Winfrey to Trump. In other words: Mix epic individualism with extreme religion; mix show business with everything else; let all that ferment for a few centuries; then run it through the anything-goes ’60s and the internet age. (What an idiotic string of nonsense) The result is the America we inhabit today, with reality and fantasy weirdly and dangerously blurred and commingled.

The 1960s and the Beginning of the End of Reason

I don’t regret or disapprove of many of the ways the ’60s permanently reordered American society and culture. It’s just that along with the familiar benefits, there have been unreckoned costs.

In 1962, people started referring to “hippies,” the Beatles had their first hit, Ken Kesey published One Flew Over the Cuckoo’s Nest, and the Harvard psychology lecturer Timothy Leary was handing out psilocybin and LSD to grad students. And three hours south of San Francisco, on the heavenly stretch of coastal cliffs known as Big Sur, a pair of young Stanford psychology graduates founded a school and think tank they named after a small American Indian tribe that had lived on the grounds long before. “In 1968,” one of its founding figures recalled four decades later,

“Esalen was the center of the cyclone of the youth rebellion. It was one of the central places, like Mecca for the Islamic culture. (YIKES! How typically arrogant!) Esalen was a pilgrimage center for hundreds and thousands of youth interested in some sense of transcendence, breakthrough consciousness, LSD, the sexual revolution, encounter, being sensitive, finding your body, yoga—all of these things were at first filtered into the culture through Esalen. By 1966, ’67, and ’68, Esalen was making a world impact.”

This is not overstatement. Essentially everything that became known as New Age was invented, developed, or popularized at the Esalen Institute. Esalen is a mother church of a new American religion for people who think they don’t like churches or religions but who still want to believe in the supernatural. The institute wholly reinvented psychology, medicine, and philosophy, driven by a suspicion of science and reason and an embrace of magical thinking (also: massage, hot baths, sex, and sex in hot baths). It was a headquarters for a new religion of no religion, and for “science” containing next to no science. The idea was to be radically tolerant of therapeutic approaches and understandings of reality, especially if they came from Asian traditions or from American Indian or other shamanistic traditions. Invisible energies, past lives, astral projection, whatever—the more exotic and wondrous and unfalsifiable, the better. REALLY? 

Not long before Esalen was founded, one of its co-founders, Dick Price, had suffered a mental breakdown and been involuntarily committed to a private psychiatric hospital for a year. His new institute embraced the radical notion that psychosis and other mental illnesses were labels imposed by the straight world on eccentrics and visionaries, that they were primarily tools of coercion and control. This was the big idea behind One Flew Over the Cuckoo’s Nest, of course. And within the psychiatric profession itself this idea had two influential proponents, who each published unorthodox manifestos at the beginning of the decade—R. D. Laing (The Divided Self) and Thomas Szasz (The Myth of Mental Illness). “Madness,” Laing wrote when Esalen was new, “is potentially liberation and renewal.” Esalen’s founders were big Laing fans, and the institute became a hotbed for the idea that insanity was just an alternative way of perceiving reality. Again, this notion of “listing” fragmental factoids as a way of “canceling out by magic word” any possibility of fact, truth, significant connection, importance, results, consequences, or understandable outcomes in human affairs, which might follow logical paths or patterns, demonstrates the neurotypical inability to “think” beyond infantile polar opposition of good and evil as presented in Sunday School lessons. 

These influential critiques helped make popular and respectable the idea that much of science is a sinister scheme concocted by a despotic conspiracy to oppress people. Mental illness, both Szasz and Laing said, is “a theory not a fact.” This is now the universal bottom-line argument for anyone—from creationists to climate-change deniers to anti-vaccine hysterics—who prefers to disregard science in favor of his own beliefs. How infantile: how Sunday School! Judgements that other people are “mistaken” without any acknowledgement that “my illusions and delusions” are contributing to “the mess”, or that I can possibly be the object of my irrational “superiority”. 

You know how young people always think the universe revolves around them, as if they’re the only ones who really get it? And how before their frontal lobes, the neural seat of reason and rationality, are fully wired, they can be especially prone to fantasy? (Dumb inaccurate pop-science clichés) In the ’60s, the universe cooperated: It did seem to revolve around young people, affirming their adolescent self-regard, making their fantasies of importance feel real and their fantasies of instant transformation and revolution feel plausible. Practically overnight, America turned its full attention to the young and everything they believed and imagined and wished.

If 1962 was when the decade really got going, 1969 was the year the new doctrines and their gravity were definitively cataloged by the grown-ups. Reason and rationality were over. The countercultural effusions were freaking out the old guard, including religious people who couldn’t quite see that yet another Great Awakening was under way in America, heaving up a new religion of believers who “have no option but to follow the road until they reach the Holy City … that lies beyond the technocracy … the New Jerusalem.” That line is from The Making of a Counter Culture: Reflections on the Technocratic Society and Its Youthful Opposition, published three weeks after Woodstock, in the summer of 1969. Its author was Theodore Roszak, age 35, a Bay Area professor who thereby coined the word counterculture. Roszak spends 270 pages glorying in the younger generation’s “brave” rejection of expertise and “all that our culture values as ‘reason’ and ‘reality.’ ” (Note the scare quotes.) So-called experts, after all, are “on the payroll of the state and/or corporate structure.” A chapter called “The Myth of Objective Consciousness” argues that science is really just a state religion. To create “a new culture in which the non-intellective capacities … become the arbiters of the good [and] the true,” he writes, “nothing less is required than the subversion of the scientific world view, with its entrenched commitment to an egocentric and cerebral mode of consciousness.” He welcomes the “radical rejection of science and technological values.” (Note the belief in the POWER OF WORDS to form, change and dictate “reality”. This irrational delusion is due to a neurotypical dependence on the principles of magic.)

As 1969 turned to 1970, a 41-year-old Yale Law School professor was finishing his book about the new youth counterculture. Charles Reich was a former Supreme Court clerk now tenured at one of ultra-rationalism’s American headquarters. But hanging with the young people had led him to a midlife epiphany and apostasy. In 1966, he had started teaching an undergraduate seminar called “The Individual in America,” for which he assigned fiction by Kesey and Norman Mailer. He decided to spend the next summer, the Summer of Love, in Berkeley. On the road back to New Haven, he had his Pauline conversion to the kids’ values. His class at Yale became hugely popular; at its peak, 600 students were enrolled. In 1970, The Greening of America became The New York Times’ best-selling book (as well as a much-read 70-page New Yorker excerpt), and remained on the list for most of a year.

The previous two paragraphs, and actually the rest of the article:

Social blah, blah, blah which never interested the average American, but was epidemic among upper- and upper-middle-class Americans, fixated on their pretensions to superior intellectual and social status. There was widespread denigration of “blue collar” working Americans by these classes at the time; it continues today.

Messages from the Unconscious / Yes, it happens

“There is no way that as a human being, you won’t disturb the Earth.”

I have related in previous posts how my “mind works” (and everyone’s does, actually), but you have to listen for the products of the unconscious in order to make them conscious. I enjoy sleep; it’s an active state of rest, refreshment and dreams. Powerful thinking goes on; a type of thinking much older than conscious verbal thought. A direct link to collective memory – evolutionary memory. A vast reservoir that is encoded along with all the myriad instructions that build a human body within a woman’s body – and after birth must be nurtured in order to grow the infant into an adult form. We call the code DNA, but then ignore that the code is useless unless it finds healthy expression as a living creature, which is not an automatic, guaranteed outcome.

Traditional so-called primitive cultures keep the unconscious conduit open: sometimes through initiation rituals and physical breakdown of the conscious / unconscious barrier, or by use of psychoactive concoctions or physical stress; sometimes through dream imagery interpretation and the activities of shamans, who act as both guides and “librarians” – individuals who, thanks to their personality and brain type, can search the collective memory banks to “correct” whatever ails you or the community. The source of “trouble” is held to be a deviation from paths and patterns worked out by natural processes – often due to intentional human interference.

If I’m lucky, a phrase or idea may linger from the night’s brain activity: it may become a stimulus for word-based thinking, as if a basin of water had been left to fill overnight, so that on waking a particular phrase allows the stored-up potential of unconscious activity to be free to “do work” in the waking world. Geologic processes and events sometimes supply the images for this dynamic relationship between what modern social people believe to be a “good” realm of conscious social word-thought and the “evil” realm of unconscious “trash and sewerage” – a tragic religious-psychiatric condemnation that has been imposed on a healthy system of human sensory experience, visual processing and creativity directed toward a goal of survival and reproduction of our specific “version” of animal life.

Unconscious processing is a powerful legacy of animal evolution that we have relegated to a sewer system, a septic tank, a dark region of monsters, dreadful impulses and dangers.

Myths from many cultures include Hell, the underworld, limbo or an afterlife in their scheme of things; some describe “that place” as a source of knowledge that is perilous to enter, but worth it for what can be found there. The unconscious experience is “outside time” and therefore seen as a place of reliable prophecy; an attractive lure to those modern humans who desire to manipulate, dominate and control man and nature – hence the relentless and blinding quest for “magic” as the means to “cheat” the Laws of Nature. But it is the unconscious content of the human animal that composes the owner’s manual for “How to Operate and Maintain a Bipedal Ape”.

We can see that during the long course of the “evolution” of bipedal apes, what we call “unconscious processes” – mainly visual thinking, sensory thinking, acquisition of energy and interaction with the environment, and the task of growing and maintaining an animal body – were simply taken care of by the brain – and still are. Our pejorative use of the words “instinct” and “instinctual” for knowledge and functions as something inferior, which “we” have left behind, is a nonsensical conclusion; an illusion produced by the supposedly “superior” (and demonstrably less intelligent) “conscious verbal function” that is embraced, cultivated and worshipped by modern humans as a “God”.

Why would I state that the “unconscious” animal brain is more intelligent than the modern verbal function as a guidance system for human survival?

As an Asperger who relies on the unconscious as the “go-to” source for patterns, systems, connections, networks and explanations for “how the universe works,” it is obvious that nature itself provides the “master templates” for creating and implementing technological invention and innovation. Homo sapiens has “discovered” these templates (Laws of Physics) by means of mathematics, and the nature of these “languages of physical reality” remains a bit mysterious.

The problem arises with the assumption that the manifestation of technical ideas and products, as solutions to the painful drudgery of manual labor, confers intelligence of a truly different type: Wisdom – the ability to “forecast” consequences that potentially result from one’s actions, and the ability to modify present action accordingly. This is an almost impossible task for the human brain; it’s why we invent or seek out Big Parental Figures, employ statistical magic and other contrived nonsense, and “divine the future” in archaic religious texts – all simultaneously, and with no regard for common sense; we supply our own superstitious rules and clumsy structures to compensate for our utter lack of critical foresight and judgement.

Several notions help clarify this predicament.

1. “Nature” has done the work of “foresight” for us: we have access to knowledge stored in “instinct / unconscious content” and in the conscious apprehension of “how the environment works”, through trial-and-error manipulation of real objects and materials and, more recently, by means of “abstract codes” and computing power, by which we believe we can decipher “the magic universe” of human childhood.

That is, foresight is not “located” in seeing the “future” (which doesn’t exist in concrete form) but in understanding the “eternal present”. These patterns are not mystical, magical or supernatural.

2. The deceptive mirage of “word thinking” goes unrecognized. The lure of being freed from the Laws of Nature is great! Word thinking is not “tied to” actual reality – its usefulness and value lie in making propositions that owe no allegiance to the limits and boundaries of the “real world”. Word language CAN lead to rapid communication of information and dissemination of useful concepts, but! There is no guarantee that this “information” is accurate – most ideas are created to provide motivation and justification for the time and energy expended in the pursuit of inflicting injury and suffering on other humans, and the control/exploitation of resources, plants, animals and other life forms. This activity will never produce A Happy Ending.

In fact, word thinking leads to the illusion of the reality and primacy of a supernatural domain, in which magic is the operating system. Predatory humans give themselves permission to dominate the environment via verbal constructs, whose origin is assigned to, and justified by, this imaginary supernatural realm. Social dominance  “for personal gain and pleasure” does not correspond to the “dominant role” in nature, which comes with great risk and responsibility and heavy consequences for the dominant individual. In humans, the goal in attaining dominance is a “free ride” on the backs of inferior beings. 

3. Oh boy! Screw nature: I’m in control! Bring on the spells, rituals, magic symbols, secret handshakes, rattles and drums; the abject obedience of “lesser beings” to my dictates. This is where social humans are today: technically powerful, abysmally ignorant of the consequences of our actions. We have cut ourselves off from access to the user’s manual that is included free with every brain.

4. Instead, we have created a delusional and self-destructive hatred and fear of a vital evolutionary legacy; unconscious thinking has been selected and slandered by certain predatory humans as the “cause” of pathologic behavior: mental illness, violence, depravity, abuse, “disobedience to social control” and to the “supernatural regime” of human social reality, when in fact, much of human “bad behavior” can be traced directly to the steeply hierarchical structures that dominate modern humans. From the top down (from tyrants, Pharaohs and other psycho-sociopaths, to the ranks of those who are their “prey”) it is the distortion of manmade supernatural “order” as the original and absolute truth of human existence that prevents the healthy growth and sanity of actual human beings. Much behavior that is destructive, abusive, cruel and irrational on the part of Homo sapiens is inevitable, given the abnormal, destructive and “killer” stresses built into modern social environments.