Jodorowsky Interview / ASD types listen to this! Updated

“Leap into the void”

This type of perception may seem to come out of left field for Asperger types, but listen with your unconscious sensory abilities and you may be surprised. Programming doesn’t only “happen” to NTs; it definitely happens to us. We are defined by the DSM-5 symptom list, and by all the other BS lists, surveys, studies, anecdotes, and media descriptions being constructed; these “stories” dictate who we are supposed to be in the family and in society. We cannot help but have this propaganda influence us as infants and children. What is never revealed is that we can de-program ourselves by using our Asperger traits, talents, and intellect to create an authentic human being. We don’t have to limit ourselves to conform to the NT “psycho-social” prison of labels that condemns us.

“Scientific thought and the miraculous unconscious are two waves in the same ocean.”

Alejandro Jodorowsky

I’m not promoting “specific” opinions or conclusions from the Jodorowsky interviews (there are many to enjoy), but he is a great relief from the dogmatic and “ungenerous” beliefs and thinking in American psychology. His point about programming is especially important: Asperger/Autistic types must “heal” ourselves by creating and claiming our “authentic way of being” in the world.

His comments about family secrets and autism are important: the family tree “brands us, possesses us, like voodoo.” Is this not another way to look at epigenetics?

“The child’s mind is not the type of mind we adults possess. If we call our type of mind the conscious type, that of the child is an unconscious mind. Now an unconscious mind does not mean an inferior mind. An unconscious mind can be full of intelligence. One will find this type of intelligence in every being, and every insect has it.” Maria Montessori


Simple Breakdown / How the Brain Processes Information

https://www.labs.hpe.com/next-next/brain

In 2008, the U.S. Defense Advanced Research Projects Agency issued a challenge to researchers: Create a sophisticated, shoebox-size system that incorporates billions of transistors, weighs about three pounds, and requires a fraction of the energy needed by current computers. Basically, a brain in a box.

Although neuroscience has made important strides in recent years, the inner workings of the brain are still largely a mystery. “So little is really understood about the hardware of the brain—the neurons and their interconnections, and the algorithms that run on top of them—that today, anyone who claims to have built ‘a brain-like computer’ is laughable,” says Stan Williams, a research fellow at Hewlett Packard Labs.

“Programs mirror human logic, but they don’t mirror intuitive thought.”

Rich Friedrich, Hewlett Packard Labs

A caveat from HP Labs (super website) regarding the analogy that the human brain is like a computer processor.

________________________________

We have to start somewhere!

eLearning Design and Development

By Christopher Pappas, November 11, 2016

The brain is often likened to a processor: a complex computing machine that takes raw data and turns it into thoughts, memories, and cognitions. However, it has its limits, and Instructional Designers must know those boundaries before they can create meaningful eLearning courses. In this article, I’ll explore how the brain works, from its basic biological and memory functions to its ability to process information. I’ll also share 3 tips to help you create an eLearning course design that facilitates knowledge absorption and assimilation.

Information Processing Basics: A Guide For Instructional Designers

The brain is a wondrous thing. It transforms letters, numbers, and images into meaningful data that governs every aspect of our lives. Neural pathways spark and new ideas meet with the old to form complex schematic structures. But one of the most miraculous tasks it tackles is learning. As eLearning professionals, we must understand how information processing takes place in order to create effective eLearning experiences.

Brain Biology / The brain consists of many different structures, and the cortex encases all of them. The cortex is the outermost shell of the brain that takes care of complex thinking abilities such as memory, language, spatial awareness, and even personality traits. The inner regions of the brain control the most primitive aspects of human nature, such as our base impulses, fears, emotions, and our subconscious. The brain also houses a “subcortex,” which connects directly to the cortex and is thus able to transmit and process information. (A cliché description of “primitive, subconscious”)

The Human Memory

Now that we’ve briefly explored the physical makeup of the brain, let’s delve into one of its most vital functions: memory. After all, memory is crucial for eLearning. If online learners aren’t able to remember the information, then all is for naught. We usually don’t give memory much attention, as it’s an automatic process. Every event, no matter how small, passes through the gates of our memory without us even noticing. However, most of the occurrences are just passing through and never take up permanent residence. There are three types of memory that Instructional Designers should be aware of:

1. Sensory Memory 

When our senses are triggered by a stimulus, our brains briefly store the information. For example, we smell freshly baked bread and can only remember its scent for a few seconds before it vanishes. Even though the bread is no longer in front of us, our minds still hold onto its impression for a short period. The brain then has the option to process it through the memory banks or forget about it. In eLearning, sensory memory is triggered by a visually compelling image, background music, or any other element that engages the senses.

2. Short-Term Memory

Short-term memory falls under the purview of working memory, which temporarily stores information when it is triggered by stimuli. Short-term memory can only hold a maximum of 7 items at one time, and it has a time limit, usually between 10 seconds and a minute.

3. Long-Term Memory

After passing through the short-term memory, relevant information is moved to long-term storage. At this stage, the brain is less likely to forget important details. However, even the long-term memory can diminish over time if we don’t refresh our knowledge.
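Since the article insists on the brain-as-processor analogy anyway, the three stores above reduce to a toy Python sketch. The capacities and time limits (a few seconds of sensory persistence, a maximum of 7 items, 10-60 seconds of short-term retention) come straight from the text; the class layout and the promotion rules are my own illustrative assumptions, not a real cognitive model.

    import time
    from collections import deque

    SENSORY_LIFETIME_S = 3        # "a few seconds" before the scent of bread vanishes
    SHORT_TERM_LIFETIME_S = 60    # "between 10 seconds and a minute"
    SHORT_TERM_CAPACITY = 7       # "a maximum of 7 items at one time"

    class MemorySystem:
        def __init__(self):
            self.sensory = {}                                    # stimulus -> arrival time
            self.short_term = deque(maxlen=SHORT_TERM_CAPACITY)  # item 8 pushes out item 1
            self.long_term = set()

        def sense(self, stimulus):
            # Every stimulus lands briefly in sensory memory.
            self.sensory[stimulus] = time.monotonic()

        def attend(self, stimulus):
            # Attended stimuli are promoted to short-term memory before they fade.
            arrived = self.sensory.pop(stimulus, None)
            if arrived is not None and time.monotonic() - arrived < SENSORY_LIFETIME_S:
                self.short_term.append((stimulus, time.monotonic()))

        def rehearse(self, stimulus):
            # Reinforced items move to long-term storage; the rest simply expire.
            now = time.monotonic()
            for item, stored in list(self.short_term):
                if item == stimulus and now - stored < SHORT_TERM_LIFETIME_S:
                    self.long_term.add(item)

    m = MemorySystem()
    m.sense("smell of fresh bread")
    m.attend("smell of fresh bread")     # caught within the few-second window
    m.rehearse("smell of fresh bread")   # reinforced, so it reaches long-term storage
    print(m.long_term)                   # -> {'smell of fresh bread'}

Even this crude sketch makes the article’s point visible: anything not attended to and rehearsed within its time window simply never reaches long-term storage.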

Information Processing Stages

There are a number of Information Processing theories and models. However, many suggest that the learning process involves three key stages:

Stage 1: Input / The brain is exposed to a stimulus, at which point it analyzes and evaluates the information. For example, the online learner reads a passage and determines whether it’s worth remembering.

Stage 2: Storage / Our brains store the information for later use, encode it, and add it to our mental schema. If the information is not reinforced, the brain may simply forget it over time.

Stage 3: Output / The brain decides what it’s going to do with the information and how it will react to the stimulus. For example, after reading the passage, the individual uses the information they learned to overcome a challenge.
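The three stages reduce to an equally small pipeline. The stage names are the article’s; the relevance test and the reinforcement counter are hypothetical stand-ins for “worth remembering” and “reinforced over time.”

    def input_stage(passage, is_relevant):
        # Stage 1: evaluate the stimulus and decide whether it is worth keeping.
        return passage if is_relevant(passage) else None

    def storage_stage(schema, item):
        # Stage 2: encode the item into the learner's schema and count reinforcements.
        if item is not None:
            schema[item] = schema.get(item, 0) + 1
        return schema

    def output_stage(schema, challenge):
        # Stage 3: act on the stored information, if it was ever encoded.
        return "recalled and applied" if schema.get(challenge) else "forgotten"

    schema = {}
    item = input_stage("safety procedure", lambda p: "procedure" in p)
    schema = storage_stage(schema, item)
    print(output_stage(schema, "safety procedure"))   # -> recalled and applied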

Simple! The question is: How do specific human brains handle these processing tasks? Psychologists would have us believe that there is only ONE way this ought to be accomplished: their way. Bullshit.


Philosophy of Childhood / Stanford

I’m presenting this as a review of where many of our ideas about children, childhood, and “who has rights and who doesn’t” originate: in human thought and ideas (brains). That is, they arise from poor reasoning, prejudice, personal bias, and thoughtful consideration; from accurate and faulty observation, careless assumptions, and even (rarely) clever insight; not from universal law, a pre-existing supernatural realm, or a realm of magical authority.

What we see, again, is the lack of coherence between modern Western social-psychological-cultural theory and biological reality.

https://plato.stanford.edu/entries/childhood/

From: Stanford Encyclopedia of Philosophy

The Philosophy of Childhood

The philosophy of childhood has recently come to be recognized as an area of inquiry analogous to the philosophy of science, the philosophy of history, the philosophy of religion, and the many other “philosophy of” subjects that are already considered legitimate areas of philosophical study. In addition, philosophical study of related topics (such as parental rights, duties and responsibilities) has flourished in recent years. The philosophy of childhood takes up philosophically interesting questions about childhood, changing conceptions over time about childhood and attitudes toward children; theories of cognitive and moral development; children’s interests and children’s rights, the goods of childhood; children and autonomy; the moral status of children and the place of children in society. As an academic subject, the philosophy of childhood has sometimes been included within the philosophy of education (e.g., Siegel, 2009). Recently, however, philosophers have begun to offer college and university courses specifically in the philosophy of childhood. And philosophical literature on childhood, parenting and families is increasing in both quantity and quality.

1. What is a Child?

Almost single-handedly, Philippe Ariès, in his influential book, Centuries of Childhood (Ariès, 1962), made the reading public aware that conceptions of childhood have varied across the centuries. The very notion of a child, we now realize, is both historically and culturally conditioned. But exactly how the conception of childhood has changed historically and how conceptions differ across cultures is a matter of scholarly controversy and philosophical interest (see Kennedy, 2006). Thus Ariès argued, partly on the evidence of depictions of infants in medieval art, that the medievals thought of children as simply “little adults.” Shulamith Shahar (1990), by contrast, finds evidence that some medieval thinkers understood childhood to be divided into fairly well-defined stages. And, whereas Piaget claims that his subjects, Swiss children in the first half of the 20th Century, were animistic in their thinking (Piaget, 1929), Margaret Mead (1967) presents evidence that Pacific island children were not.

One reason for being skeptical about any claim of radical discontinuity—at least in Western conceptions of childhood—arises from the fact that, even today, the dominant view of children embodies what we might call a broadly “Aristotelian conception” of childhood. According to Aristotle, there are four sorts of causality, one of which is Final Causality and another is Formal Causality. Aristotle thinks of the Final Cause of a living organism as the function that organism normally performs when it reaches maturity. He thinks of the Formal Cause of the organism as the form or structure it normally has in maturity, where that form or structure is thought to enable the organism to perform its functions well. According to this conception, a human child is an immature specimen of the organism type, human, which, by nature, has the potentiality to develop into a mature specimen with the structure, form, and function of a normal or standard adult.

Many adults today have this broadly Aristotelian conception of childhood without having actually read any of Aristotle. It informs their understanding of their own relationship toward the children around them. Thus they consider the fundamental responsibility they bear toward their children to be the obligation to provide the kind of supportive environment those children need to develop into normal adults, with the biological and psychological structures in place needed to perform the functions we assume that normal, standard adults can perform.

Two modifications of this Aristotelian conception have been particularly influential in the last century and a half. One is the 19th century idea that ontogeny recapitulates phylogeny (Gould, 1977), that is, that the development of an individual recapitulates the history and evolutionary development of the race, or species (Spock, 1968, 229). This idea is prominent in Freud (1950) and in the early writings of Jean Piaget (see, e.g. Piaget, 1933). Piaget, however, sought in his later writings to explain the phenomenon of recapitulation by appeal to general principles of structural change in cognitive development (see, e.g., Piaget, 1968, 27).

The other modification is the idea that development takes place in age-related stages of clearly identifiable structural change. This idea can be traced back to ancient thinkers, for example the Stoics (Turner and Matthews, 1998, 49). Stage theory is to be found in various medieval writers (Shahar, 1990, 21–31) and, in the modern period, most prominently in Jean-Jacques Rousseau’s highly influential work, Emile (1979). But it is Piaget who first developed a highly sophisticated version of stage theory and made it the dominant paradigm for conceiving childhood in the latter part of the 20th Century (see, e.g., Piaget, 1971).

Matthews (2008, 2009) argues that a Piagetian-type stage theory of development tends to support a “deficit conception” of childhood, according to which the nature of the child is understood primarily as a configuration of deficits—missing capacities that normal adults have but children lack. This conception, he argues, ignores or undervalues the fact that children are, for example, better able to learn a second language, or paint an aesthetically worthwhile picture, or conceive a philosophically interesting question, than those same children will likely be able to do as adults. Moreover, it restricts the range and value of relationships adults think they can have with their children.

Broadly Aristotelian conceptions of childhood can have two further problematic features. They may deflect attention away from thinking about children with disabilities in favour of theorizing solely about normally developing children (see Carlson 2010), and they may distract philosophers from attending to the goods of childhood when they think about the responsibilities adults have towards the children in their care, encouraging focus only on care required to ensure that children develop adult capacities.

How childhood is conceived is crucial for almost all the philosophically interesting questions about children. It is also crucial for questions about what should be the legal status of children in society, as well as for the study of children in psychology, anthropology, sociology, and many other fields.

2. Theories of Cognitive Development

Any well-worked out epistemology will provide at least the materials for a theory of cognitive development in childhood. Thus according to René Descartes a clear and distinct knowledge of the world can be constructed from resources innate to the human mind (Descartes, PW, 131). John Locke, by contrast, maintains that the human mind begins as a “white paper, void of all characters, without any ideas” (Locke, EHC, 121). On this view all the “materials of reason and knowledge” come from experience. Locke’s denial of the doctrine of innate ideas was, no doubt, directed specifically at Descartes and the Cartesians. But it also implies a rejection of the Platonic doctrine that learning is a recollection of previously known Forms. Few theorists of cognitive development today find either the extreme empiricism of Locke or the strong innatism of Plato or Descartes completely acceptable.

Behaviorism has offered recent theorists of cognitive development a way to be strongly empiricist without appealing to Locke’s inner theater of the mind. The behaviorist program was, however, dealt a major setback when Noam Chomsky, in his review (1959) of Skinner’s Verbal Behavior (1957), argued successfully that no purely behaviorist account of language-learning is possible. Chomsky’s alternative, a theory of Universal Grammar, which owes some of its inspiration to Plato and Descartes, has made the idea of innate language structures, and perhaps other cognitive structures as well, seem a viable alternative to a more purely empiricist conception of cognitive development.

It is, however, the work of Jean Piaget that has been most influential on the way psychologists, educators, and even philosophers have come to think about the cognitive development of children. Piaget’s early work, The Child’s Conception of the World (1929), makes especially clear how philosophically challenging the work of a developmental psychologist can be. Although his project is always to lay out identifiable stages in which children come to understand what, say, causality or thinking or whatever is, the intelligibility of his account presupposes that there are satisfactory responses to the philosophical quandaries that topics like causality, thinking, and life raise.

Take the concept of life. According to Piaget this concept is acquired in four stages (Piaget, 1929, Chapter 6):

  • First Stage: Life is assimilated to activity in general

  • Second Stage: Life is assimilated to movement

  • Third Stage: Life is assimilated to spontaneous movement

  • Fourth Stage: Life is restricted to animals and plants

These distinctions are suggestive, but they invite much more discussion than Piaget elicits from his child subjects. What is required for movement to be spontaneous? Is a bear alive during hibernation? We may suppose the Venus flytrap moves spontaneously. But does it really? What about other plants? And then there is the question of what Piaget can mean by calling the thinking of young children “animistic,” if, at their stage of cognitive development, their idea of life is simply “assimilated to activity in general.”

Donaldson (1978) offers a psychological critique of Piaget on cognitive development. A philosophical critique of Piaget’s work on cognitive development is to be found in Chapters 3 and 4 of Matthews (1994). Interesting post-Piagetian work in cognitive development includes Carey (1985), Wellman (1990), Flavell (1995), Subbotsky (1996), and Gelman (2003).

Recent psychological research on concept formation has suggested that children do not generally form concepts by learning necessary and sufficient conditions for their application, but rather by coming to use prototypical examples as reference guides. Thus a robin (rather, of course, than a penguin) might be the child’s prototype for ‘bird’. The child, like the adult, might then be credited with having the concept, bird, without the child’s ever being able to specify, successfully, necessary and sufficient conditions for something to count as a bird. This finding seems to have implications for the proper role and importance of conceptual analysis in philosophy. It is also a case in which we should let what we come to know about cognitive development in children help shape our epistemology, rather than counting on our antecedently formulated epistemology to shape our conception of cognitive development in children (see Rosch and Lloyd, 1978, and Gelman, 2003).

Some developmental psychologists have recently moved away from the idea that children are to be understood primarily as human beings who lack the capacities adults of their species normally have. This change is striking in, for example, the work of Alison Gopnik, who writes: “Children aren’t just defective adults, primitive grownups gradually attaining our perfection and complexity. Instead, children and adults are different forms of homo sapiens. They have very different, though equally complex and powerful, minds, brains, and forms of consciousness, designed to serve different evolutionary functions” (Gopnik, 2009, 9). Part of this new respect for the capacities of children rests on neuroscience and an increased appreciation for the complexity of the brains of infants and young children. Thus Gopnik writes: “Babies’ brains are actually more highly connected than adult brains; more neural pathways are available to babies than adults.” (11)

3. Theories of Moral Development

Many philosophers in the history of ethics have devoted serious attention to the issue of moral development. Thus Plato, for example, offers a model curriculum in his dialogue, Republic, aimed at developing virtue in rulers. Aristotle’s account of the logical structure of the virtues in his Nicomachean Ethics provides a scaffolding for understanding how moral development takes place. And the Stoics (Turner and Matthews, 1998, 45–64) devoted special attention to dynamics of moral development.

Among modern philosophers, it is again Rousseau (1979) who devotes the most attention to issues of development. He offers a sequence of five age-related stages through which a person must pass to reach moral maturity: (i) infancy (birth to age 2); (ii) the age of sensation (3 to 12); (iii) the age of ideas (13 to puberty); (iv) the age of sentiment (puberty to age 20); and (v) the age of marriage and social responsibility (age 21 on). Although he allows that an adult may effectively modify the behavior of children by explaining that bad actions are those that will bring punishment (90), he insists that genuinely moral reasoning will not be appreciated until the age of ideas, at 13 and older. In keeping with his stage theory of moral development he explicitly rejects Locke’s maxim, ‘Reason with children,’ (Locke, 1971) on the ground that attempting to reason with a child younger than thirteen years of age is developmentally inappropriate.

However, the cognitive theory of moral development formulated by Piaget in The Moral Judgment of the Child (1965) and the somewhat later theory of Lawrence Kohlberg (1981, 1984) are the ones that have had most influence on psychologists, educators, and even philosophers. Thus, for example, what John Rawls has to say about children in his classic work, A Theory of Justice (1971) rests heavily on the work of Piaget and Kohlberg.

Kohlberg presents a theory according to which morality develops in approximately six stages, though according to his research, few adults actually reach the fifth or sixth stages. In this respect Kohlberg’s theory departs from classic stage theory, as in Piaget, since the sequence of stages does not culminate in the capacity shared by normal adults. However, Kohlberg maintained that no one skips a stage or regresses to an earlier stage. Although Kohlberg sometimes considered the possibility of a seventh or eighth stage, these are his basic six:

  • Level A. Premoral

    • Stage 1—Punishment and obedience orientation

    • Stage 2—Naive instrumental hedonism

  • Level B. Morality of conventional role conformity

    • Stage 3—Good-boy morality of maintaining good relations, approval by others

    • Stage 4—Authority-maintaining morality

  • Level C. Morality of accepted moral principles

    • Stage 5—Morality of contract, of individual rights and democratically accepted law

    • Stage 6—Morality of individual principles of conscience

Kohlberg developed a test, which has been widely used, to determine the stage of any individual at any given time. The test requires responses to ethical dilemmas and is to be scored by consulting an elaborate manual.

One of the most influential critiques of the Kohlberg theory is to be found in Carol Gilligan’s In a Different Voice (1982). Gilligan argues that Kohlberg’s rule-oriented conception of morality has an orientation toward justice, which she associates with stereotypically male thinking, whereas women and girls are perhaps more likely to approach moral dilemmas with a “care” orientation. One important issue in moral theory that the Kohlberg-Gilligan debate raises is that of the role and importance of moral feelings in the moral life (see the entry on feminist ethics).

Another line of approach to moral development is to be found in the work of Martin Hoffman (1982). Hoffman describes the development of empathetic feelings and responses in four stages. Hoffman’s approach allows one to appreciate the possibility of genuine moral feelings, and so of genuine moral agency, in a very small child. By contrast, Kohlberg’s moral-dilemma tests will assign pre-schoolers and even early elementary-school children to a pre-moral level.

A philosophically astute and balanced assessment of the Kohlberg-Gilligan debate, with appropriate attention to the work of Martin Hoffman, can be found in Pritchard (1991). See also Friedman (1987), Lickona (1976), Kagan and Lamb (1987), and Pritchard (1996).

4. Children’s Rights

For a full discussion of children’s interests and children’s rights see the entry on the rights of children.

5. Childhood Agency and Autonomy

Clearly children are capable of goal-directed behavior while still relatively young, and are agents in this minimal sense. Respect for children’s agency is provided in legal and medical contexts, in that children who are capable of expressing their preferences are frequently consulted, even if their views are not regarded as decisive for determining outcomes.

The exercise of childhood agency will obviously be constrained by social and political factors, including various dependency relations, some of them imposed by family structures. Whether there are special ethical rules and considerations that pertain to the family in particular, and, if so, what these rules or considerations are, is the subject of an emerging field we can call ‘family ethics’ (Baylis and Mcleod 2014, Blustein, 1982, Brighouse and Swift 2014, Houlgate, 1980, 1999).

The idea that, in child-custody cases, the preferences of a child should be given consideration, and not just the “best interest” of the child, is beginning to gain acceptance in the U.S., Canada and Europe. “Gregory K,” who at age 12 was able to speak rationally and persuasively to support his petition for new adoptive parents, made a good case for recognizing childhood agency in a family court. (See “Gregory Kingsley” in the Other Internet Resources.) Less dramatically, in divorce proceedings, older children are routinely consulted for their views about proposed arrangements for their custody.

Perhaps the most wrenching cases in which adults have come to let children play a significant role in deciding their own future are those that involve treatment decisions for children with terminal illnesses. (Kopelman and Moskop, 1989) The pioneering work of Myra Bluebond-Langner shows how young children can come to terms with their own imminent death and even conspire, mercifully, to help their parents and caregivers avoid having to discuss this awful truth with them (Bluebond-Langner, 1980).

While family law and medical ethics are domains in which children capable of expressing preferences are increasingly encouraged to do so, there remains considerable controversy within philosophy as to the kind of authority that should be given to children’s preferences. There is widespread agreement that most children’s capacity to eventually become autonomous is morally important and that adults who interact with them have significant responsibility to ensure that this capacity is nurtured (Feinberg 1980). At the same time it is typical for philosophers to be skeptical about whether children under the age of ten have any capacity for autonomy, either because they are judged not to care stably about anything (Oshana 2005, Schapiro 1999), to lack information, experience and cognitive maturity (Levinson 1999, Ross 1998), or to be too poor at critical reflection (Levinson 1999).

Mullin (2007, 2014) argues that consideration of children’s capacity for autonomy should operate with a relatively minimal understanding of autonomy as self-governance in the service of what the person cares about (with the objects of care conceived broadly to include principles, relationships, activities and things). Children’s attachment to those they love (including their parents) can therefore be a source of autonomy. When a person, adult or child, acts autonomously, he or she finds the activity meaningful and embraces the goal of the action. This contrasts both with a lack of motivation and with feeling pressured by others to achieve outcomes desired by them. Autonomy in this sense requires capacities for impulse control, caring stably about some things, connecting one’s goals to one’s actions, and confidence that one can achieve at least some of one’s goals by directing one’s actions. It does not require extensive ability to engage in critical self-reflection, or substantive independence. The ability to act autonomously in a particular domain will depend, however, on whether one’s relationships with others are autonomy supporting. This is in keeping with feminist work on relational autonomy. See the entry on Feminist Perspectives on Autonomy.

Children’s autonomy is supported when adults give them relevant information, reasons for their requests, demonstrate interest in children’s feelings and perspectives, and offer children structured choices that reflect those thoughts and feelings. Support for children’s autonomy in particular domains of action is perfectly consistent with adults behaving paternalistically toward them at other times and in other domains, when children are ill-informed, extremely impulsive, do not appreciate the long-term consequences of their actions, cannot recognize what is in their interest, cannot direct their actions to accord with their interests, or are at risk of significant harm (Mullin 2014).

6. The Goods of Childhood

“Refrigerator art,” that is, the paintings and drawings of young children that parents display on the family’s refrigerator, is emblematic of adult ambivalence toward the productions of childhood. Typically, parents are pleased with, and proud of, the art their children produce. But equally typically, parents do not consider the artwork of their children to be good without qualification. Yet, as Jonathan Fineberg has pointed out (Fineberg, 1997, 2006), several of the most celebrated artists of the 20th century collected child art and were inspired by it. It may be that children are more likely as children to produce art, the aesthetic value of which a famous artist or an art historian can appreciate, than they will be able to later as adults.

According to what we have called the “Aristotelian conception”, childhood is an essentially prospective state. On such a view, the value of what a child produces cannot be expected to be good in itself, but only good for helping the child to develop into a good adult. Perhaps some child art is a counterexample to this expectation. Of course, one could argue that adults who, as children, were encouraged to produce art, as well as make music and excel at games, are more likely to be flourishing adults than those who are not encouraged to give such “outlets” to their energy and creativity. But the example of child art should at least make one suspicious of Michael Slote’s claim that “just as dreams are discounted except as they affect (the waking portions of) our lives, what happens in childhood principally affects our view of total lives through the effects that childhood success or failure are supposed to have on mature individuals” (Slote, 1983, 14).

Recent philosophical work on the goods of childhood (Brennan 2014, Macleod 2010) stresses that childhood should not be evaluated solely insofar as it prepares the child to be a fully functioning adult. Instead, a good childhood is of intrinsic and not merely instrumental value. Different childhoods that equally prepare children to be capable adults may be better or worse, depending on how children fare qua children. Goods potentially specific to childhood (or, more likely, of greatest importance during childhood) include opportunities for joyful and unstructured play and social interactions, lack of significant responsibility, considerable free time, and innocence, particularly sexual innocence. Play, for instance, can be of considerable value not only as a means for children to acquire skills and capacities they will need as adults, but also for itself, during childhood.

7. Philosophical Thinking in Children

For a full discussion of this topic see the entry on Philosophy for Children.

8. Moral Status of Children

It is uncontroversial to judge that what Mary Anne Warren terms paradigmatic humans have moral status (Warren 1992). Paradigmatic humans are adults with relatively standard cognitive capacities for self-control, self-criticism, self-direction, and rational thought, and are capable of moral thought and action. However, the grounds for this status are controversial, and different grounds for moral status have direct implications for the moral status of children. Jan Narveson (1988), for instance, argues that children do not have moral status in their own right because only free rational beings, capable of entering into reciprocal relations with one another, have fundamental rights. While Narveson uses the language of rights in his discussion of moral status (people have direct moral duties only to rights holders on his account), moral status need not be discussed in the language of rights. Many other philosophers recognize children as having moral status because of their potential to become paradigmatic humans without committing to children having rights. For instance, Allen Wood writes: “it would show contempt for rational nature to be indifferent to its potentiality in children.” (Wood 1998, 198)

When children are judged to have moral status because of their potential to develop the capacities of paradigmatic adults (we might call these paradigmatic children), this leaves questions about the moral status of those children who are not expected to live to adulthood, and those children whose significant intellectual disabilities compromise their ability to acquire the capacities of paradigmatic adults. There are then three common approaches that grant moral status to non-paradigmatic children (and other non-paradigmatic humans). The first approach deems moral consideration to track species membership. On this approach all human children have moral status simply because they are human (Kittay 2005). This approach has been criticized as being inappropriately speciesist, especially by animal rights activists. The second approach gives moral status to children because of their capacity to fare well or badly, either on straightforwardly utilitarian grounds or because they have subjective experiences (Dombrowski 1997). It has been criticized by some for failing to distinguish between capacities all or almost all human children have that are not also possessed by other creatures who feel pleasure and pain. The third approach gives moral status to non-paradigmatic children because of the interests others with moral status take in them (Sapontzis 1987), or the relationships they have with them (Kittay 2005).

Sometimes the approaches may be combined. For instance, Warren writes that young children and other non-paradigmatic humans have moral status for two sorts of reasons: “their rights are based not only on the value which they themselves place upon their lives and well-being, but also on the value which other human beings place on them” (1992, 197). In addition to these three most common approaches, Mullin (2011) develops a fourth: some non-paradigmatic children (and adults) have moral status not simply because others value them but because they are themselves capable of being active participants in morally valuable relationships with others. These relationships express care for others beyond their serving as means for one’s own satisfaction. Approaches to moral status that emphasize children’s capacity to care for others in morally valuable relationships also raise interesting questions about children’s moral responsibilities within those relationships (see Mullin 2010).

For more on this topic see the entry on the grounds of moral status.

9. Other Issues

The topics discussed above hardly exhaust the philosophy of childhood. Thus we have said nothing about, for example, philosophical literature on personhood as it bears on questions about the morality of abortion, or bioethical discussions about when it is appropriate for parents to consent to children’s participation in medical research or refuse medical treatment of their children. There has been increasing attention in recent years to questions about the appropriate limits of parental authority over children, about the source and extent of parents’ and the state’s responsibilities for children, and about the moral permissibility of parents devoting substantial resources to advancing the life prospects of their children. These and many other topics concerning children may be familiar to philosophers as they get discussed in other contexts. Discussing them under the rubric ‘philosophy of childhood,’ as well as in the other contexts, may help us see connections between them and other philosophical issues concerning children.

OMG! / I discover American Girl Dolls Cult

I googled something like “How do American girls transition to adulthood?” and these dolls popped up all over my screen. Wikipedia says that their original “function” was to “teach girls about American history” through character dolls… hmmmm. First impression? They are UGLY and cheaply made… and they start at $115.00. Look at that bad wig, junky clothing, and weird rubbery skin… No wonder American women have poor taste in clothing!

The dolls supposedly target girls ages 8-11. Remember from a previous post: biological adulthood begins at puberty (age 10-12) for American “girls”…

I’m in shock: Wrong Planet shock.  

What is an Adult Human? / Biology Law Psychology Culture

Photo from Duke Health: a group of 10-13 year olds. Biologically, they are adults. Legally, they are not. Culturally? Psychologically? Big questions.

Biological adulthood / Wikipedia

Historically and cross-culturally, adulthood has been determined primarily by the start of puberty (the appearance of secondary sex characteristics such as menstruation in women, ejaculation in men, and pubic hair in both sexes). In the past, a person usually moved from the status of child directly to the status of adult, often with this shift being marked by some type of coming-of-age test or ceremony.[1]

After the social construct of adolescence was created, adulthood split into two forms: biological adulthood and social adulthood. Thus, there are now two primary forms of adults: biological adults (people who have attained reproductive ability, are fertile, or who evidence secondary sex characteristics) and social adults (people who are recognized by their culture or law as being adults). Depending on the context, adult can indicate either definition.

Although few or no established dictionaries provide a definition for the two word term biological adult, the first definition of adult in multiple dictionaries includes “the stage of the life cycle of an animal after reproductive capacity has been attained”.[2][3] Thus, the base definition of the word adult is the period beginning at physical sexual maturity, which occurs sometime after the onset of puberty. Although this is the primary definition of the base word “adult”, the term is also frequently used to refer to social adults. The two-word term biological adult stresses or clarifies that the original definition, based on physical maturity, is being used.

In humans, puberty on average begins around 10–11 years of age for girls and 11–12 years of age for boys, though this will vary from person to person. For girls, puberty begins around 10 or 11 years of age and ends around age 16. Boys enter puberty later than girls – usually around 12 years of age and it lasts until around age 16 or 17 (Or in rare cases 18 and a half).[4][5]

There seems to be disagreement on the attainment of adulthood: is it at the start or completion of puberty?

More from Duke Health: https://www.dukehealth.org/blog/when-puberty-too-early

When Is Puberty Too Early?

October 01, 2013

Early Puberty in Girls

For girls, puberty is generally considered to be too early if it begins at age seven or eight. African-American and Hispanic girls tend to start puberty slightly earlier than Caucasian girls. The average age of pubertal onset in girls is 10-and-a-half years old, but it ranges from seven to 13 years old. The average age of menarche is 12-and-a-half to 13 years of age. The whole process of puberty should take three to four years.

Rapidly progressing puberty — start to finish in less than two years — can be a concern as well because it can be due to an endocrine disorder.

Early Puberty in Boys

For boys, puberty is generally considered too early before the age of nine years. In boys, onset of puberty is from nine to 14 years, but on average starts at 11-and-a-half to 12 years old. The whole process of puberty should take three to four years. Rapidly progressing puberty can also be a concern in males.
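For what it’s worth, the Duke cutoffs reduce to a few lines of Python. The thresholds (onset at seven or eight for girls, before nine for boys, start-to-finish in under two years) are the article’s; the function itself is just an illustrative sketch, obviously not clinical guidance.

    def classify_puberty(sex, onset_age, duration_years=None):
        # Duke's quoted cutoffs: girls starting at 7-8, boys starting before 9.
        early = onset_age <= 8 if sex == "girl" else onset_age < 9
        flags = []
        if early:
            flags.append("early onset")
        if duration_years is not None and duration_years < 2:
            flags.append("rapidly progressing")   # possible endocrine concern, per the article
        return flags or ["within the quoted typical range"]

    print(classify_puberty("girl", 8))                       # -> ['early onset']
    print(classify_puberty("boy", 12, duration_years=3.5))   # -> ['within the quoted typical range']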

Preventing Early Puberty

While genetic factors play a role in the early onset of puberty, parents can help delay the environmental causes of early puberty. Preventive measures include:

  • Encourage your child to maintain a healthy weight.
  • Avoid exposure to exogenous hormones like estrogen, testosterone, DHEA, and androstenedione, which may be found in creams/gels, hair treatments, medications, and nutritional supplements. (And who knows where else these powerful hormones are being used and entering environmental systems.)

Psychological Adulthood?

Here is where we encounter the perils of “socially constructed” opinion about human development: What a mess!

Psychological development

Written By: The Editors of Encyclopedia Britannica

Psychological development, the development of human beings’ cognitive, emotional, intellectual, and social capabilities and functioning over the course of the life span, from infancy through old age. It is the subject matter of the discipline known as developmental psychology. Child psychology was the traditional focus of research, but since the mid-20th century much has been learned about infancy and adulthood as well. A brief treatment of psychological development follows. For full treatment, see human behaviour.

Infancy is the period between birth and the acquisition of language one to two years later.

Childhood, the second major phase in human development, extends from one or two years of age until the onset of adolescence at age 12 or 13.

Adolescence begins physically with the onset of puberty at 12 or 13 and culminates at age 19 or 20 in adulthood.

Hmmm… a discrepancy of 7-8 YEARS between the biological and the psychological demarcation for the beginning of adulthood, that is, IF adulthood begins at the onset of puberty. IF it’s the completion of puberty, the discrepancy is more like 4-5 years.

But! We now have a serious problem: the socially constructed stage called adolescence interferes with, and contradicts, the biological transition from pre-reproductive childhood to reproductive adulthood, leaving no clear transition at all. The result is chaos in education, legal jurisdiction, sex-reproduction-parenting, health, nutrition, and behavioral expectations!

Adulthood is a period of optimum mental functioning when the individual’s intellectual, emotional, and social capabilities are at their peak to meet the demands of career, marriage, and children. Some psychologists delineate various periods and transitions in early to middle adulthood that involve crises or reassessments of one’s life and result in decisions regarding new commitments or goals. During the middle 30s people develop a sense of time limitation, and previous behaviour patterns or beliefs may be given up in favour of new ones.

Wow! Just how does a person between the ages of 10-20 years old negotiate this bizarre disconnect between a developmental paradigm “invented” by psychologists, and the physical reality of the human body?

One might expect individual cultures to “help” with this vital transition… 

Cultural Adulthood? 

How the American legal system defines adult status is a crucial cultural factor.  

Adult: A person who by virtue of attaining a certain age, generally eighteen, is regarded in the eyes of the law as being able to manage his or her own affairs.

Wow! Highly optimistic and unrealistic in American culture, which overwhelmingly advocates for the indefinite postponement of adulthood… 

Note that American education does little to nothing to prepare children, adolescents, and now “emerging adults” (a new category of underdeveloped Homo sapiens that is MEASURED BY the subjective “feeling” of being adult) for these sudden legal and financial facts of life. This dithering over adult status is the “privilege” of the wealthy classes; poor and minority children too often become “instant adults” – in a jail cell.

The age specified by law, called the legal age of majority, indicates that a person acquires full legal capacity to be bound by various documents, such as contracts and deeds, that he or she makes with others and to commit other legal acts such as voting in elections and entering marriage. The age at which a person becomes an adult varies from state to state and often varies within a state, depending upon the nature of the action taken by the person. Thus, a person wishing to obtain a license to operate a motor vehicle may be considered an adult at age sixteen, but may not reach adulthood until age eighteen for purposes of marriage, or age twenty-one for purposes of purchasing intoxicating liquors.

Anyone who has not reached the age of adulthood is legally considered an infant. (!! Really?) West’s Encyclopedia of American Law, edition 2. Copyright 2008 The Gale Group, Inc. All rights reserved.
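To see how patchwork this “legal adulthood” really is, here are the quoted ages as a toy lookup table in Python. The activities and thresholds are the ones named in the encyclopedia entry; as it says, the real numbers vary from state to state, so treat this as illustration only.

    LEGAL_AGE_BY_ACTIVITY = {
        "drive": 16,            # license to operate a motor vehicle
        "vote": 18,             # age of majority, generally
        "sign a contract": 18,
        "marry": 18,
        "buy liquor": 21,
    }

    def is_legal_adult_for(activity, age):
        # One and the same person is an "adult" for some purposes and not others.
        return age >= LEGAL_AGE_BY_ACTIVITY[activity]

    print(sorted(a for a in LEGAL_AGE_BY_ACTIVITY if is_legal_adult_for(a, 19)))
    # -> ['drive', 'marry', 'sign a contract', 'vote']  (but not 'buy liquor')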


Self-mythologizing / Homo sapiens NT strikes again

Every once in a while, I like to check in with neurotypical “pop science” versions of WHO WE ARE – narcissism knows no limits.

From SLATE.com

Science / The state of the universe. (Not too pompous!)
Jan. 29, 2013

Why Are We the Last Apes Standing?

There’s a misconception among a lot of us Homo sapiens that we and our direct ancestors are the only humans ever to have walked the planet. It turns out that the emergence of our kind isn’t nearly that simple. The whole story of human evolution is messy, and the more we look into the matter, the messier it becomes.

)))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))

Before we go into this “messy” NT mythology – the author: His website is www.chipwalter.com

Welcome!

At last you have made your way to the website of Chip Walter. (Try to control your excitement.) If you’re a curious person – and your discovery of this site attests that you are – then you’ve arrived at the right place. Go ahead, browse…

Chip is a journalist, author, filmmaker and former CNN Bureau Chief. He has written four books, all of them, one way or another, explorations of human creativity, human nature and human curiosity. (That should be a warning: shameless BS to follow)

(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((

Paleoanthropologists have discovered as many as 27 different human species (the experts tend to debate where to draw the line between groups). These hominids diverged after our lineage split from a common ancestor we shared with chimpanzees 7 million years ago, give or take a few hundred millennia.

Many of these species crossed paths, competed, and mated. Populations ebbed and flowed in tight little tribes, at first on the expanding savannahs of Africa, later throughout Europe, Asia, and all the way to Indonesia. Just 100,000 years ago, there were several human species sharing the planet, possibly more: Neanderthals in Europe and West Asia, the mysterious Denisovan people of Siberia, the recently discovered Red Deer Cave people living in southern China, Homo floresiensis (the Hobbits of Indonesia), and other yet unknown descendants of Homo erectus who left indications that they were around (the DNA of specialized body lice, to be specific). And, of course, there was our kind, Homo sapiens sapiens (the wise, wise ones), still living in Africa, not yet having departed the mother continent. At most, each species consisted of a few tens of thousands of people hanging on by their battered fingernails. Somehow, out of all of these struggles, our particular brand of human emerged as the sole survivor and then went on, rather rapidly, to materially rearrange the world.

If there once were so many other human species wandering the planet, why are we alone still standing? After all, couldn’t another version or two have survived and coexisted with us on a world as large as ours? Lions and tigers coexist; so do jaguars and cheetahs. Gorillas, orangutans, bonobos, and chimpanzees do as well (though barely). Two kinds of elephants and multiple versions of dolphins, sharks, bears, birds, and beetles—countless beetles—inhabit the planet. Yet only one kind of human? Why?

More than once, one variety may have done in another either by murdering its rivals outright or outcompeting them for limited resources. But the answer isn’t as simple or dramatic as a war of extermination with one species turning on the other in some prehistoric version of Planet of the Apes. The reason we are still here to ruminate on why we are still here is because, of all those other human species, only we evolved a long childhood.

Over the course of the past 1.5 million years, the forces of evolution inserted an extra six years between infancy and pre-adolescence—a childhood—into the life of our species. And that changed everything.

Why should adding a childhood help us escape extinction’s pitiless scythe? Looked at logically, it shouldn’t. All it would seem to do is lengthen the time between birth and mating, which would slow down the clamoring business of the species’ own continuance. But there was one game-changing side effect of a long childhood. Those six years of life between ages 1 and 7 are the time when we lay the groundwork for the people we grow up to become. Without childhood you and I would never have the opportunity to step away from the dictates of our genes and develop the talents, quirks, and foibles that make us all the devastatingly charming, adaptable, and distinctive individuals we are.

Childhood came into existence as the result of a peculiar evolutionary phenomenon known generally as neoteny. (More about this sweeping misinterpretation later) The term comes from two Greek words, neos meaning “new” (in the sense of “juvenile”) and teinein meaning to “extend,” and it means the retention of youthful traits. In the case of humans, it meant that our ancestors passed along to us a way to stretch youth farther into life.

More than a million years ago, our direct ancestors found themselves in a real evolutionary pickle. On the one hand, their brains were growing larger than those of their rain forest cousins; on the other, they had taken to walking upright because they spent most of their time in Africa’s expanding savannas. Both features would seem to have substantially increased the likelihood of their survival, and they did, except for one problem: Standing upright favors the evolution of narrow hips and therefore narrows the birth canal. And that made bringing larger-headed infants to full term before birth increasingly difficult.

If we were born as physically mature as, say, an infant gorilla, our mothers would be forced to carry us for 20 months! But if they did carry us that long, our larger heads wouldn’t make it through the birth canal. We would be, literally, unbearable. The solution: Our forerunners, as their brains expanded, began to arrive in the world sooner, essentially as fetuses, far less developed than other newborn primates, and considerably more helpless.

The Dutch anatomist Louis Bolk enumerated 25 specific fetal or juvenile features that disappear entirely in apes as they grow to adulthood but persist in humans: flatter faces and high foreheads, for example, and a lack of body hair; the shape of our ears, the absence of large brow ridges over our eyes, a skull that sits facing forward on our necks, a straight rather than thumblike big toe, and the large size of our heads compared with the rest of our bodies. You can find every one of these traits in fetal, infant, or toddling apes, and in modern human adults.

In the nasty and brutish prehistoric world our ancestors inhabited, arriving prematurely could have been a very bad thing. But to see the advantages of being born helpless and fetal, all you have to do is watch a 2-year-old. Human children are the most voracious learners planet Earth has ever seen, and they are that way because their brains are still rapidly developing after birth. Neoteny, and the childhood it spawned, not only extended the time during which we grow up but ensured that we spent it developing not inside the safety of the womb but outside in the wide, convoluted, and unpredictable world.

The same neuronal networks that in other animals are largely set before or shortly after birth remain open and flexible in us. Other primates also exhibit “sensitive periods” for learning as their brains develop, but they pass quickly, and their brain circuitry is mostly established by their first birthday, leaving them far less touched by the experiences of their youth.

The major problem with all this NT self-congratulatory aggrandizement is the equally possible scenario that this “open, externalized brain development” leaves human fetuses, infants, and children highly vulnerable to disastrous consequences: death in infancy by neglect, disease, and predation; maternal death; brain and nervous system damage due to not-so-healthy human environments; insufficient care and nutrition during critical post-birth growth; plus the usual demands and perils of nature. And in “modern” societies, it requires a tremendous amount of medical-technological intervention in problem pregnancies: extreme premature birth, caesarian-section delivery, long periods of ICU support, and a growing incidence of life-long impairment.

“Inattentional Blindness” to any negative consequences of human evolution is a true failure in NT perception of the human condition.

Based on the current fossil evidence, this was true to a lesser extent of the 26 other savanna apes and humans. Homo habilis, H. ergaster, H. erectus, even H. heidelbergensis (which is likely the common ancestor of Neanderthals, Denisovans, and us), all had prolonged childhoods compared with chimpanzees and gorillas, but none as long as ours. In fact, Harvard paleoanthropologist Tanya Smith and her colleagues have found that Neanderthals reversed the trend. By the time they met their end around 30,000 years ago, they were reaching childbearing age at about the age of 11 or 12, which is three to five years earlier than their Homo sapiens cousins. Was this in response to evolutionary pressure to accelerate childbearing to replenish the dwindling species? Maybe. But in the bargain, they traded away the flexibility that childhood delivers, and that may have ultimately led to their demise.

Aye, yai, yai! This string of NT echolalia, copied and pieced together from pop-science interpretations of “science projects,” is worthy of Biblical mythology… a montage, a disordered mosaic, a collage of key words that condenses millions of years of evolutionary change into a “slightly longer” history of Creation (call it 6 million years instead of 6 thousand – sounds more scientific)… this is for neurotypical consumption: It’s okay… Evolution is really just magic, after all!

We are different. During those six critical years, our brains furiously wire and rewire themselves, capturing experience, encoding and applying it to the needs of our particular life. Our extended childhood essentially enables our brains to better match our experience and environment. (Whatever that is supposed to mean – like wearing Bermuda shorts to the beach?) It is the foundation of the thing we call our personalities, the attributes that make you you and me me. Without it, you would be far more similar to everyone else, far less quirky and creative and less, well … you. Our childhood also helps explain how chimpanzees, remarkable as they are, can have 99 percent of our DNA but nothing like the same level of diversity, complexity, or inventiveness.

You are creative and quirky (dull and conformist) – and even if that’s a shameless lie (it is), AT LEAST you’re smarter than a chimpanzee!  

Our long childhood has allowed us to collectively engage in ever broadening conversations as we keep finding new ways to communicate; we jabber and bristle with invention and pool together waves of fresh ideas, good and bad, into that elaborate, rambling edifice we call human civilization. Without all of this variety, all of these interlocked notions and accomplishments, the world, for better or worse, would not be as it is, brimming with this species of self-aware conflicted apes, ingenious enough to rocket rovers off to Mars and construct the Internet, wage wars on international scales, invent both WMDs and symphonies. If not for our long childhoods, we would not be here at all, the last apes standing. Can we remain standing? Possibly. I’m counting on the child in us, the part that loves to meander and play, go down blind alleys, wonder why and fancy the impossible.

What shockingly stupid (and awful) writing.

 

Psychologists Terrorize Children / “Emotional Regulation” Abuse

American Schools Are Failing Nonconformist Kids. Here’s How

In defense of the wild child

https://newrepublic.com/article/114527/self-regulation-american-schools-are-failing-nonconformist-kids

By Elizabeth Weil, September 2, 2013

The writing is cringe-worthy, especially abominations such as “valorize” and “valorizing,” but that’s neurotypicals for you – novelty is irresistible, like glitter and mini cupcakes with blue icing and sprinkles. Highlights are mine, as are the comments.

Of the possible child heroes for our times, young people with epic levels of the traits we valorize, the strongest contender has got to be the kid in the marshmallow study. Social scientists are so sick of the story that some threaten suicide if forced to read about him one more time. But to review: The child—or really, nearly one-third of the more than 600 children tested in the late ’60s at Bing Nursery School on the Stanford University campus—sits in a room with a marshmallow. Having been told that if he abstains for 15 minutes he’ll get two marshmallows later, he doesn’t eat it. This kid is a paragon of self-restraint, a savant of delayed gratification. He’ll go on, or so the psychologists say, to show the straight-and-narrow qualities required to secure life’s sweeter and more elusive prizes: high SAT scores, money, health.

I began to think about the marshmallow kid and how much I wanted my own daughter to be like him one day last fall while I sat in a parent-teacher conference in her second-grade classroom and learned, as many parents do these days, that she needed to work on self-regulation. My daughter is nonconformist by nature, a miniature Sarah Silverman. She’s wildly, transgressively funny and insists on being original even when it causes her pain. The teacher at her private school, a man so hip and unthreatened that he used to keep a boa constrictor named Elvis in his classroom, had noticed she was not gently going along with the sit-still, raise-your-hand-to-speak-during-circle-time program. “So …” he said, in the most caring, best-practices way, “have you thought about occupational therapy?”

I did not react well. My husband reacted worse. I could appreciate the role of O.T., as occupational therapy is called, in helping children improve handwriting through better pencil grips. But I found other O.T. practices, and the values wrapped up in them, discomfiting: occupational therapists coaching preschoolers on core-muscle exercises so that they can sit longer; occupational therapists leading social-skills playgroups to boost “behavior management” skills. Fidget toys and wiggle cushions—O.T. staples aimed at helping children vent anxiety and energy—have become commonplace in grammar-school classrooms. Heavy balls and weighted blankets, even bags of rice, are also prescribed on the theory that hefty objects comfort children who feel emotionally out of control. Did our daughter need what sounded like a paperweight for her young body in order to succeed at her job as a second-grader?

Are mainstream classrooms being redesigned under the assumption that all children are autistic or behaviorally impaired? 

My husband grilled the teacher. How were her reading skills? What about math? Did she have friends?

All good, the teacher reassured us.

“So what’s the problem?” my husband asked. “Is she distracting you?”

The teacher stalled, then said yes.

“And have you disciplined her?”

He had not.

This is when I began to realize we’d crossed some weird Foucauldian threshold into a world in which authority figures pathologize children instead of punishing them.

No – psychology provides pathologies to JUSTIFY the same old “right and obligation” granted those in authority, to punish children and “lesser” humans.  

“Self-regulation,” “self-discipline,” and “emotional regulation” are big buzzwords in schools right now. All are aimed at producing “appropriate” behavior, at bringing children’s personal styles in line with an implicit emotional orthodoxy. That orthodoxy is embodied by a composed, conforming kid who doesn’t externalize problems or talk too much or challenge the rules too frequently or move around excessively or complain about the curriculum or have passionate outbursts. He’s a master at decoding expectations. He has a keen inner minder to bring rogue impulses into line with them.

Emotional regulation is psychology’s new pet field. Before 1981, a single citation for the term existed in the literature. For 2012 alone, Google Scholar turns up more than 8,000 hits. In popular culture, self-regulation is celebrated in best-selling education books, like Paul Tough’s How Children Succeed, manuals for success in a meritocracy extolling a pull-your-socks-up way of being. Some of Tough’s ideas are classically liberal, built off Nobel Prize–winning economist James Heckman’s theory of human capital and the importance of investing in the very young. But then the book turns toward the character-is-destiny model pioneered by University of Pennsylvania psychology professor Angela Duckworth and the KIPP charter-school network. The key to success, in this formulation, is grit. (Though Duckworth acknowledges on her own website that nobody is sure how to teach it.) One KIPP school features a tiled mosaic that reads, “DON’T EAT THE MARSHMALLOWS YET!”

“Long may this book dwell on the best-seller lists!” Nicholas Kristof wrote in The New York Times, giving How Children Succeed a hearty endorsement. Yet though widely embraced by progressives, the grit cure-all is in many ways deeply conservative (Puritanical / Liberal / Old Testament, actually, in the American version of religious pedagogy), arguably even a few inches to the right of Amy Chua and her Battle Hymn of the Tiger Mother. The parent of the well-regulated child should not, like Chua, need to threaten to burn her daughter’s stuffie if that daughter is curious or self-indulgent, AWOL (or god-forbid, dawdling) somewhere between school, soccer practice, and the piano tutor. The child should be equipped with an internal minder. No threats necessary.

But at what cost? One mother I spoke to, a doctor in Seattle, has a son who has had trouble sitting cross-legged, as his classroom’s protocol demanded. The school sent home a note suggesting she might want to test him for a “learning difference.” She did—“paid about two thousand dollars for testing,” she told me—and started the child in private tutoring. “After the third ride home across the city with him sobbing about how much he hated the sessions, we decided to screw it,” she said. She later learned every one of the boys in her son’s class had been referred out for testing. Another family, determined to resist such intervention, paid for an outside therapist to provide expert testimony to their son’s Oakland school stating that he did not have a mental health disorder. (So much for “innocent until proven guilty” – human rights are being trampled, right and left.) “We wanted them to hear from the therapist directly: He’s fine,” the mother said. “Being a very strong-willed individual—that’s a powerful gift that’s going to be unbelievably awesome someday.”

In the meantime, he’s part of an education system (a victim, rather) that has scant tolerance for independence of mind. “We’re saying to the kid, ‘You’re broken. You’re defective,’ ” says Robert Whitaker, author of Mad in America. “In some ways, these things become self-fulfilling prophecies.”

Education is the business of shaping people. (Social-engineering) It works, however subtly, toward an ideal. At various points, the ideal products of the American school system have been extroverts and right-handed children. (Lefties were believed to show signs of “neurological insult or physical malfunctioning” and had to be broken of their natural tendency.) Individuality has had its moments as well. In the 1930s, for instance, educators made huge efforts to find out what motivated unique students to keep them from dropping out because no jobs existed for them to drop into. Yet here in 2013, even as the United States faces pressure to “win the future,” the American education system has swung in the opposite direction, toward the commodified, data-driven ideas promoted by Frederick Winslow Taylor, who at the turn of the century did time-motion studies of laborers carrying bricks to figure out how people worked most efficiently. Schools built on Taylor’s ideas were not designed to foster free thinkers. Nor are they now, thanks to how teacher pay and job security have been tied to student performance on standardized tests. (A red herring – this has nothing to do with accountability) “What we’re teaching today is obedience, conformity, following orders,” says the education historian Diane Ravitch, author of The Death and Life of the Great American School System. “We’re certainly not teaching kids to think outside the box.” The motto of the so-called school-reform movement is: No Excuses. “The message is: It’s up to you. Grit means it’s your problem. Just bear down and do what you have to do.”

American education has always taught obedience, conformity, and following orders; the difference is that we used to throw in basic reading, writing and arithmetic skills so that “the peasants” could read The Bible and perform basic job tasks.   

As a consumer of education—both as a child and a parent—I’d never thought much about classroom management. The field sounds technical and dull, inside baseball for teachers. Scratch two inches below the surface, however, and it becomes fascinating, political philosophy writ small. Is individuality to be contained or nurtured? What relationship to authority do teachers seek to create?

One way to think about classroom management (and discipline in general) is that some tactics are external and others are internal. External tactics work by inflicting an embarrassing or unpleasant experience on the kid. The classic example is a teacher shaming a child by making him write “I will not …” whatever on the blackboard 100 times. My own second-grade teacher threw a rubber chicken at a boy who refused to shut up during silent reading. But such means have become “well, problematic,” says Jonathan Zimmerman, director of the History of Education Program at New York University. In 1975, in Goss v. Lopez, the Supreme Court found schoolchildren to have due process rights. “As a result, students can say to teachers with some authority, ‘If you do that, my mom is going to sue you.’ And that changes the score.”

In Goss’s wake, many educators moved toward what progressive education commentator Alfie Kohn calls the New Disciplines. The philosophy promotes strategies like “shared decision-making,” allowing children to decide between, say, following the teacher’s rules and staying after school for detention. This sounds great to the contemporary ear. The child is less passive and prone to be a victim, more autonomous and in control of his life. But critics of the technique are harsh. It’s “fundamentally dishonest, not to mention manipulative,” Kohn has written. “To the injury of punishment is added the insult of a kind of mind game whereby reality is redefined and children are told, in effect, that they wanted something bad to happen to them.”

A different, utopian approach to classroom management works from the premise that children are natively good and reasonable. If one is misbehaving, he’s trying to tell you that something is wrong. Maybe the curriculum is too easy, too hard, too monotonous. Maybe the child feels disregarded, threatened, or set up to fail. It’s a pretty thought, order through authentic, handcrafted curricula. But it’s nearly impossible to execute in the schools created through the combination of No Child Left Behind and recessionary budget-slashing. And that makes internal discipline very convenient right now.

To train children in this vital new task, schools have added to reading, ’riting, and ’rithmetic a fourth R, for self-regulation. The curricular branch that has emerged to teach it is called social and emotional learning, or SEL. Definitions of SEL are tautological. The Collaborative for Academic, Social, and Emotional Learning (CASEL) defines it as involving “the processes of developing social and emotional competencies” toward the goal of making a child a “good student, citizen, and worker” who is less inclined to exhibit bad behaviors, like using drugs, fighting, bullying, or dropping out of school.

The aim is to create a “virtuous cycle” of behavior. As Celene Domitrovich, director of research at CASEL, told me, SEL instructs children in “the skills that undergird” grit. “Paul Tough doesn’t talk about SEL, even though his whole book is about it,” says Domitrovich. “Tenacity, grit, motivation, stick-to-it-iveness—we’re all talking about the same thing.”

CASEL was founded by Daniel Goleman, the former New York Times reporter whose 1995 blockbuster book, Emotional Intelligence, was based on the work of two psychology professors, John Mayer and Peter Salovey. (Salovey clearly has all kinds of intelligence. He’s now president of Yale University.) Emotional intelligence sounds unassailably great. Who wouldn’t want high ratings for oneself or one’s children, especially given Goleman’s claim that emotional intelligence is a more powerful predictor of career success than IQ? Besides, SEL filled a need. On top of the discipline vacuum* created by the Goss ruling, in the 1990s, says Domitrovich, “you start having school shootings. There’s a surge of interest in the idea of prevention—bullying prevention, character development.” (*Discipline vacuum? A consequence of Americans equating discipline with physical punishment. Take away paddling, smacking, hitting and humiliation-shaming, and – well, there is no other discipline, is there? Read your Bible!)

Now that is a perverted line of thinking! School shootings can be “prevented” by mass behavioral indoctrination and social coercion from birth – a program which, in itself, is a human rights catastrophe! Psycho-social Eugenics…

Since then, CASEL has been pushing hard. It’s an advocacy group. The NoVo Foundation, run by Warren Buffett’s son Peter and Peter’s wife, Jennifer—and endowed with roughly $140 million worth of Berkshire Hathaway stock—has taken up social and emotional learning as one of its four primary philanthropic interests. SEL is now mandated at all grade levels in Illinois. Some form of it is taught in half of school districts in the United States.

Certain SEL lessons are embedded into school practices like “morning meeting.” The peace table at my daughter’s school, inspired by psychologist Thomas Gordon’s suite of alternatives to “power-based” classroom management techniques, is sort of an SEL extracurricular. Anyone can call a peace table to address a grievance, which can range from “I think you smacked that tetherball into my head on purpose” to “I’d like to hang out more with your best friend.” At the table, the children complete a worksheet. When you ______, I feel _______. I need you to _______.

SEL curricula also offer direct instruction on discrete skills. For example, a teacher might do an active-listening exercise, laying out the components—you look the other person in the eye, you’re quiet when they talk—then asking the children to role-play. This, of course, is a useful life habit and a dream to a lecturing teacher. Yet Domitrovich takes it further. “You can see where it’s so obvious that this is essential to learning. What if a child is not good at stopping and calming down? What if a child is really impulsive? What if a child is not good at getting along with everybody? How’s that going to play out?” To her, the answer is clear. The other students in the class are going to ignore and exclude the poorly regulated child. As a result, that child is not going to be “learning optimally.” Academics will suffer due to deficient social and emotional skills.

Is this not an “underhanded” way to single out ASD / Asperger children for “retraining” as social clones? Impose a “behavior regime” so strict that such children will not be able to comply – and will thereby “self-diagnose.”

The only problem is: It’s not clear that’s true. In 2007, Greg Duncan, a professor of education at the University of California at Irvine, did an analysis of the effects of social and emotional problems on a sample of 25,000 elementary school students. He found, he says, “Emotional intelligence in kindergarten was completely unpredictive.” Children who started school socially and emotionally unruly did just as well academically as their more contained peers from first through eighth grades. David Grissmer, at the University of Virginia, reran Duncan’s analysis repeatedly, hoping to prove him wrong. Instead, he confirmed that Duncan was right. A paper from Florida International University also found minimal correlation between emotional intelligence and college students’ GPAs.

In 2011, CASEL volleyed back at the skeptics, publishing a gigantic meta-analysis (213 studies, 270,034 students) claiming that SEL programs raised academic performance by 11 percent. Such a large and divergent finding sent up a red flag for NurtureShock co-author Ashley Merryman, who’d read just about every published study relating to emotional intelligence and academic achievement while researching the book. So she examined CASEL’s source studies and discovered that only 33 of the 213 reported any academic results at all. She also uncovered a far more likely reason for CASEL’s fortuitous finding: Many of the students in the sample populations received academic tutoring. (Exploitive capitalists…let’s label these people for who they really are.) 

In 2007 a UNICEF paper on child wellbeing ranked England dead last in the 21 developed nations it surveyed. (Apparently all those books and movies about horrid British childhoods are accurate.) SEL, the British hoped, would make its children emotionally healthy. The Department of Education rolled out programs countrywide. Six years later, England’s experience with SEL (or SEAL, as they call it) offers some cautionary tales. For starters, the programs didn’t seem to work as hoped—or, as an official 2010 brief reported politely, “[O]ur data was not congruent with the broader literature” promising “significant improvements in a range of outcomes.”

Among the most cutting assessments of the British SEL experiment is an ethnographic study called “Social and Emotional Pedagogies: Critiquing the New Orthodoxy of Emotion in Classroom Behaviour Management,” by Val Gillies, a professor of social and policy studies at London South Bank University. Gillies describes the new emotional orthodoxy as a “calm, emotionally flat ideal” that “not only overlies a considerably more turbulent reality, [but] also denies the significance of passion as a motivator.” In theory, SEL gives less well-regulated children a more stable foundation from which to learn. In reality, writes Gillies, “Pupils who dissent from sanctioned models of expression are marked out as personally lacking.” (Shaming, blaming, social exile – same old religious imperative) 

According to the human development theory of Dandelion and Orchid children, certain people are genetically predisposed to grow fairly well in almost any environment while others wilt or blossom spectacularly depending on circumstances and care. Some kids—the dandelions—seem naturally suited to cope with the current system. As Sanford Newmark, head of the Pediatric Integrative Neurodevelopmental Program at the University of California at San Francisco, puts it, “You can feed them three Pop-Tarts for breakfast, they can be in school twelve hours a day, and they can go to kindergarten when they’re four, and they would still do OK.” But many children crumble.

That is, these kids will take any abuse psychologists can think of, and thus become “good neurotypical idiots”.

“We’ve been around for a couple hundred thousand years, reading only for the last five thousand years, and compulsory education has only been in place for one hundred fifty years or so. Some kids are going to be thinking, ‘Why is my teacher asking me to do this? My brain doesn’t work this way,’ ” says Stephen Hinshaw, a psychology professor at the University of California at Berkeley. Heidi Tringali, an occupational therapist in Charlotte, North Carolina, offers a hypothesis built on shorter-term influences: Many of the nonconforming children she treats may need wiggle cushions and weighted balls because they’ve grown up strapped into the five-point harnesses of strollers and car seats, planted in front of screens, and put to sleep at night flat on their backs, all of which leaves them craving action, sensation, and attention when they’re finally let loose. “Every child in the school system right now has been impacted. Of course they’re all licking their friends and bouncing off the walls.”

One crude way to measure the population of kids who don’t meet today’s social and behavioral expectations is to look at the percentage of school-aged children diagnosed with attention-deficit hyperactivity disorder (ADHD). Over the past ten years, that figure has risen 41 percent. (A lot of these kids were just born at the wrong time of year. The youngest kindergarteners, by month of birth, are more than twice as likely as the oldest to be labeled with ADHD. This makes sense given that the frontal cortex, which controls self-regulation, thickens during childhood. (More pseudoscientific mumbo jumbo) The cortexes of children diagnosed with ADHD tend to reach their thickest point closer to age eleven than age eight.) The number climbs higher still if you include syndromes like sensory-processing disorder, which Newmark jokes just about “everybody” has these days.

When I asked Zimmerman, the New York University education historian, if schools had found a way to deal with discipline in the wake of the students-rights movement, he said: “Oh we have. It’s called Ritalin.” (And dozens of other psychoactive drugs) 

The Torrance Tests of Creative Thinking judge originality, emotional expressiveness, humor, intellectual vitality, open-mindedness, and ability to synthesize and elaborate on ideas. Since 1984, the scores of America’s schoolchildren have dropped by more than one standard deviation; that is to say, 85 percent of kids scored lower in 2008 than their counterparts did in 1984. Not coincidentally, that decrease happened as schools were becoming obsessed with self-regulation. (More pseudoscientific psychology mumbo jumbo)  
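For what it’s worth, the “one standard deviation = 85 percent” equivalence does at least check out arithmetically – a quick back-of-the-envelope calculation of my own, assuming Torrance scores are roughly normally distributed with an unchanged spread. If the 1984 mean is $\mu$ and the 2008 mean has dropped to $\mu - \sigma$, the share of 2008 kids scoring below the old 1984 average is

$$P(X_{2008} < \mu) \;=\; \Phi\!\left(\frac{\mu - (\mu - \sigma)}{\sigma}\right) \;=\; \Phi(1) \;\approx\; 0.84,$$

i.e., roughly 85 percent of the 2008 cohort fell below the 1984 average – which is exactly the conversion the article makes.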

As Stanford Professor James Gross, author of the Handbook of Emotion Regulation, explains, suppression of feelings is a common regulatory tactic. It’s mentally draining. Deliberate acts of regulation also become automatic over time, meaning this habit is likely to interfere with inspiration, which happens when the mind is loose and emotions are running high. Even Tough acknowledges in a short passage in How Children Succeed that overly controlled people have a hard time making decisions: They’re often “compulsive, anxious, and repressed.” Last year, a study out of the University of Rochester took on the marshmallow kid himself and challenged his unconditional superiority. What if the second treat won’t always be available later? There can be an opportunity cost to not diving in right away. (Mumbo jumbo; it never ends)

Valorizing self-regulation shifts the focus away from an impersonal, overtaxed, and underfunded school system and places the burden for overcoming those shortcomings on its students. “Even people who are politically liberal suddenly sound like right-wing talk-show hosts when the subject turns to children and education,” says Alfie Kohn. “‘The problem is with the individual.’ That is right-wing orthodoxy.” (It’s also Puritanical American faux-liberalism)

Maybe the reason we let ourselves become fixated on children’s emotional regulation is that we, the adults, feel our lives are out of control. We’ve lost faith in our ability to manage our own impulses around food, money, politics, and the distractions of modern life—and we’re putting that on our kids. (Neoteny is a fatal condition: no adults to apply common sense or critical thinking to stabilize social systems) “It’s a displacement of parental unease about the future and anxiety about the world in general,” says psychologist Wendy Mogel, author of The Blessing of a Skinned Knee. “I’m worried our kids are going to file the largest class-action suit in history, because we are stealing their childhoods. They’re like caged animals or Turkish children forced to sew rugs until they go blind. We’re suppressing their natural messy existence.” (OMG!) 

I do worry about my little Sarah Silverman. She’s frenetic and disinhibited. My life would be easier if she liked to comply. But we did not send her to O.T. Parents make judgment calls about interventions all the time. What’s worth treating: a prominent birthmark? A girl with early puberty? Social and behavioral issues can be especially tricky, as diagnosing comes close to essentializing: It’s not your fault that you’re acting this way, honey. It’s just who you are. As one mother told me: “The insidious part is, you can start losing faith in your child. You go down this road …” Your child’s teacher tells you your child is not showing appropriate emotional regulation. You’re directed toward psychological evaluations and therapists. They have a hammer. Your kid becomes the nail. “The saddest, most soul-crushing thing is the negative self-image. We think kids don’t understand what’s happening, but they do. There’s this quiet reinforcement that something is wrong with them. That’s the thing that’ll kill.”

___________________________________________________________________________________________

Okay, so parents exist who realize the terrible situation in American schools: the damage being done to their children, the injustice of an out-of-control social-psychology monster taking over our schools and families. And yet there is a passive attitude behind their lackluster complaints – a lack of the proper adult anger and action that is instinctual in parents. Instead, there is a willingness to sacrifice their child’s well-being to the social order, in some measure out of concern for their own social status.

The natural adult response is to protect one’s child above all other considerations; it’s instinctual. That’s the price of neoteny: failure to act. 

 

Sunday Statement / Religion is…

Relationships do not give people everything they expect: the gap is filled in by God, or Jesus, or material wealth – the fantasy friends who will give you everything you want.

Religion is, in fact, a statement of disappointment in what other human beings can provide.

Nature provides what other humans cannot. Reject nature, and starve.

Wild Children / Folklore, fairy tales, mythic living

http://windling.typepad.com/blog/2013/06/into-the-woods-9-wild-children.html

The original post has a beautiful array of illustrations…

Notes from a Dartmoor studio on folklore, fairy tales, fantasy, mythic arts & mythic living 

by Terri Windling

Into the Woods, 10: Wild Children

Today I’m on the trail of the Wild Children of myth, lore, and fantasy: children lost in the forest, abandoned, stolen, reared by wild animals, and those for whom wilderness is their natural element and home.

Tales of babies left in the woods (and other forms of wilderness) can be found in the myths, legends, and sacred texts of cultures all around the globe. The infant is usually of noble birth, abandoned and left to certain death in order to thwart a prophecy — but fate intervenes, the child survives and is raised by wild animals, or by humans who live on the margins of the wild: shepherds, woodsmen, gamekeepers, and the like. When the child grows up, his or her true identity is revealed and the prophecy is fulfilled.

In Persian legends surrounding Cyrus the Great, for example, it is prophesied at his birth that he will grow up to take the crown of his grandfather, the King of Media. The king orders the baby killed and Cyrus is left on a wild mountainside, where he’s rescued either by the royal herdsman or a bandit (depending on the version of the tale) and raised in safety. He grows up, learns his true parentage, and not only captures the Median throne but goes on to conquer most of central and southeast Asia.

In Assyrian myth, a fish-goddess falls in love with a beautiful young man, gives birth to a half-mortal daughter, abandons the child in the wilderness, and then kills herself in shame. The baby is fed by doves and survives to be found and raised by a royal shepherd…and grows up to become Semiramis, the great Warrior Queen of Assyria.

In Greek myth, Paris, the son of King Priam, is born under a prophecy that he will one day cause the downfall of Troy. The baby is left on the side of Mount Ida, but he’s suckled by a bear and manages to live — growing up to fall in love with Helen of Troy and spark the Trojan War.

From Roman myth comes one of the most famous babes-in-the-wood stories of all, the legend of Remus and Romulus. Numitor, the good King of Alba Longa, is overthrown by Amulius, his wicked brother, and Numitor’s daughter is forced to become a Vestal Virgin in order to end his line. Though locked in a temple, the girl becomes pregnant (with the help of Mars, the god of war) and gives birth to a beautiful pair of sons: Remus and Romulus. Amulius has the twins exposed on the banks of the Tiber, expecting them to perish; instead, they are suckled and fed by a wolf and a woodpecker, and survive in the woods. Adopted by a shepherd and his wife, they grow up into noble, courageous young men and discover their true heritage — whereupon they overthrow their great-uncle, restore their grandfather to his throne, and, just for good measure, go on to found the city of Rome.

In Savage Girls and Wild Boys: A History of Feral Children, Michael Newton delves into the mythic symbolism inherent in the moment when abandoned children are saved by birds or animals. “Restorations and substitutions are at the very heart of the Romulus and Remus story,” he writes; “brothers take the rightful place of others, foster parents bring up other people’s children, the god Mars stands in for a human suitor. Yet the crucial substitution occurs when the she-wolf saves the lost children. In that moment, when the infants’ lips close upon the she-wolf’s teats, a transgressive mercy removes the harmful influence of a murderous culture. The moment is a second birth: where death is expected, succor is given, and the children are miraculously born into the order of nature. Nature’s mercy admonishes humanity’s unnatural cruelty: only a miracle of kindness can restore the imbalance created by human iniquity.” 

In myth, when we’re presented with children orphaned, abandoned, or raised by animals, it’s generally a sign that their true parentage is a remarkable one and they’ll grow up to be great leaders, warriors, seers, magicians, or shamans. As they grow, their beauty, or physical prowess or magical abilities betray a lineage that cannot be hidden by their humble upbringing. (Rarely do we encounter a mythic hero whose origins are truly low; at least one parent must be revealed as noble, supernatural, or divine.) After a birth trauma and a miraculous survival always comes a span of time symbolically described as “exile in the wilderness,” where they hone their skills, test their mettle, and gather their armies, their allies, or their magic, before returning (as they always do) to the world that is their birthright.

When we turn to folk tales and fairy tales, however, although we also find stories of children abandoned in the wild and befriended by animals, the tone and intent of such tales is markedly different. Here, we’re not concerned with the affairs of the gods or with heroes who conquer continents — for folk tales in the Western tradition, unlike myths and hero epics, were passed through the centuries primarily by storytellers of lower classes (usually women), and tended to be focused on themes more relevant to ordinary people. Abandoned children in fairy tales (like Hansel and Gretel, Little Thumbling, or the broommaker’s twins in The Two Brothers) aren’t destined for greatness or infamy; they are exactly what they appear to be: the children of cruel or feckless parents. Such parents exist, they have always existed, and fairy tales  did not gloss over these dark facts of life. Indeed, they confronted them squarely. The heroism of such children lies not in the recovery of a noble lineage but in the ability to survive and transform their fate — and to outwit those who would do them harm without losing their lives, their souls, or their humanity in the process.

Children also journey to the forest of their own accord, but usually in response to the actions of adults: they enter the woods at a parent’s behest (Little Red Riding Hood), or because they’re not truly wanted at home (Hans My Hedgehog), or in order to flee a wicked parent, step-parent, or guardian (Seven Swans, Snow White and Brother & Sister). Disruption of a safe, secure home life often comes in the form a parent’s remarriage: the child’s mother has died and a heartless, jealous step-mother has taken her place. The evil step-mother is so common in fairy tales that she has become an iconic figure (to the bane of real step-mothers everywhere), and her history in the fairy tale canon is an interesting one. In some tales, she didn’t originally exist. The murderous queen of Snow White, for example, was the girl’s own mother in the oldest versions of the story (the Brothers Grimm changed her into a step-parent in the 19th century) — whereas other stories, such as Cinderella and The Juniper Tree, have featured second wives since their earliest known tellings.

Some scholars who view fairy tales in psychological terms believe that the “good mother” and “bad step-mother” symbolize two sides of a child’s own mother: the part they love and the part they hate. Casting the “bad mother” as a separate figure, they say, allows the child to more safely identify such socially unacceptable feelings. While this may be true, it ignores the fact that fairy tales were not originally stories specially intended for children. And, as Marina Warner points out (in From the Beast to the Blonde), this “leeches the history out of fairy tales. Fairy or wonder tales, however farfetched the incidents they include, or fantastic the enchantments they concoct, take on the color of the actual circumstance in which they were or are told. While certain structural elements remain, variant versions of the same story often reveal the particular conditions of the society in which it is told and retold in this form. The absent mother can be read as literally that: a feature of the family before our modern era, when death in childbirth was the most common cause of female mortality, and surviving orphans would find themselves brought up by their mother’s successor.”

We rarely find step-fathers in fairy tales, wicked or otherwise, but the fathers themselves can be treacherous. In stories like Donkeyskin, Allerleirauh, and The Handless Maiden, for example, it is a cowardly, cruel, or incestuous father who forces his daughter to flee to the wild. Even those fathers portrayed more sympathetically as the dupes of their black-hearted wives are still somewhat suspect: they are happy at the story’s end to have their children return unscathed, but are curiously powerless or unwilling to protect them in the first place. Though the father is largely absent from tales such as Cinderella, The Seven Swans, and Snow White, the shadow he casts over them is a large one. He is, as Angela Carter has pointed out,  “the unmoved mover, the unseen organizing principle. Without the absent father there would have been no story because there would have been no conflict.”

Family upheaval has another function in these tales, beyond reflecting real issues encountered in life: it propels young heroes out of their homes, away from all that is safe and familiar; it forces them onto the unknown road to the dark of the forest. It’s a road that will lead, after certain tests and trials, to personal and worldly transformation, pushing the hero past childhood and pointing the way to a re-balanced life — symbolized by new prosperity, or a family home that has been restored, or (for older youths) a wedding feast at the end of the tale. These young people are “wild” only for a time: it’s a liminal state, a rite-of-passage that moves the hero from one distinct phase of life to another. The forest, with all its wonders and terrors, is not the final destination. It is a place to hide, to be tested, to mature. To grow in strength, wisdom, and/or power. And to gain the tools needed to return to the human world and repair what’s been broken…or build anew.

In one set of folk tales, however, children who disappear into the woods do not often return: the “changeling” stories of babies (and older children)  stolen by faeries, goblins, and trolls. Why, we might ask, are the denizens of Faerie so interested in stealing the offspring of mortals? Some faery lore suggests that the children are destined for lives as servants or slaves of the Faerie court; or that they are kept, in the manner of pets, for the amusement of their faery masters. Other stories and ballads (Tam Lin, for example) speak of a darker purpose: that the faeries must pay a tithe of blood to the Devil every seven years, and prefer to pay with mortal blood rather than blood of their own. In other traditions, however, it’s simply the beauty of the children that attracts the faeries, who are also known to kidnap pretty young men and women, artists, poets, and musicians.

The ability of faeries to procreate is a debatable issue in faery lore. Some stories maintain that the faeries do procreate, though not as often as humans. By occasionally interbreeding with mortals and claiming mortal babes as their own, they bring new blood into the Faerie Realm and keep their bloodlines strong. Other tales suggest that they cannot breed, or do so with such rarity that jealousy of human fertility is the motive behind child-theft.

Some stolen children, the tales tell us, will spend their whole lives in the Faerie Realm, and may even find happiness there, losing all desire for the lands of men. Other tales tell us that human children cannot thrive in the otherworld, and eventually sicken and die for want of mortal food and drink. Some faeries maintain their interest in child captives only during their infancy, tossing the children out of the Faerie Realm when they show signs of age. Such children, restored to the human world, are not always happy among their own kind, and spend their mortal lives pining for a way to return to Faerie.

Another type of story that comes from the deep, dark forest is the Feral Child tale, found in the shadow realm that lies between legend and fact.  There have been a number of cases throughout history of young children discovered living in the wild, a few of which have been documented to a greater or lesser degree. Generally, these seem to be children who have been abandoned or fled abusive homes, often at such a young age that they’ve ceased to remember any other way of life. Attempts to “civilize” these children, to teach them language, and to curb their animal-like behaviors, are rarely entirely successful — which leads to all sorts of questions about what it is that shapes human culturalization as we know it.

One of the most famous of these children was Victor, the Wild Boy of Aveyron, discovered on a mountainside in France in the early 19th century. His teacher, Jean-Marc Gaspard Itard, wrote an extraordinary account of his six years with the boy — a document which inspired François Truffaut’s film The Wild Child, and Mordicai Gerstein’s wonderful novel The Wild Boy. In an essay for The Horn Book, Gerstein wrote: “Itard’s reports not only provide the best documentation we have of a feral child, but also one of the most thoughtful, beautifully written, and moving accounts of a teacher-pupil relationship, which has as its object nothing less than learning to be a human being (or at least what Itard, as a man of his time, thought a human being to be)…. Itard’s ambition to have Victor speak ultimately failed, but even if he had succeeded, he could never know Victor better or be more truly, deeply engaged with him than those evenings, early on, when they sat together as Victor loved to, with the boy’s face buried in the man’s hands. But the more Itard taught Victor, the more civilized he became, the more the distance between them grew.” (You’ll find Gerstein’s full essay here; scroll to the bottom of the page.)

In India in the 1920s two small girls were discovered living in the wild among a pack of wolves. They were captured (their “wolf mother” shot) and taken into an orphanage run by a missionary, Reverend Joseph Singh. Singh attempted to teach the girls to speak, walk upright, and behave like humans, not as wolves — with limited success.  His diaries make for fascinating (and horrifying) reading. Several works of fiction were inspired by this story, but the ones I particularly recommend are Children of the Wolf, a poignant children’s novel by Jane Yolen, and “St. Lucy’s Home for Girls Raised by Wolves,” a wonderful short story by Karen Russell (published in her collection of the same title). Also, Second Nature by Alice Hoffman is an excellent contemporary novel on the Feral Child theme.

More recently, in 1996, an urban Feral Child was discovered living with a pack of dogs on the streets of Moscow. He resisted capture until the police finally separated the boy from his pack. “He had been living on the street for two years,” writes Michael Newton. “Yet, as he had spent four years with a human family [before this], he could talk perfectly well. After a brief spell in a Reutov children’s shelter, Ivan started school. He appears to be just like any other Moscow child. Yet it is said that, at night, he still dreams of dogs.”

When we read about such things as adults and parents, the thought of a child with no family but wolves or dogs is a deeply disturbing one… but when we read from a child’s point of view, there is something secretly thrilling about the idea of life lived among an animal pack, or shedding the strictures of civilization to head into the woods. In this, of course, lies the enduring appeal of stories like Rudyard Kipling’s The Jungle Book and Edgar Rice Burroughs’s Tarzan of the Apes. Explaining his youthful passion for such tales, Mordicai Gerstein writes: “The heart of my fantasy was leaving the human world for a kind of jungle Eden where all one needed was readily available and that had, in Kipling’s version, less hypocrisy, more nobility. I liked best the idea of being protected from potential enemies by powerful animal friends.”

And here we begin to approach another aspect of Wild Child (and Orphaned Hero) tales that makes them so alluring to many young readers: the idea that a parentless life in the wild might be a better, or a more exciting, one. For children with difficult childhoods, the appeal of running away to the forest is obvious: such stories provide escape, a vision of life beyond the confines of a troubled home. But even children from healthy families need fictional escape from time to time. In the wild, they can shed their usual roles (the eldest daughter, middle son, the baby of the family, etc.) and enter other realms in which they are solitary actors. Without adults to guide them (or, contrarily, to restrict them), these young heroes are thrown back, time and time again, on their own resources. They must think, speak, act for themselves. They have no parental safety net below. This can be a frightening prospect, but it is also a liberating one — for although there’s no one to catch them if they fall, there’s no one to scold them for it either.

J.M. Barrie addresses this theme, of course, in his much-loved children’s fantasy Peter Pan — which draws upon Scottish changeling legends, twisted into interesting new shapes. Barrie’s Peter is human-born, not a faery, but he’s lived in Never Land so long that he’s as much a faery as he is a boy: magical, capricious, and amoral. He’s a complex mixture of good and bad, with little understanding of the difference between them — both cruel and kind, thoughtless and generous, arrogant and tender-hearted, bloodthirsty and sentimental. This dual nature makes Peter Pan a classic trickster character, kin to Puck, Robin Goodfellow, and other delightful but exasperating sprites of faery lore: both faery and child, mortal and immortal, villain (when he lures children from their homes) and hero (when he rescues them from pirates).

Peter’s last name derives from the Greek god Pan, the son of the trickster god Hermes by a wood nymph of Arcadia. Pan is a creature of the wilderness, associated with vitality, virility, and ceaseless energy. Like Peter, the god Pan is a contradictory figure. He haunts solitary mountains and groves, where he’s quick to anger if he’s disturbed, but he also loves company, music, dancing, and riotous celebrations. He is the leader of a woodland band of satyrs — but these “Lost Boys” are a wilder crew than Peter’s, famed for drunkenness, licentiousness, and creating havoc (or “panic”). Pan himself is a distinctly lusty god — and here the comparison must end, for Peter’s wildness has no sexual edge. Indeed, it’s sex and the other mysteries of adulthood that he’s specifically determined to avoid. (“You mustn’t touch me. No one must ever touch me,” Peter tells Wendy.)

Although Peter Pan makes a brief appearance in Barrie’s 1902 novel The Little White Bird, his story as we know it now really began as a children’s play, which debuted on the London stage in 1904. The playscript was subsequently published under the title Peter Pan, or the Boy Who Wouldn’t Grow Up; and eventually Barrie novelized the story in the book Peter and Wendy. (It’s a wonderful read in Barrie’s original text, full of sharp black humor.) Peter and Wendy ends with a poignant scene that does not exist in the play: Peter comes back to Wendy’s window years later, and discovers she is all grown up. The little girl in the nursery now is Wendy’s own daughter, Jane. The girl examines Peter with interest, and soon she’s off to Never Land herself…where Wendy can no longer go, no matter how much she longs to follow.

The fairy tale forest, like Never Land, is not a place we are meant to remain, lest, like Peter or the children stolen by faeries, we become something not quite human. Young heroes return triumphant from the woods (trials completed, curses broken, siblings saved, pockets stuffed with treasure), but the blunt fact is that they must return. In the old tales, there is no sadness in this, no lingering, backward glance to the forest; the stories end “happily ever after” with the children restored to the human world. In this sense, the wild depths of the wood represent the realm of childhood itself, and the final destination is an adulthood rich in love, prosperity, and joy.  From Victorian times onward, however, a new note of regret creeps in at the end of the story. A theme that we find over and over again in Victorian fantasy literature is that magic and wonder are accessible only to children, lost on the threshold of adulthood. From Lewis Carroll’s “Alice” books to J. M. Barrie’s Peter Pan, these writers grieved that their wise young heroes would one day grow up and leave the woods behind.

Of course, many of us never do leave the woods behind: we return through the pages of magical books and we return in actuality, treasuring our interactions with the wild world through all the years of our lives. But that part of the forest specific to childhood does not truly belong to us now — and that’s exactly as it should be. Each generation bequeaths it to the next. Our job as adults, as I see it, is to protect that enchanted place by  preserving wilderness and stories both. Our job is to open the window at night and to watch from the shadows as Peter arrives; it’s our children’s turn to step over the sill. Our job is to teach them to navigate by the stars and to bless them on their way.

Barrie was wrong, by the way, for we adults have our own forms of magic too, and the wild wood still welcomes us. But it’s right, I think, that there should be a corner of it forever marked “Grown-ups, keep out!” Where children are heroes of their own stories, kings and queens of their own wild worlds.

Physical Education and Sport / Ancient Times to Enlightenment

EUROPEAN JOURNAL OF EDUCATIONAL RESEARCH / Vol. 2, No. 4, 191-202 / ISSN 2165-8714 Copyright © 2013 EUJER

“Bikini Girls” exercising, Sicily, 4th C. AD

https://files.eric.ed.gov/fulltext/EJ1086323.pdf

Harmandar Demirel & Yıldıran / Dumlupinar University, Gazi University, Turkey

(I’ve broken the text into shorter paragraphs for easier reading and omitted some introductory material. The complete pdf is about 8 pages. I’ve highlighted a few main ideas and vocabulary.)

My general comment is that American public education is essentially less “sophisticated” than even that of Ancient Greece and Rome; a disgrace, and “Medieval”…

An Overview from the Ancient Age to the Renaissance

The Greek educational ideal which emerged during the 8th – 6th centuries B.C. aimed at developing general fitness via “gymnastics” and the “music” of the body; that is, the development of body and spirit in a harmonic body and, in this way, providing a beautiful body, mental development and spiritual and moral hygiene. These are expressed by the word Kalokagathia, meaning both beautiful and good, based on the words “Kalos” and “Agathos” (Aytaç, 1980; Alpman, 1972). Thus, the use of physical training and sport as the most suitable means to this end was first discussed in Ancient Greece (Yildiran, 2005). To achieve the ideal of kalokagathia, three conditions were required: nobility, correct behaviour and careful teaching (Yildiran, 2011). Physical beauty (kalos) did not refer just to external appearance; it also referred to mental health. Humans who had these qualifications were considered ideal humans (kalokagathos) (Bohus, 1986). The idea of the Kalokagathia ideal, which was developed during the early classical age, refined and deepened the archaic-aristocratic high values (“arete”) (Popplow, 1972).

The vital point of aristocratic culture was physical training; in a sense, it was sport. The children were prepared for various sport competitions under the supervision of a paidotribes (a physical education teacher) and learned horse riding, discus and javelin throwing, long jumping, wrestling and boxing. The aim of the sport was to develop and strengthen the body, and hence, the character (Duruskken, 2001). In Ancient Greece, boys attended wrestling schools because it was believed that playing sports beautified the human spirit as well as the body (Balcı, 2008). The palaestra was a special building within ancient gymnasiums where wrestling and physical training were practiced (Saltuk, 1990). The education practiced in this era covered gymnastic training and music education, and its aim was to develop a heroic mentality, but only for royalty. With this goal in mind, education aimed to discipline the body, raising an agile warrior by developing a cheerful and brave spirit (Aytac, 1980).

The feasts which were held to worship the gods in Ancient Greece began for the purpose of ending civil wars. All sport-centred activities were of religious character. As the ancient Olympic Games were of religious origin, they were conducted in Olympia. (Named for Mount Olympus, home of the gods) Over time, running distances increased, new and different games were added to the schedule, soldiers began to use armour in warfare, art and philosophy were understood better and great interest was shown in the Olympic Games; therefore, the program was enriched and changed, and the competitions were increased from one to five days (Er et al., 2005). However, the active or passive attendance of married women was banned at the ancient Olympic Games for religious reasons (Memis and Yıldıran, 2011). The Olympic Games had an important function as one of the elements aimed at uniting the ancient Greeks culturally, but this ended when the games were banned by Emperor Theodosius 1st in 393-4 A.D. (Balci, 2008).

Sparta, which was located on the present-day Morea peninsula, was an agricultural state that had been formed by the immigration of Dorians from the 8th century B.C. Spartan education provided an extremely paternalistic education, which sought the complete submergence of the individual in the citizen and provided him with the attributes of courage, complete obedience and physical perfection (Cordasco, 1976). In Sparta, where the foundations of social order were iron discipline, military proficiency, strictness and absolute obedience, even the peaceful stages of life had the character of a “preparation for war” school (Aytac, 1980). The essential thing that made Hellenic culture important was that it gave new dimensions, with distinctive creative power, to cultural elements adopted from the ancient east, and that it revealed the concept of the “perfect human” (Iplikcioglu, 1997).

Children stayed with their family until they were seven years old; from this age, they were assigned to the state-operated training institutes where they were trained strictly in war and state tasks. Strengthening the body and preparing for war took a foremost place in accordance with the military character of the state. Girls were also given a strict military training (Aytac, 1980). The same training given to the boys was also given to the girls. The most prominent example of this is the girls and boys doing gymnastics together (Russel, 1969). Although physical training and music education were included, reading, writing and arithmetic were barely included in Spartan education (Binbasioglu, 1982).

Unlike Sparta, Athens had advanced trade and industry; the classical period of Athenian democracy included the Persian Wars and the Peloponnesian Wars, Cleisthenes’ democratic reforms, and the end of sea domination in domestic policy. As this democracy covered “the independent layer”, it took the form of an “aristocratic democracy” (Aytaç, 1980). Learning was given great importance in the Athenian democracy. The sons of independent citizens received education in grammar at home or at private school. Music education and gymnastic training were carried out in “Gymnasiums” and “Palestrae”, which were built and controlled by the state; running areas were called “Dromos”, and chariot race areas were termed “Hippodromes” (Aytac, 1980). Children older than 12 years started receiving sports training and music education in Athens, where military training was barely included.

Athenians insisted on the aesthetical and emotional aspects of education. Therefore, the best art works of the ancient world were created in this country (Binbasioglu, 1982). When, in the 5th century B.C., Greek education was unable to respond appropriately to new developments, the Sophists emphasised the development of traditional education in terms of language and rhetoric in an attempt to overcome the crisis. Sophists provided education in morals, law, and the natural sciences in addition to the trivium (grammar, rhetoric, dialectic) (Aytac, 1980).

Greeks considered physical training prudent and important because it developed the body and organised games conducive to the gathering of large crowds; in these games, all regions of Greece were represented (Balci, 2008). Rome constitutes the second most important civilisation of the Ancient age. In Rome, the family played the strongest role in education, and the state did not have much say or importance. While exercise constituted the means of education in Ancient Rome, the purpose of this education was “to raise a good citizen”, such that each person had a skilled, righteous and steady character. Physical training was provided in addition to courses such as mythology, history, geography, jurisprudence, arithmetic, geometry and philosophy; this training was provided in Grammar schools, where basic teaching covered the “Seven Liberal Arts” (Aytac, 1980).

Due to the Scholastic structure of the Middle Ages, values respecting the human being were forgotten. However, the “Renaissance” movement, which started in Europe and whose ideas inform the modern world, developed many theories related to education and physical training and attempted to apply them in various ways; the development of these ideas continued into “The Age of Enlightenment”.

The Renaissance: General Aspects of the Renaissance

The word renaissance means “rebirth”; in this period, artists and philosophers tried to rediscover and learn the standards of ancient Rome and Athens (Perry et al., 1989). In the main, the Renaissance represented a protest of individualism against authority in the intellectual and social aspects of life (Singer, 1960). To lovers of beauty, the Renaissance meant the development of a new art and imagination; from the perspective of a scientist, it represented the renewal of the ancient sciences, and from the perspective of a jurist, it was a light shining over the shambles of old traditions.

Human beings found their individuality again during this era, in which they tried to understand the basics of nature and developed a sense of justice and logic; yet the real meaning of “renaissance” was to be decent and kind to nature (Michelet, 1996). The Renaissance took shape in Italy from the 1350s onwards as a modern idea contradicting the Middle Ages. In a land with such formidable memories of Rome, the creation of a movement for returning to antiquity naturally seemed plausible (McNeill, 1985). The new ideas that flourished in the world of medieval art and developed through various factors did not arise by accident; events and thoughts developing in a social context strongly supported them (Turani, 2003). Reaching its climax around 1500, the Italian Renaissance constituted the peak of the movement: Leonardo da Vinci keenly observed the outside world, people and objects through his art, and Niccolò Machiavelli incisively analysed the nature and use of politics through his personal experiences and a survey of classical writers (McNeill, 1985).

The Concept of Education and Approaches to Physical Training during the Renaissance

The humanist education model, which accorded with the ideals of the Renaissance, was a many-sided, creative idea. Its goal was to create an all-round developed human being, the “homo universale”. At the same time, such an educational ideal necessarily acquired an aristocratic character, and it was no longer delivered to students at school (Aytac, 1980).

In the 14th century, the “humanist ideal of life” was proclaimed. The humanism movement gradually developed and spread; at this stage, however, a humanism-based system of education or practice did not yet exist. In the history of humanity, the humanist period has been acknowledged as a “transitional period”; modern civilisation and education are based on it. Philosophers such as Erasmus, Rabelais, Montaigne and Luther flourished during this period. Universities began to multiply, and freedom of thought emerged. Scholastic thought had already been shaken to its foundations at the beginning of this period through the influence of the 13th-century scientist Roger Bacon.

The original forms of the works constituting the culture of ancient Athens and Rome were found, read, and imitated; moreover, the ideas of tolerant ancient educators such as Quintilian were put into practice. In teaching methods, formulae enabling pupils to improve their skills and abilities were adopted. Students began to learn outdoors, in touch with nature. Strict disciplinary methods gave way to more tolerant ones, and the importance and value of professional education were acknowledged (Binbasioglu, 1982). The positive sciences, such as history, geography and natural history, were not given a place in the classroom for a long time, while Latin preserved its place until recent times (Aytac, 1980).

With Desiderius Erasmus, who lived during the height of European humanism, humanism adopted its first scientific principle: “Return to the sources!”; for this reason, the works of ancient writers were published. Erasmus’ educational ideal was a humanist-scientific one, yet it did not exclude the moral-religious way of life. Having worked to raise humanity to a higher level, Erasmus summarised the conditions for this quest as follows: good teachers, a useful curriculum, good pedagogical methods, and attention to personal differences among pupils. With these ideas, Erasmus represents the height of German humanist pedagogy (Aytac, 1980).

Notice the antagonistic set-up between faith and science that we still experience today in the U.S.?

On the other hand, Martin Luther regarded universities as institutions where “all kinds of iniquity took place, there was little faith in sacred values, and the profane master Aristotle was taught imprudently”, and he demanded that schools and especially universities be inspected. Luther thought that schools and universities should educate religiously inclined youth in a manner heavily dependent on the Christian religion (Aytac, 1980). Alongside these ideas, Luther spoke of the benefits to health of chivalric games and training, and of wrestling and jumping, which in his opinion could make the body more fit (Alpman, 1972).

The French philosopher Michel de Montaigne, known for his Essays, was a lover of literature who avoided any kind of extreme and was determined, careful and balanced. In his opinion, the aim of education was to transfer “ethical and scientific knowledge via experiments” to pupils. Montaigne believed that a person’s skills and abilities, which may be called natural powers, matter more in education than, and are even superior to, logic and society (Binbasioglu, 1982). The humanist movement played a very significant role in educational matters: it flourished in order to revive the art and culture of ancient Athens and Rome in their most admirable aspects, thereby enabling body and soul to be developed together through the education of human beings (Alpman, 1972). Humanism was not a philosophical system but a cultural and educational program (Kristeller, 1961).

Note that in the United States, current public education is obsessed with “social engineering” based on two religious ideologies: (1) liberal / puritanical – social and psychological theory-based; conformity to prescriptive “absolutes” of human behavior; (2) evangelical – anti-science, faith-based denial of reality; socio-emotional fervor. These competing religious systems have replaced a brief period of “humanist” academic emphasis; the arts and physical education have been jettisoned, supposedly due to “budget” limitations… but this elimination of “expressions of individual human value” is a choice made by parents and educators to “ban” secular ideals from education.

The necessity of physical training alongside the education of soul and mind was emphasised; for this reason, physical practices and games were recommended for young people. It is possible to trace how the humanists laid the foundations of the Renaissance from the 14th century to the 18th, working from Italy outwards to Spain, Germany, France and England. Almost all of the humanists stressed the significance of physical training in their written works on education (Alpman, 1972).

One of the humanists, Vittorino da Feltre, viewed raising a group of young people as the most pleasant goal of his life, and he fed and educated poor but talented children in his own home (Burckhardt, 1974). Feltre practised a classical education in his school, called the “Joyful Residence”. In accord with ancient Greek concepts of education, he claimed that the education of body and soul was furthered by daily exercises such as swimming, riding and swordplay, and by generating love of nature through hiking; he also emphasised the importance of games and tournaments (Alpman, 1972; Aytac, 1980). Enea Silvio de Piccolomini is also worthy of attention: alongside his religious character, he thought that physical training should be emphasised and that beauty and strength should be cultivated in this way (Alpman, 1972). Piccolomini drew attention to the importance of education as a basis for body and soul, while stressing the importance of avoiding things – games and resting – that cause laxity (Aytac, 1980). Juan Luis Vives, a systematic philosopher of wide influence, in one of his most significant works, “De Tradendis Disciplinis” (1531), advised such practices as competitive ball games, hiking, jogging, wrestling and “braggartism”, beginning from the age of 15 (Alpman, 1972).

The German humanist Joachim Camerarius, who directed the academic gymnasium in the city of Nürnberg, is also very important in this connection. Having practised systematic physical training at the school in which he worked, Camerarius wrote “Dialogus de Gymnasiis”, a work that refers to the pedagogical and ethical values of Greek gymnastics. In it he described such practices as climbing, jogging, wrestling, swordplay, jumping, stone throwing and games, performed by specially selected children according to their ages and physical abilities, all under the supervision of experienced teachers (Alpman, 1972). The Italian Hieronymus Mercurialis’ De Arte Gymnastica, first published in Latin in Venice in 1569, contained very little on the Olympic Games; indeed, the author was hostile to the idea of competitive athletics. The Frenchman Petrus Faber’s Agonisticon (1592), in its 360 pages of Latin text, brought together in one place many ancient texts concerning the Olympics but was disorganised, repetitive and often unclear (Lee, 2003). The first part of De Arte Gymnastica defined ancient Greek gymnastics and explained its terminology, whereas the second part contained precautions about the potential harms of exercises practised without a doctor’s supervision. Moreover, Mercurialis separated gymnastics practised for health reasons from military gymnastics (Alpman, 1972).

Note the military requirement for its personnel to be “physically fit”, compared to the general U.S. population (including children), which is chronically obese, sedentary and unhealthy. “Being physically fit” (or at least the appearance of it) is now a status symbol of the wealthy classes and social celebrities, requiring personal trainers, expensive spa and gym facilities, and high-tech gadgets and equipment.

The Transition to the Age of Enlightenment: Reformation, Counter-reformation and the Age of Method

The Age of Reformation: The most significant feature of European cultural life during this age was the dominant role played by religious issues, unlike the Renaissance in Italy (McNeill, 1985). This age symbolises the uprising of less civilised societies against logic-dominated Italy (Russell, 2002). Bearing a different character from the Renaissance and Humanism, the Reformation stressed improvements not in modern art or science but in politics and the Church; consonant with this, its educational ideal emphasised being religious and dependent on the Church. Nevertheless, both Humanism and the Reformation struggled against medieval scholasticism, and both appreciated the value of human beings (Aytac, 1980).

The Counter-reformation Movement: In this period, which encompasses the Catholic Church’s effort to retake the privileges it had lost to the Reformation, the “Jesuit Order” was founded to preach, hear confession and, through teaching activities, gather “perverted minds” once again under the roof of the Catholic Church (Aytac, 1980).

The Age of Method: Also known as the Age of Practice, this period saw efforts to free people from prejudice; principles for religion, ethics, law and the state were sought in order to provide systematic knowledge within a logic-based construction. Aesthetic approaches to education, which had been ignored by religion and the Church because of the attitudes prevailing during the Reformation and Counter-reformation, were given fresh emphasis. Bacon, Locke, Ratke, Descartes and Comenius (Komensky) are among the famous philosophers who lived during this period (Aytac, 1980).

The Age of Enlightenment: General Features and Educational Concepts of the Enlightenment

The Enlightenment made itself felt roughly between 1680 and 1770, or even 1780. Science developed into separate disciplines, literature became an independent subject, and it was demanded that history also become independent (Chaunu, 2000). During this period, educators transformed the concept of education from preparing students for the afterlife into preparing them for the world around them, so that they could be free and enlightened.

Moreover, the educators of the period were usually optimistic and stressed the importance of study and work. At school, students were educated in such a way as to instil a love of nature and of human beings; on the basis of these ideas, learning proceeded by experiment and experience (Binbasioglu, 1982). At the end of the 16th century, William Shakespeare mentioned the concept of “fair play” and the ideas of maintaining equality of opportunity and displaying a chivalrous style of thinking; by the 18th century, these ideas had been incorporated into sport (Gillmeister, 1988). The systematic changes in the foundations of the principles of fair play that occurred in the 19th century were directly related to the socio-cultural structure of Victorian England (Yildiran, 1992).

The Concept of Physical Training during the Enlightenment and Its Pioneers

The ideas and ideals produced prior to this period were finally put into practice during it. The respected educators of the period stressed the significance of physical training, which had appealed only to the aristocracy during the Renaissance; emulating the education system of the ancient age, they began to address people of all classes, and their views spread accordingly.

John Locke: The Enlightenment reached maturity during the mid- to late eighteenth century. John Locke, a leading figure in this new intellectual movement (Faiella, 2006), was probably the most popular political philosopher of the first part of the 18th century, and he stressed the necessity of education (Perry et al., 1989). Locke’s “Essay Concerning Human Understanding” is acknowledged as his most prominent and popular work (Russell, 2002). His “Some Thoughts Concerning Education” stressed the importance of children’s health and advised that children learn to swim and maintain their fitness. Moreover, Locke noted that such activities as dance, swordplay and riding were essential for a gentleman (Alpman, 1972) and that education should be infused with play (Binbasioglu, 1982).

Jean-Jacques Rousseau: In his work Emile, the philosopher from Geneva discussed educational matters according to the principles of nature (Russell, 2002). In this work, written in 1762, Rousseau argued that individuals should learn from nature, from human beings or from objects (Perry et al., 1989), and expressed his ideas concerning the education of children and teenagers (Binbasioglu, 1982). Rousseau held that children should be allowed to develop and learn according to their natural inclinations, though in Emile this goal is achieved by a tutor who cunningly manipulates his pupil’s responses (Damrosch, 2007). This kind of education was termed the “natural education” of the public, or “education which will create natural human beings” (Aytac, 1980). Emile exercised early in the morning because he needed strength, and because a strong body was the basic requirement for a healthy soul. Running barefoot, jumping high, and climbing walls and trees, Emile mastered such skills as jogging, swimming, stone throwing, archery and ball games. Rousseau demanded that every school have a gymnasium or an area for training (Alpman, 1972).

Continued next post. Time to watch the Olympics!