Philosophy of Childhood / Stanford

I’m presenting this as a review of where many of our ideas about children, childhood, and “who has rights and who doesn’t” originate: in human thought and ideas (brains), that is, as a consequence of poor reasoning, prejudice, personal bias, and thoughtful consideration; by means of accurate and faulty observation, careless assumption, and even (rarely) clever insight; and not in universal law, in a pre-existing supernatural realm, or in a realm of magical authority.

What we see, again, is the lack of coherence between modern Western social-psychological-cultural theory and biological reality.

https://plato.stanford.edu/entries/childhood/

From: Stanford Encyclopedia of Philosophy

The Philosophy of Childhood

The philosophy of childhood has recently come to be recognized as an area of inquiry analogous to the philosophy of science, the philosophy of history, the philosophy of religion, and the many other “philosophy of” subjects that are already considered legitimate areas of philosophical study. In addition, philosophical study of related topics (such as parental rights, duties and responsibilities) has flourished in recent years. The philosophy of childhood takes up philosophically interesting questions about childhood, changing conceptions over time about childhood and attitudes toward children; theories of cognitive and moral development; children’s interests and children’s rights, the goods of childhood; children and autonomy; the moral status of children and the place of children in society. As an academic subject, the philosophy of childhood has sometimes been included within the philosophy of education (e.g., Siegel, 2009). Recently, however, philosophers have begun to offer college and university courses specifically in the philosophy of childhood. And philosophical literature on childhood, parenting and families is increasing in both quantity and quality.

1. What is a Child?

Almost single-handedly, Philippe Ariès, in his influential book, Centuries of Childhood (Ariès, 1962), made the reading public aware that conceptions of childhood have varied across the centuries. The very notion of a child, we now realize, is both historically and culturally conditioned. But exactly how the conception of childhood has changed historically and how conceptions differ across cultures is a matter of scholarly controversy and philosophical interest (see Kennedy, 2006). Thus Ariès argued, partly on the evidence of depictions of infants in medieval art, that the medievals thought of children as simply “little adults.” Shulamith Shahar (1990), by contrast, finds evidence that some medieval thinkers understood childhood to be divided into fairly well-defined stages. And, whereas Piaget claims that his subjects, Swiss children in the first half of the 20th Century, were animistic in their thinking (Piaget, 1929), Margaret Mead (1967) presents evidence that Pacific island children were not.

One reason for being skeptical about any claim of radical discontinuity—at least in Western conceptions of childhood—arises from the fact that, even today, the dominant view of children embodies what we might call a broadly “Aristotelian conception” of childhood. According to Aristotle, there are four sorts of causality, one of which is Final Causality and another Formal Causality. Aristotle thinks of the Final Cause of a living organism as the function that organism normally performs when it reaches maturity. He thinks of the Formal Cause of the organism as the form or structure it normally has in maturity, where that form or structure is thought to enable the organism to perform its functions well. According to this conception, a human child is an immature specimen of the organism type, human, which, by nature, has the potentiality to develop into a mature specimen with the structure, form, and function of a normal or standard adult.

Many adults today have this broadly Aristotelian conception of childhood without having actually read any of Aristotle. It informs their understanding of their own relationship toward the children around them. Thus they consider the fundamental responsibility they bear toward their children to be the obligation to provide the kind of supportive environment those children need to develop into normal adults, with the biological and psychological structures in place needed to perform the functions we assume that normal, standard adults can perform.

Two modifications of this Aristotelian conception have been particularly influential in the last century and a half. One is the 19th century idea that ontogeny recapitulates phylogeny (Gould, 1977), that is, that the development of an individual recapitulates the history and evolutionary development of the race, or species (Spock, 1968, 229). This idea is prominent in Freud (1950) and in the early writings of Jean Piaget (see, e.g. Piaget, 1933). Piaget, however, sought in his later writings to explain the phenomenon of recapitulation by appeal to general principles of structural change in cognitive development (see, e.g., Piaget, 1968, 27).

The other modification is the idea that development takes place in age-related stages of clearly identifiable structural change. This idea can be traced back to ancient thinkers, for example the Stoics (Turner and Matthews, 1998, 49). Stage theory is to be found in various medieval writers (Shahar, 1990, 21–31) and, in the modern period, most prominently in Jean-Jacques Rousseau’s highly influential work, Emile (1979). But it is Piaget who first developed a highly sophisticated version of stage theory and made it the dominant paradigm for conceiving childhood in the latter part of the 20th Century (see, e.g., Piaget, 1971).

Matthews (2008, 2009) argues that a Piagetian-type stage theory of development tends to support a “deficit conception” of childhood, according to which the nature of the child is understood primarily as a configuration of deficits—missing capacities that normal adults have but children lack. This conception, he argues, ignores or undervalues the fact that children are, for example, better able to learn a second language, or paint an aesthetically worthwhile picture, or conceive a philosophically interesting question, than those same children will likely be able to as adults. Moreover, it restricts the range and value of relationships adults think they can have with their children.

Broadly Aristotelian conceptions of childhood can have two further problematic features. They may deflect attention away from thinking about children with disabilities in favour of theorizing solely about normally developing children (see Carlson 2010), and they may distract philosophers from attending to the goods of childhood when they think about the responsibilities adults have towards the children in their care, encouraging focus only on care required to ensure that children develop adult capacities.

How childhood is conceived is crucial for almost all the philosophically interesting questions about children. It is also crucial for questions about what should be the legal status of children in society, as well as for the study of children in psychology, anthropology, sociology, and many other fields.

2. Theories of Cognitive Development

Any well-worked out epistemology will provide at least the materials for a theory of cognitive development in childhood. Thus according to René Descartes a clear and distinct knowledge of the world can be constructed from resources innate to the human mind (Descartes, PW, 131). John Locke, by contrast, maintains that the human mind begins as a “white paper, void of all characters, without any ideas” (Locke, EHC, 121). On this view all the “materials of reason and knowledge” come from experience. Locke’s denial of the doctrine of innate ideas was, no doubt, directed specifically at Descartes and the Cartesians. But it also implies a rejection of the Platonic doctrine that learning is a recollection of previously known Forms. Few theorists of cognitive development today find either the extreme empiricism of Locke or the strong innatism of Plato or Descartes completely acceptable.

Behaviorism has offered recent theorists of cognitive development a way to be strongly empiricist without appealing to Locke’s inner theater of the mind. The behaviorist program was, however, dealt a major setback when Noam Chomsky, in his review (1959) of Skinner’s Verbal Behavior (1957), argued successfully that no purely behaviorist account of language-learning is possible. Chomsky’s alternative, a theory of Universal Grammar, which owes some of its inspiration to Plato and Descartes, has made the idea of innate language structures, and perhaps other cognitive structures as well, seem a viable alternative to a more purely empiricist conception of cognitive development.

It is, however, the work of Jean Piaget that has been most influential on the way psychologists, educators, and even philosophers have come to think about the cognitive development of children. Piaget’s early work, The Child’s Conception of the World (1929), makes especially clear how philosophically challenging the work of a developmental psychologist can be. Although his project is always to lay out identifiable stages in which children come to understand what, say, causality or thinking or whatever is, the intelligibility of his account presupposes that there are satisfactory responses to the philosophical quandaries that topics like causality, thinking, and life raise.

Take the concept of life. According to Piaget, this concept is acquired in four stages (Piaget, 1929, Chapter 6):

  • First Stage: Life is assimilated to activity in general

  • Second Stage: Life is assimilated to movement

  • Third Stage: Life is assimilated to spontaneous movement

  • Fourth Stage: Life is restricted to animals and plants

These distinctions are suggestive, but they invite much more discussion than Piaget elicits from his child subjects. What is required for movement to be spontaneous? Is a bear alive during hibernation? We may suppose the Venus flytrap moves spontaneously. But does it really? What about other plants? And then there is the question of what Piaget can mean by calling the thinking of young children “animistic,” if, at their stage of cognitive development, their idea of life is simply “assimilated to activity in general.”

Donaldson (1978) offers a psychological critique of Piaget on cognitive development. A philosophical critique of Piaget’s work on cognitive development is to be found in Chapters 3 and 4 of Matthews (1994). Interesting post-Piagetian work in cognitive development includes Carey (1985), Wellman (1990), Flavell (1995), Subbotsky (1996), and Gelman (2003).

Recent psychological research on concept formation has suggested that children do not generally form concepts by learning necessary and sufficient conditions for their application, but rather by coming to use prototypical examples as reference guides. Thus a robin (rather, of course, than a penguin) might be the child’s prototype for ‘bird’. The child, like the adult, might then be credited with having the concept, bird, without the child’s ever being able to specify, successfully, necessary and sufficient conditions for something to count as a bird. This finding seems to have implications for the proper role and importance of conceptual analysis in philosophy. It is also a case in which we should let what we come to know about cognitive development in children help shape our epistemology, rather than counting on our antecedently formulated epistemology to shape our conception of cognitive development in children (see Rosch and Lloyd, 1978, and Gelman, 2003).
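
(A toy illustration, not from the Stanford entry: the contrast just described can be sketched in a few lines of code. The features, the robin prototype, and the 0.5 threshold below are all invented for illustration; the point is only that definition-based categorization is all-or-nothing, while prototype-based categorization is graded.)

    # Toy contrast: classical (necessary-and-sufficient) vs. prototype-based concepts.
    # Features, prototype, and cutoff are invented for illustration only.

    ROBIN_PROTOTYPE = {"has_feathers": 1, "lays_eggs": 1, "flies": 1, "sings": 1}

    def is_bird_classical(animal):
        # Classical view: membership requires satisfying a strict definition.
        return bool(animal.get("has_feathers") and animal.get("lays_eggs") and animal.get("flies"))

    def bird_typicality(animal):
        # Prototype view: membership is graded similarity to a typical exemplar.
        matches = sum(1 for k, v in ROBIN_PROTOTYPE.items() if animal.get(k, 0) == v)
        return matches / len(ROBIN_PROTOTYPE)

    penguin = {"has_feathers": 1, "lays_eggs": 1, "flies": 0, "sings": 0}
    print(is_bird_classical(penguin))  # False: a flight-requiring definition wrongly excludes penguins
    print(bird_typicality(penguin))    # 0.5: a bird, but an atypical one (a robin scores 1.0)

On this picture, a child who calls a penguin a “funny bird” is not misapplying a definition but registering low typicality, which is just what the prototype account predicts.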

Some developmental psychologists have recently moved away from the idea that children are to be understood primarily as human beings who lack the capacities adults of their species normally have. This change is striking in, for example, the work of Alison Gopnik, who writes: “Children aren’t just defective adults, primitive grownups gradually attaining our perfection and complexity. Instead, children and adults are different forms of homo sapiens. They have very different, though equally complex and powerful, minds, brains, and forms of consciousness, designed to serve different evolutionary functions” (Gopnik, 2009, 9). Part of this new respect for the capacities of children rests on neuroscience and an increased appreciation for the complexity of the brains of infants and young children. Thus Gopnik writes: “Babies’ brains are actually more highly connected than adult brains; more neural pathways are available to babies than adults.” (11)

3. Theories of Moral Development

Many philosophers in the history of ethics have devoted serious attention to the issue of moral development. Thus Plato, for example, offers a model curriculum in his dialogue, Republic, aimed at developing virtue in rulers. Aristotle’s account of the logical structure of the virtues in his Nicomachean Ethics provides a scaffolding for understanding how moral development takes place. And the Stoics (Turner and Matthews, 1998, 45–64) devoted special attention to the dynamics of moral development.

Among modern philosophers, it is again Rousseau (1979) who devotes the most attention to issues of development. He offers a sequence of five age-related stages through which a person must pass to reach moral maturity: (i) infancy (birth to age 2); (ii) the age of sensation (3 to 12); (iii) the age of ideas (13 to puberty); (iv) the age of sentiment (puberty to age 20); and (v) the age of marriage and social responsibility (age 21 on). Although he allows that an adult may effectively modify the behavior of children by explaining that bad actions are those that will bring punishment (90), he insists that genuinely moral reasoning will not be appreciated until the age of ideas, at 13 and older. In keeping with his stage theory of moral development he explicitly rejects Locke’s maxim, ‘Reason with children,’ (Locke, 1971) on the ground that attempting to reason with a child younger than thirteen years of age is developmentally inappropriate.

However, the cognitive theory of moral development formulated by Piaget in The Moral Judgment of the Child (1965) and the somewhat later theory of Lawrence Kohlberg (1981, 1984) are the ones that have had most influence on psychologists, educators, and even philosophers. Thus, for example, what John Rawls has to say about children in his classic work, A Theory of Justice (1971) rests heavily on the work of Piaget and Kohlberg.

Kohlberg presents a theory according to which morality develops in approximately six stages, though according to his research, few adults actually reach the fifth or sixth stages. In this respect Kohlberg’s theory departs from classic stage theory, as in Piaget, since the sequence of stages does not culminate in the capacity shared by normal adults. However, Kohlberg maintained that no one skips a stage or regresses to an earlier stage. Although Kohlberg sometimes considered the possibility of a seventh or eighth stage, these are his basic six:

  • Level A. Premoral

    • Stage 1—Punishment and obedience orientation

    • Stage 2—Naive instrumental hedonism

  • Level B. Morality of conventional role conformity

    • Stage 3—Good-boy morality of maintaining good relations, approval by others

    • Stage 4—Authority-maintaining morality

  • Level C. Morality of accepted moral principles

    • Stage 5—Morality of contract, of individual rights and democratically accepted law

    • Stage 6—Morality of individual principles of conscience

Kohlberg developed a test, which has been widely used, to determine the stage of any individual at any given time. The test requires responses to ethical dilemmas and is to be scored by consulting an elaborate manual.

One of the most influential critiques of the Kohlberg theory is to be found in Carol Gilligan’s In a Different Voice (1982). Gilligan argues that Kohlberg’s rule-oriented conception of morality has an orientation toward justice, which she associates with stereotypically male thinking, whereas women and girls are perhaps more likely to approach moral dilemmas with a “care” orientation. One important issue in moral theory that the Kohlberg-Gilligan debate raises is that of the role and importance of moral feelings in the moral life (see the entry on feminist ethics).

Another line of approach to moral development is to be found in the work of Martin Hoffman (1982). Hoffman describes the development of empathetic feelings and responses in four stages. Hoffman’s approach allows one to appreciate the possibility of genuine moral feelings, and so of genuine moral agency, in a very small child. By contrast, Kohlberg’s moral-dilemma tests will assign pre-schoolers and even early elementary-school children to a pre-moral level.

A philosophically astute and balanced assessment of the Kohlberg-Gilligan debate, with appropriate attention to the work of Martin Hoffman, can be found in Pritchard (1991). See also Friedman (1987), Lickona (1976), Kagan and Lamb (1987), and Pritchard (1996).

4. Children’s Rights

For a full discussion of children’s interests and children’s rights see the entry on the rights of children.

5. Childhood Agency and Autonomy

Clearly children are capable of goal-directed behavior while still relatively young, and are agents in this minimal sense. Respect for children’s agency is shown in legal and medical contexts, in that children who are capable of expressing their preferences are frequently consulted, even if their views are not regarded as decisive for determining outcomes.

The exercise of childhood agency will obviously be constrained by social and political factors, including various dependency relations, some of them imposed by family structures. Whether there are special ethical rules and considerations that pertain to the family in particular, and, if so, what these rules or considerations are, is the subject of an emerging field we can call ‘family ethics’ (Baylis and McLeod 2014, Blustein 1982, Brighouse and Swift 2014, Houlgate 1980, 1999).

The idea that, in child-custody cases, the preferences of a child should be given consideration, and not just the “best interest” of the child, is beginning to gain acceptance in the U.S., Canada and Europe. “Gregory K,” who at age 12 was able to speak rationally and persuasively to support his petition for new adoptive parents, made a good case for recognizing childhood agency in a family court. (See “Gregory Kingsley” in the Other Internet Resources.) Less dramatically, in divorce proceedings, older children are routinely consulted for their views about proposed arrangements for their custody.

Perhaps the most wrenching cases in which adults have come to let children play a significant role in deciding their own future are those that involve treatment decisions for children with terminal illnesses (Kopelman and Moskop, 1989). The pioneering work of Myra Bluebond-Langner shows how young children can come to terms with their own imminent death and even conspire, mercifully, to help their parents and caregivers avoid having to discuss this awful truth with them (Bluebond-Langner, 1980).

While family law and medical ethics are domains in which children capable of expressing preferences are increasingly encouraged to do so, there remains considerable controversy within philosophy as to the kind of authority that should be given to children’s preferences. There is widespread agreement that most children’s capacity to eventually become autonomous is morally important and that adults who interact with them have significant responsibility to ensure that this capacity is nurtured (Feinberg 1980). At the same time, it is typical for philosophers to be skeptical that children under the age of ten have any capacity for autonomy, whether because they are judged not to care stably about anything (Oshana 2005, Schapiro 1999), to lack information, experience and cognitive maturity (Levinson 1999, Ross 1998), or to be too poor at critical reflection (Levinson 1999).

Mullin (2007, 2014) argues that consideration of children’s capacity for autonomy should operate with a relatively minimal understanding of autonomy as self-governance in the service of what the person cares about (with the objects of care conceived broadly to include principles, relationships, activities and things). Children’s attachment to those they love (including their parents) can therefore be a source of autonomy. When a person, adult or child, acts autonomously, he or she finds the activity meaningful and embraces the goal of the action. This contrasts both with a lack of motivation and with feeling pressured by others to achieve outcomes desired by them. Autonomy in this sense requires capacities for impulse control, caring stably about some things, connecting one’s goals to one’s actions, and confidence that one can achieve at least some of one’s goals by directing one’s actions. It does not require extensive ability to engage in critical self-reflection, or substantive independence. The ability to act autonomously in a particular domain will depend, however, on whether one’s relationships with others are autonomy supporting. This is in keeping with feminist work on relational autonomy. See the entry on Feminist Perspectives on Autonomy.

Children’s autonomy is supported when adults give them relevant information and reasons for their requests, demonstrate interest in children’s feelings and perspectives, and offer children structured choices that reflect those thoughts and feelings. Support for children’s autonomy in particular domains of action is perfectly consistent with adults behaving paternalistically toward them at other times and in other domains, when children are ill-informed, extremely impulsive, do not appreciate the long-term consequences of their actions, cannot recognize what is in their interest, cannot direct their actions to accord with their interests, or are at risk of significant harm (Mullin 2014).

6. The Goods of Childhood

“Refrigerator art,” that is, the paintings and drawings of young children that parents display on the family’s refrigerator, is emblematic of adult ambivalence toward the productions of childhood. Typically, parents are pleased with, and proud of, the art their children produce. But equally typically, parents do not consider the artwork of their children to be good without qualification. Yet, as Jonathan Fineberg has pointed out (Fineberg, 1997, 2006), several of the most celebrated artists of the 20th century collected child art and were inspired by it. It may be that children are more likely as children to produce art, the aesthetic value of which a famous artist or an art historian can appreciate, than they will be able to later as adults.

According to what we have called the “Aristotelian conception”, childhood is an essentially prospective state. On such a view, what a child produces cannot be expected to be good in itself, but only good for helping the child to develop into a good adult. Perhaps some child art is a counterexample to this expectation. Of course, one could argue that adults who, as children, were encouraged to produce art, as well as make music and excel at games, are more likely to be flourishing adults than those who were not encouraged to give such “outlets” to their energy and creativity. But the example of child art should at least make one suspicious of Michael Slote’s claim that “just as dreams are discounted except as they affect (the waking portions of) our lives, what happens in childhood principally affects our view of total lives through the effects that childhood success or failure are supposed to have on mature individuals” (Slote, 1983, 14).

Recent philosophical work on the goods of childhood (Brennan 2014, Macleod 2010) stresses that childhood should not be evaluated solely insofar as it prepares the child to be a fully functioning adult. Instead, a good childhood is of intrinsic and not merely instrumental value. Different childhoods that equally prepare children to be capable adults may be better or worse, depending on how children fare qua children. Goods potentially specific to childhood (or, more likely, of greatest importance during childhood) include opportunities for joyful and unstructured play and social interactions, lack of significant responsibility, considerable free time, and innocence, particularly sexual innocence. Play, for instance, can be of considerable value not only as a means for children to acquire skills and capacities they will need as adults, but also for itself, during childhood.

7. Philosophical Thinking in Children

For a full discussion of this topic see the entry on Philosophy for Children.

8. Moral Status of Children

It is uncontroversial to judge that what Mary Anne Warren terms paradigmatic humans have moral status (Warren 1992). Paradigmatic humans are adults with relatively standard cognitive capacities for self-control, self-criticism, self-direction, and rational thought, and are capable of moral thought and action. However, the grounds for this status are controversial, and different grounds for moral status have direct implications for the moral status of children. Jan Narveson (1988), for instance, argues that children do not have moral status in their own right because only free rational beings, capable of entering into reciprocal relations with one another, have fundamental rights. While Narveson uses the language of rights in his discussion of moral status (people have direct moral duties only to rights holders on his account), moral status need not be discussed in the language of rights. Many other philosophers recognize children as having moral status because of their potential to become paradigmatic humans without committing to children having rights. For instance, Allen Wood writes: “it would show contempt for rational nature to be indifferent to its potentiality in children.” (Wood 1998, 198)

When children are judged to have moral status because of their potential to develop the capacities of paradigmatic adults (we might call these paradigmatic children), this leaves questions about the moral status of those children who are not expected to live to adulthood, and those children whose significant intellectual disabilities compromise their ability to acquire the capacities of paradigmatic adults. There are then three common approaches that grant moral status to non-paradigmatic children (and other non-paradigmatic humans). The first approach deems moral consideration to track species membership. On this approach all human children have moral status simply because they are human (Kittay 2005). This approach has been criticized as being inappropriately speciesist, especially by animal rights activists. The second approach gives moral status to children because of their capacity to fare well or badly, either on straightforwardly utilitarian grounds or because they have subjective experiences (Dombrowski 1997). It has been criticized by some for failing to distinguish between capacities all or almost all human children have that are not also possessed by other creatures who feel pleasure and pain. The third approach gives moral status to non-paradigmatic children because of the interests others with moral status take in them (Sapontzis 1987), or the relationships they have with them (Kittay 2005).

Sometimes the approaches may be combined. For instance, Warren writes that young children and other non-paradigmatic humans have moral status for two sorts of reasons: “their rights are based not only on the value which they themselves place upon their lives and well-being, but also on the value which other human beings place on them.” (1992, 197) In addition to these three most common approaches, Mullin (2011) develops a fourth: some non-paradigmatic children (and adults) have moral status not simply because others value them but because they are themselves capable of being active participants in morally valuable relationships with others. These relationships express care for others beyond their serving as means for one’s own satisfaction. Approaches to moral status that emphasize children’s capacity to care for others in morally valuable relationships also raise interesting questions about children’s moral responsibilities within those relationships (see Mullin 2010).

For more on this topic see the entry on the grounds of moral status.

9. Other Issues

The topics discussed above hardly exhaust the philosophy of childhood. Thus we have said nothing about, for example, the philosophical literature on personhood as it bears on questions about the morality of abortion, or bioethical discussions about when it is appropriate for parents to consent to children’s participation in medical research or refuse medical treatment of their children. There has been increasing attention in recent years to questions about the appropriate limits of parental authority over children, about the source and extent of parents’ and the state’s responsibilities for children, and about the moral permissibility of parents devoting substantial resources to advancing the life prospects of their children. These and many other topics concerning children may be familiar to philosophers as they get discussed in other contexts. Discussing them under the rubric ‘philosophy of childhood’, as well as in the other contexts, may help us see connections between them and other philosophical issues concerning children.


What is an Adult Human? / Biology Law Psychology Culture

Photo from Duke Health: a group of 10- to 13-year-olds. Biologically, they are adults. Legally, they are not. Culturally? Psychologically? Big Questions.

Biological adulthood (Wikipedia)

Historically and cross-culturally, adulthood has been determined primarily by the start of puberty (the appearance of secondary sex characteristics such as menstruation in women, ejaculation in men, and pubic hair in both sexes). In the past, a person usually moved from the status of child directly to the status of adult, often with this shift being marked by some type of coming-of-age test or ceremony.[1]

After the social construct of adolescence was created, adulthood split into two forms: biological adulthood and social adulthood. Thus, there are now two primary forms of adults: biological adults (people who have attained reproductive ability, are fertile, or who evidence secondary sex characteristics) and social adults (people who are recognized by their culture or law as being adults). Depending on the context, adult can indicate either definition.

Although few or no established dictionaries provide a definition for the two-word term biological adult, the first definition of adult in multiple dictionaries includes “the stage of the life cycle of an animal after reproductive capacity has been attained”.[2][3] Thus, the base definition of the word adult is the period beginning at physical sexual maturity, which occurs sometime after the onset of puberty. Although this is the primary definition of the base word “adult”, the term is also frequently used to refer to social adults. The two-word term biological adult stresses or clarifies that the original definition, based on physical maturity, is being used.

In humans, puberty on average begins around 10–11 years of age for girls and 11–12 years of age for boys, though this will vary from person to person. For girls, puberty begins around 10 or 11 years of age and ends around age 16. Boys enter puberty later than girls, usually around 12 years of age, and it lasts until around age 16 or 17 (or, in rare cases, 18 and a half).[4][5]

There seems to be disagreement on the attainment of adulthood: is it at the start or completion of puberty?

More from Duke Health: https://www.dukehealth.org/blog/when-puberty-too-early

When Is Puberty Too Early?

October 01, 2013

Early Puberty in Girls

For girls, puberty is generally considered to be too early if it begins at age seven or eight. African-American and Hispanic girls tend to start puberty slightly earlier than Caucasian girls. The average age of pubertal onset in girls is 10-and-a-half years, ranging from seven to 13. The average age of menarche is 12-and-a-half to 13. The whole process of puberty should take three to four years.

Rapidly progressing puberty — start to finish in less than two years — can be a concern as well because it can be due to an endocrine disorder.

Early Puberty in Boys

For boys, puberty is generally considered too early if it begins before the age of nine. In boys, the onset of puberty ranges from nine to 14 years, but on average starts at 11-and-a-half to 12 years old. The whole process of puberty should take three to four years. Rapidly progressing puberty can also be a concern in males.

Preventing Early Puberty

While genetic factors play a role in the early onset of puberty, parents can help reduce the environmental causes of early puberty. Preventive measures include:

  • Encourage your child to maintain a healthy weight.
  • Avoid exposure to exogenous hormones such as estrogen, testosterone, DHEA, and androstenedione, which may be found in creams/gels, hair treatments, medications, and nutritional supplements. (And who knows where else these powerful hormones are being used and entering environmental systems.)

Psychological Adulthood?

Here is where we encounter the perils of “socially constructed” opinion about human development: What a mess!

Psychological development

Written By: The Editors of Encyclopedia Britannica

Psychological development, the development of human beings’ cognitive, emotional, intellectual, and social capabilities and functioning over the course of the life span, from infancy through old age. It is the subject matter of the discipline known as developmental psychology. Child psychology was the traditional focus of research, but since the mid-20th century much has been learned about infancy and adulthood as well. A brief treatment of psychological development follows. For full treatment, see human behaviour.

Infancy is the period between birth and the acquisition of language one to two years later.

Childhood, the second major phase in human development, extends from one or two years of age until the onset of adolescence at age 12 or 13.

Adolescence begins physically with the onset of puberty at 12 or 13 and culminates at age 19 or 20 in adulthood.

Hmmm…. a discrepancy of 7-8 YEARS between the biological and psychological demarcations for the beginning of adulthood, that is, IF adulthood begins at the onset of puberty. IF it’s the completion of puberty, the discrepancy is more like 3-4 years.
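
Spelled out with the figures quoted above (puberty onset around age 11–12, completion around 16–17, and Britannica’s adulthood at 19–20), the arithmetic is roughly:

\[
20 - 12 = 8 \;\text{years (adulthood measured against onset of puberty)}, \qquad
20 - 16 = 4 \;\text{years (measured against completion)}.
\]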

But! We now have a serious problem: the socially constructed stage called adolescence interferes with, and contradicts, the biological transition from pre-reproductive child to reproductive adult, leaving no clear transition at all. The result is chaos in education, legal jurisdiction, sex-reproduction-parenting, health, nutrition, and behavioral expectations!

Adulthood is a period of optimum mental functioning when the individual’s intellectual, emotional, and social capabilities are at their peak to meet the demands of career, marriage, and children. Some psychologists delineate various periods and transitions in early to middle adulthood that involve crises or reassessments of one’s life and result in decisions regarding new commitments or goals. During the middle 30s people develop a sense of time limitation, and previous behaviour patterns or beliefs may be given up in favour of new ones.

Wow! Just how does a person between the ages of 10 and 20 negotiate this bizarre disconnect between a developmental paradigm “invented” by psychologists and the physical reality of the human body?

One might expect individual cultures to “help” with this vital transition… 

Cultural Adulthood?

How the American legal system defines adult status is a crucial cultural factor.  

Adult: A person who by virtue of attaining a certain age, generally eighteen, is regarded in the eyes of the law as being able to manage his or her own affairs.

Wow! Highly optimistic and unrealistic in American culture, which overwhelmingly advocates for the indefinite postponement of adulthood… 

Note that American education does little to nothing to prepare children, adolescents, and now “emerging adults” (a new category of underdeveloped Homo sapiens that is MEASURED BY the subjective “feeling” of being adult) for these sudden legal and financial facts of life.  This dithering over adult status is the “privilege” of the wealth classes; poor and minority children too often become “instant adults” – in a jail cell.  

The age specified by law, called the legal age of majority, indicates that a person acquires full legal capacity to be bound by various documents, such as contracts and deeds, that he or she makes with others and to commit other legal acts such as voting in elections and entering marriage. The age at which a person becomes an adult varies from state to state and often varies within a state, depending upon the nature of the action taken by the person. Thus, a person wishing to obtain a license to operate a motor vehicle may be considered an adult at age sixteen, but may not reach adulthood until age eighteen for purposes of marriage, or age twenty-one for purposes of purchasing intoxicating liquors.

Anyone who has not reached the age of adulthood is legally considered an infant. (!! Really?) West’s Encyclopedia of American Law, 2nd edition. Copyright 2008 The Gale Group, Inc.

A Cheery Look at Childhood in Western Cultures / PSYCHOHISTORY

Lloyd deMause (pronounced de-Moss) is an American social thinker known for his work in the field of psychohistory. (Wikipedia)

Born: September 19, 1931, Detroit, MI. Education: Columbia University.

FOUNDATIONS OF PSYCHOHISTORY, by LLOYD DEMAUSE

The history of childhood is a nightmare from which we have only recently begun to awaken. The further back in history one goes, the lower the level of child care, and the more likely children are to be killed, abandoned, beaten, terrorized, and sexually abused. It is our task here to see how much of this childhood history can be recaptured from the evidence that remains to us.

That this pattern has not previously been noticed by historians is because serious history has long been considered a record of public, not private, events. Historians have concentrated so much on the noisy sandbox of history, with its fantastic castles and magnificent battles, that they have generally ignored what is going on in the homes around the playground. And where historians usually look to the sandbox battles of yesterday for the causes of those of today, we instead ask how each generation of parents and children creates those issues which are later acted out in the arena of public life.

At first glance, this lack of interest in the lives of children seems odd. Historians have been traditionally committed to explaining continuity and change over time, and ever since Plato it has been known that childhood is a key to this understanding. The importance of parent-child relations for social change was hardly discovered by Freud; St. Augustine’s cry, “Give me other mothers and I will give you another world,” has been echoed by major thinkers for fifteen centuries without affecting historical writing. Since Freud, of course, our view of childhood has acquired a new dimension, and in the past half century the study of childhood has become routine for the psychologist, the sociologist, and the anthropologist. It is only beginning for the historian. Such determined avoidance requires an explanation.

Full PDF: http://psychohistory.com/books/foundations-of-psychohistory/chapter-1-the-evolution-of-childhood/

Physical Education and Sport / Ancient Times to Enlightenment

EUROPEAN JOURNAL OF EDUCATIONAL RESEARCH / Vol. 2, No. 4, 191-202 / ISSN 2165-8714 Copyright © 2013 EUJER

“Bikini Girls” exercising, Sicily, 4th C. AD

https://files.eric.ed.gov/fulltext/EJ1086323.pdf

Harmandar Demirel & Yıldıran / Dumlupinar University, Gazi University, Turkey

(I’ve broken the text into shorter paragraphs for easier reading and omitted some introductory material. The complete PDF is about 8 pages. I’ve highlighted a few main ideas and vocabulary.)

My general comment is that American public education is essentially less “sophisticated” than even that of Ancient Greece and Rome; a disgrace, and “Medieval”…

An Overview from the Ancient Age to the Renaissance

The Greek educational ideal which emerged during the 8th – 6th centuries B.C. aimed at developing general fitness via “gymnastics” of the body and “music” of the spirit; that is, the harmonious development of body and spirit, and, in this way, providing a beautiful body, mental development and spiritual and moral hygiene. These are expressed by the word Kalokagathia, meaning both beautiful and good, based on the words “Kalos” and “Agathos” (Aytaç, 1980; Alpman, 1972). Thus, the use of physical training and sport as the most suitable means to this end was first discussed in Ancient Greece (Yildiran, 2005). To achieve the ideal of kalokagathia, three conditions were required: nobility, correct behaviour and careful teaching (Yildiran, 2011). Physical beauty (kalos) did not refer just to external appearance; it also referred to mental health. Humans who had these qualifications were considered ideal humans (kalokagathos) (Bohus, 1986). The kalokagathia ideal, developed during the early classical age, refined and deepened the archaic-aristocratic high value of “arete” (Popplow, 1972).

The vital point of aristocratic culture was physical training; in a sense, it was sport. The children were prepared for various sport competitions under the supervision of a paidotribes (a physical education teacher) and learned horse riding, discus and javelin throwing, long jumping, wrestling and boxing. The aim of the sport was to develop and strengthen the body, and hence, the character (Duruskken, 2001). In Ancient Greece, boys attended wrestling schools because it was believed that playing sports beautified the human spirit as well as the body (Balcı, 2008). The palaestra was a special building within ancient gymnasiums where wrestling and physical training were practiced (Saltuk, 1990). The education practiced in this era covered gymnastic training and music education, and its aim was to develop a heroic mentality, but only for royalty. With this goal in mind, education aimed to discipline the body, raising an agile warrior by developing a cheerful and brave spirit (Aytac, 1980).

The feasts which were held to worship the gods in Ancient Greece began for the purpose of ending civil wars. All sport-centred activities were of religious character. As the ancient Olympic Games were of religious origin, they were conducted in Olympia (home of the gods). Over time, running distances increased, new and different games were added to the schedule, soldiers began to use armour in warfare, art and philosophy were understood better and great interest was shown in the Olympic Games; therefore, the program was enriched and changed, and the competitions were extended from one to five days (Er et al., 2005). However, the active or passive attendance of married women was banned at the ancient Olympic Games for religious reasons (Memis and Yıldıran, 2011). The Olympic Games had an important function as one of the elements aimed at uniting the ancient Greeks culturally, but this ended when the games were banned by Emperor Theodosius I in 393–4 A.D. (Balci, 2008).

Sparta, located on the present-day Morea (Peloponnese) peninsula, was an agricultural state formed by the immigration of Dorians from the 8th century B.C. Spartan education was extremely paternalistic: it sought the complete submergence of the individual in the citizen and provided him with the attributes of courage, complete obedience and physical perfection (Cordasco, 1976). In Sparta, where the foundations of the social order were iron discipline, military proficiency, strictness and absolute obedience, even the peaceful stages of life had the character of a “preparation for war” school (Aytac, 1980). The essential thing that made Hellenic culture important was its gaining new dimensions, with distinctive creative power, from the cultural factors it had adopted from the ancient east, and its revealing of the concept of the “perfect human” (Iplikcioglu, 1997).

Children stayed with their family until they were seven years old; from this age, they were assigned to the state-operated training institutes where they were trained strictly in war and state tasks. Strengthening the body and preparing for war took a foremost place in accordance with the military character of the state. Girls were also given strict military training (Aytac, 1980); the same training given to the boys was also given to the girls, the most prominent example being girls and boys doing gymnastics together (Russell, 1969). Although physical training and music education were included, reading, writing and arithmetic were barely included in Spartan education (Binbasioglu, 1982).

Unlike Sparta, Athens had advanced trade and industry; the classical period of Athenian democracy included the Persian Wars and the Peloponnesian War, Cleisthenes’ democratic reforms, and the ending of sea domination in domestic policy. As this democracy covered only “the independent layer” of society, it took the form of an “aristocratic democracy” (Aytaç, 1980). Learning was given great importance in the Athenian democracy. The sons of independent citizens received education in grammar at home or in private school. Music education and gymnastic training were carried out in “Gymnasiums” and “Palaestrae”, which were built and controlled by the state; running areas were called “Dromos”, and chariot race areas were termed “Hippodromes” (Aytac, 1980). Children older than 12 years started receiving sports training and music education in Athens, where military training was barely included.

Athenians insisted on the aesthetic and emotional aspects of education; therefore, the best art works of the ancient world were created there (Binbasioglu, 1982). When, in the 5th century B.C., Greek education was unable to respond appropriately to new developments, the Sophists emphasised the development of traditional education in terms of language and rhetoric in an attempt to overcome the crisis. Sophists provided education in morals, law, and the natural sciences in addition to the trivium (grammar, rhetoric, dialectic) (Aytac, 1980).

Greeks considered physical training prudent and important because it developed the body, and they organised games conducive to the gathering of large crowds; in these games, all regions of Greece were represented (Balci, 2008). Rome constitutes the second most important civilisation of the Ancient age. In Rome, the family played the strongest role in education, and the state did not have much say or importance. While exercise constituted the means of education in Ancient Rome, the purpose of this education was “to raise a good citizen”, such that each person had a skilled, righteous and steady character. Physical training was provided in addition to courses such as mythology, history, geography, jurisprudence, arithmetic, geometry and philosophy; this training was provided in Grammar schools, where basic teaching covered the “seven liberal arts” (Aytac, 1980).

Due to the Scholastic structure of the Middle Ages, values respecting the human were forgotten. However, the “Renaissance” movement, which started in Europe and whose ideas inform the modern world, developed many theories related to education and physical training and attempted to apply this in various ways; the development of these ideas was continued in “The Age of Enlightenment”.

The Renaissance

General Aspects of the Renaissance

The word renaissance means “rebirth”; in this period, artists and philosophers tried to discover and learn the standards of Ancient Rome and Athens (Perry et al., 1989). In the main, the Renaissance represented a protest of individualism against authority in the intellectual and social aspects of life (Singer, 1960). For lovers of beauty, the Renaissance meant the development of a new art and imagination. From the perspective of a scientist, the Renaissance represented innovation in the ancient sciences, and from the perspective of a jurist, it was a light shining over the shambles of old traditions.

Human beings found their individuality again during this era, in which they tried to understand the basics of nature and developed a sense of justice and logic. However, the real meaning of “renaissance” was to be decent and kind to nature (Michelet, 1996). The Renaissance took shape in Italy from the 1350s onward as a modern idea contradicting the Middle Ages. The creation of a movement for returning to the ancient age, with the formidable memories of Rome, naturally seemed plausible (Mcneill, 1985). The new ideas that flourished in the world of Middle Age art and developed via various factors did not arise by accident; incidents and thoughts that developed in a social context supported them strongly (Turani, 2003). Having reached its climax approximately in the 1500s, the Italian Renaissance constituted the peak of the Renaissance; Leonardo da Vinci observed the outside world, people and objects captiously via his art, and Niccolo Machiavelli drastically analysed the nature and use of politics through his personal experiences and a survey of classical writers (Mcneill, 1985).

The Concept of Education and Approaches to Physical Training during the Renaissance

The humanist education model, concordant with the ideals of the Renaissance, was a many-sided, creative idea. Its goal was to create an all-round developed human being, the “homo universale”. At the same time, such an educational ideal necessarily acquired an aristocratic character, and this education was no longer provided to students at school (Aytac, 1980).

In the 14th century, the “humanist ideal of life” was proclaimed. The humanism movement gradually developed and spread; in this phase, however, formation or practice based on humanism was not yet in question. In the history of humanity, the humanism period has been acknowledged as a ‘transitional period’; modern civilisation and education are based on it. Philosophers such as Erasmus, Rabelais, Montaigne and Luther flourished during this period. Universities began to multiply, and latitudinarianism was created. Scholastic thought was shaken from its foundations at the beginning of this period via the influence of Roger Bacon, the scientist who lived during the 13th Century.

Original forms of the works constituting the culture of Ancient Athens and Rome were found, read, and recreated concordantly; moreover, the ideas of latitudinarian ancient educators such as Quintilianus were put into practice. In teaching methods, formulae enabling pupils to improve their skills and abilities were adopted. Students started to learn outdoors, in touch with nature. Strict disciplinary methods gave way to rather tolerant methods. The importance and value of professional education were acknowledged (Binbasioglu, 1982). Positive sciences, such as history, geography and natural history, were not given a place in the classroom for a long time, but Latin preserved its place until recent times (Aytac, 1980).

With Desiderius Erasmus, who lived during the height of European humanism, humanism adopted its first scientific principle: “Return to the sources!”; for this reason, the works of ancient writers were published. Erasmus’ educational ideal consists of a humanist-scientific formation; however, it does not exclude the moral-religious life. Having worked to raise humanity to higher levels, Erasmus summarises the conditions for this quest as follows: good teachers, a useful curriculum, good pedagogical methods, and attention to personal differences among pupils. With these ideas, Erasmus represents the height of German humanist pedagogy (Aytaç, 1980).

Notice the antagonistic setup between faith and science that we still experience today in the U.S.?

On the other hand, Martin Luther considered universities institutions where “all kinds of iniquity took place, there was little faith in sacred values, and the profane master Aristotle was taught imprudently”, and he demanded that schools and especially universities be inspected. Luther thought that schools and universities should teach religiously inclined youth in a manner heavily dependent on the Christian religion (Aytac, 1980). Alongside these ideas, Luther made statements about the benefits to health of chivalric games and training, and of wrestling and jumping, which, in his opinion, could make the body more fit (Alpman, 1972).

The French philosopher Michel de Montaigne, known for his “Essays”, was a lover of literature who avoided any kind of extreme and was determined, careful and balanced. In his opinion, the aim of education was to transfer “ethical and scientific knowledge via experiments” to pupils. De Montaigne believed that a person’s skills and abilities in education, which can be called natural powers, are more important than, or even superior to, logic and society (Binbasioglu, 1982). The Humanist movement played a very significant role in educational issues. This movement flourished in order to resurrect the art and culture of ancient Athens and Rome in their formidable aspects, thereby enabling body and soul to improve concordantly through the education of humans (Alpman, 1972). Humanism was not a philosophical system but a cultural and educational program (Kristeller, 1961).

Note that in the United States, current public education is obsessed with “social engineering” based on two religious ideologies: (1) liberal/puritanical (social and psychological theory-based; conformity to prescriptive “absolutes” of human behavior) and (2) evangelical (anti-science; faith-based denial of reality; socio-emotional fervor). These competing religious systems have replaced a brief period of “humanist” academic emphasis; the arts and physical education have been jettisoned, supposedly due to “budget” limitations… but this elimination of “expressions of individual human value” is a choice made by parents and educators to “ban” secular ideals from education.

The necessity of physical training along with the education of soul and mind was emphasised; for this reason, physical practices and games were suggested for young people. It is possible to see how the humanists formed the foundations of the Renaissance from the 14th century to the 18th, working from Italy to Spain, Germany, France and England. Almost all of the humanists stated the significance of physical training in their written works on education (Alpman, 1972).

One of the humanists, Vittorino da Feltre, may have viewed raising a group of teenagers as the most pleasant goal of his life; he fed and educated poor but talented children at his home (Burckhardt, 1974). Da Feltre practiced a classical education in his school, called the “Joyful Residence”. In accord with Ancient Greek education concepts, he claimed benefits for the education of body and soul through daily exercises such as swimming, riding and swordplay, and for generating love towards nature via hiking; he also emphasised the importance of games and tournaments (Alpman, 1972; Aytac, 1980). Enea Silvio de Piccolomini is also worthy of attention; alongside his religious character, he thought that physical training should be emphasised and that beauty and power should be improved in this way (Alpman, 1972). De Piccolomini drew attention to the importance of education as a basis for body and soul while stressing the importance of avoiding things that cause laxity, games and resting (Aytac, 1980). Juan Luis Vives, a systematic philosopher who had multiple influences, in one of his most significant works, “De Tradendis Disciplinis”, published in 1531, advised such practices as competitive ball playing, hiking, jogging, wrestling and braggartism, beginning from the age of 15 (Alpman, 1972).

The German humanist Joachim Camerarius, who managed the academic gymnasium in the city of Nürnberg, is also very important in relation to this subject. Having practiced systematic physical training at the school in which he worked, Camerarius wrote his work “Dialogus de Gymnasiis”, which refers to the pedagogical and ethical values of Greek gymnastics. In this work, he stressed such practices as climbing, jogging, wrestling, swordplay, jumping, stone throwing and games, practiced by specially selected children according to their ages and physical abilities, all under the supervision of experienced teachers (Alpman, 1972). The Italian Hieronymus Mercurialis’ De Arte Gymnastica, first published in Latin in Venice in 1569, contained very little on the Olympic Games; indeed, the author was hostile to the idea of competitive athletics. The Frenchman Petrus Faber’s Agonisticon (1592), in its 360 pages of Latin text, brought together in one place many ancient texts concerning the Olympics but was disorganised, repetitive and often unclear (Lee, 2003). The first part of De Arte Gymnastica included the definition of Ancient Greek gymnastics and an explanation of the actual terminology, whereas the second part contained precautions about the potential harms of exercises practiced in the absence of a doctor. Moreover, Mercurialis separated gymnastics practised for health reasons from military gymnastics (Alpman, 1972).

Note the military’s requirement that its personnel be “physically fit”, compared to the general U.S. population (including children), which is chronically obese, sedentary and unhealthy. “Being physically fit” (or at least the appearance of it) is now a status symbol of the wealthy classes and social celebrities, requiring personal trainers, expensive spa and gym facilities, and high-tech gadgets and equipment.

The Transition to the Age of Enlightenment: Reformation, Counter-reformation and the Age of Method

The Age of Reformation: The most significant feature of European cultural life during this age was the dominant role played by religious issues, unlike the Renaissance in Italy (McNeill, 1985). This age symbolises the uprising of less civilised societies against logic-dominated Italy (Russell, 2002). Bearing a different character from the Renaissance and Humanism, the Reformation did not stress improvements in modern art or science, but rather improvements in politics and the Church; consonant with this, its educational ideal emphasised being religious and dependent on the Church. Nevertheless, both Humanism and the Reformation struggled against medieval scholasticism, and both appreciated the value of human beings (Aytac, 1980).

The Counter-reformation Movement: In this period, which includes the movement of the Catholic Church to retake privileges it had lost due to the Reformation, the “Jesuit Order” was founded to preach, hear confession and gather “perverted minds” once again under the roof of the Catholic Church via teaching activities (Aytac, 1980).

The Age of Method: Also known as the Age of Practice, this period saw efforts to save people from prejudice, and principles for religion, ethics, law and state were sought to provide systematic knowledge within a logic-based construction. Aesthetic educational approaches, which had been ignored by religion and the Church because of the attitudes prevailing during the Reformation and Counter-reformation, were given fresh emphasis. Bacon, Locke, Ratke, Descartes and Comenius (Komensky) are among the famous philosophers who lived during this period (Aytac, 1980).

The Age of Enlightenment: General Features and Educational Concepts of the Enlightenment

The Enlightenment made itself felt approximately between 1680 and 1770, or even 1780. Science developed into separate disciplines, literature became an independent subject, and it was demanded that history also become independent (Chaunu, 2000). During this period, educators transformed the concept of education from preparing students for the afterlife into preparing them for the world around them, so that they could be free and enlightened.

Moreover, educators of the period were usually optimistic and stressed the importance of study and work. At school, students were educated in such a way as to ingrain a love of nature and of human beings. Based on these ideas, learning was undertaken by experiment and experience (Binbasioglu, 1982). William Shakespeare mentioned the concept of “fair play” – the ideas of maintaining equality of opportunity and displaying a chivalrous (“cavalier”) style of thinking – at the end of the 16th century; by the 18th century, these ideas had been incorporated into sport (Gillmeister, 1988). Systematic changes in the foundations of the principles of fair play that occurred in the 19th century were directly related to the socio-cultural structure of Victorian England (Yildiran, 1992).

The Concept of Physical Training during the Enlightenment and Its Pioneers

Ideas and ideals produced prior to this period were finally put into practice during it. Respected educators of the period stressed the significance of physical training, which had appealed only to the aristocracy during the Renaissance; emulating the education system of the Ancient Age, educators started to address everyone from all classes, and their views spread accordingly in this period.

John Locke: The Enlightenment reached maturity during the mid- to late eighteenth century. John Locke, a lead player in this new intellectual movement (Faiella, 2006), was likely the most popular political philosopher of the first part of the 18th century, and he stressed the necessity of education (Perry et al., 1989). Locke’s “An Essay Concerning Human Understanding” is acknowledged as his most prominent and popular work (Russell, 2002). His “Some Thoughts Concerning Education” stressed the importance of children’s health and advised that children learn to swim and maintain their fitness. Moreover, Locke noted that such activities as dance, swordplay and riding were essential for a gentleman (Alpman, 1972) and that education should be infused with game play (Binbasioglu, 1982).

Jean Jacques Rousseau: In his work Emile, the philosopher from Geneva discussed educational matters in regard to the principles of nature (Russell, 2002). In this work, written in 1762, Rousseau argued that individuals should learn from nature, from human beings or from objects (Perry et al., 1989), and expressed his notions concerning the education of children and teenagers (Binbasioglu, 1982). Rousseau held that children should be allowed to develop and learn according to their natural inclinations, but in Emile this goal was achieved by a tutor who cunningly manipulated his pupil’s responses (Damrosch, 2007). The aforesaid education was termed “natural education” of the public, or “education which will create natural human beings” (Aytac, 1980). Emile exercised early in the morning because he needed strength, and because a strong body was the basic requirement for a healthy soul. Running with bare feet, high jumping, and climbing walls and trees, Emile mastered such skills as jogging, swimming, stone throwing, archery and ball games. Rousseau demanded that every school have a gymnasium or an area for training (Alpman, 1972).

Continued next post. Time to watch the Olympics!

Messages from the Unconscious / Yes, it happens

“There is no way that as a human being, you won’t disturb the Earth.”

I have related in previous posts how my “mind works” (and everyone’s does, actually), but you have to listen for the products of the unconscious in order to make them conscious. I enjoy sleep; it’s an active state of rest, refreshment and dreams. Powerful thinking goes on; a type of thinking much older than conscious verbal thought. A direct link to collective memory – evolutionary memory. A vast reservoir that is encoded along with all the myriad instructions that build a human body within a woman’s body – and that after birth must be nurtured in order to grow the infant into an adult form. We call the code DNA, but then ignore that the code is useless unless it finds healthy expression as a living creature, which is not an automatic, guaranteed outcome.

Traditional, so-called “primitive” cultures keep the unconscious conduit open: sometimes through initiation rituals and the physical breakdown of the conscious / unconscious barrier, or by use of psychoactive concoctions or physical stress; through dream-imagery interpretation and the activities of shamans, who act as both guides and “librarians” – individuals who, thanks to their personality and brain type, can search the collective memory banks to “correct” whatever ails you or the community. The source of “trouble” is held to be a deviation from paths and patterns worked out by natural processes – often due to intentional human interference.

If I’m lucky, a phrase or idea may linger from the night’s brain activity: it may become a stimulus for word-based thinking, as if a basin of water had been left to fill overnight, so that on waking a particular phrase allows the stored-up potential of unconscious activity to “do work” in the waking world. Geologic processes and events sometimes supply the images for this dynamic relationship between what modern social people believe to be a “good” realm of conscious social word-thought and the “evil” realm of unconscious “trash and sewerage” – a tragic religious-psychiatric condemnation that has been imposed on a healthy system of human sensory experience, visual processing and creativity directed toward a goal of survival and reproduction of our specific “version” of animal life.

Unconscious processing is a powerful legacy of animal evolution that we have relegated to a sewer system, a septic tank, a dark region of monsters, dreadful impulses and dangers.

Myths from many cultures include Hell, the underworld, limbo or an afterlife in their scheme of things; some describe “that place” as a source of knowledge that is perilous to enter, but worth it for what can be found there. The unconscious experience is “outside time” and is therefore seen as a place of reliable prophecy; an attractive lure to those modern humans who desire to manipulate, dominate and control man and nature – hence the relentless and blinding quest for “magic” as the means to “cheat” the Laws of Nature. But it is the unconscious content of the human animal that composes the owner’s manual for “How to Operate and Maintain a Bipedal Ape”.

We can see that during the long course of the “evolution” of bipedal apes, what we call “unconscious processes” – mainly visual thinking, sensory thinking, acquisition of energy and interaction with the environment, and the task of growing and maintaining an animal body – were simply taken care of by the brain, and still are. Our pejorative use of the words “instinct” and “instinctual” for knowledge and functions regarded as inferior, something “we” have left behind, is a nonsensical conclusion; an illusion produced by the supposedly “superior” (and demonstrably less intelligent) “conscious verbal function” that is embraced, cultivated and worshipped by modern humans as a “God”.

Why would I state that the “unconscious” animal brain is more intelligent than the modern verbal function as a guidance system for human survival?

As an Asperger who relies on the unconscious as the “go-to” source for patterns, systems, connections, networks and explanations for “how the universe works”, I find it obvious that nature itself provides the “master templates” for creating and implementing technological invention and innovation. Homo sapiens has “discovered” these templates (Laws of Physics) by means of mathematics, and the nature of these “languages of physical reality” remains a bit mysterious.

The problem arises with the assumption that the manifestation of technical ideas and products, as solutions to the painful drudgery of manual labor, confers intelligence of a truly different type: Wisdom – the ability to “forecast” consequences that potentially result from one’s actions, and the ability to modify present action accordingly. This is an almost impossible task for the human brain; it’s why we invent or seek out Big Parental Figures, employ statistical magic and other contrived nonsense, and “divine the future” in archaic religious texts – all simultaneously, and without deference to common sense; we supply our own superstitious rules and clumsy structures to compensate for our utter lack of critical foresight and judgement.

Several notions help clarify this predicament.

1. “Nature” has done the work of “foresight” for us: we have access to knowledge stored in “instinct / unconscious content”, and to the conscious apprehension of “how the environment works” through trial-and-error manipulation of real objects and materials – and, more recently, by means of “abstract codes” and computing power, by which we believe we can decipher “the magic universe” of human childhood.

That is, foresight is not “located” in seeing the “future” (which doesn’t exist in concrete form) but in understanding the “eternal present”. These patterns are not mystical, magical or supernatural.

2. The deceptive mirage of “word thinking” goes unrecognized. The lure of being freed from the Laws of Nature is great! Word thinking is not “tied to” actual reality – its usefulness and value lie in making propositions that owe no allegiance to the limits and boundaries of the “real world”. Word language CAN lead to rapid communication of information and dissemination of useful concepts, but! There is no guarantee that this “information” is accurate – most ideas are created to provide the motivation and justification for expending time and energy on inflicting injury and suffering on other humans, and on the control / exploitation of resources, plants, animals and other life forms. This activity will never produce A Happy Ending.

In fact, word thinking leads to the illusion of the reality and primacy of a supernatural domain, in which magic is the operating system. Predatory humans give themselves permission to dominate the environment via verbal constructs, whose origin is assigned to, and justified by, this imaginary supernatural realm. Social dominance “for personal gain and pleasure” does not correspond to the “dominant role” in nature, which comes with great risk and responsibility and heavy consequences for the dominant individual. In humans, the goal in attaining dominance is a “free ride” on the backs of inferior beings.

3. Oh boy! Screw nature: I’m in control! Bring on the spells, rituals, magic symbols, secret handshakes, rattles and drums; the abject obedience of “lesser beings” to my dictates. This is where social humans are today: technically powerful, abysmally ignorant of the consequences of our actions. We have cut ourselves off from access to the user’s manual that is included free with every brain.

4. Instead, we have created a delusional and self-destructive hatred and fear of a vital evolutionary legacy. Unconscious thinking has been singled out and slandered by certain predatory humans as the “cause” of pathologic behavior – mental illness, violence, depravity, abuse, “disobedience to social control” and to the “supernatural regime” of human social reality – when in fact much of human “bad behavior” can be traced directly to the steeply hierarchical structures that dominate modern humans. From the top down (from tyrants, Pharaohs and other psycho-sociopaths to the ranks of those who are their “prey”), it is the distortion of manmade supernatural “order” into the original and absolute truth of human existence that prevents the healthy growth and sanity of actual human beings. Much behavior that is destructive, abusive, cruel and irrational on the part of Homo sapiens is inevitable, given the abnormal, destructive and “killer” stresses built into modern social environments.

Thoughts on Ancient Males / Life in the flesh

In the ancient world a common greeting among travelers was, “Which gods do you worship?” Deities were compared, traded, and adopted in recognition that strangers had something of value to offer. Along with the accretion of ancestor gods into extensive pantheons, an exchange of earthly ideas and useful articles took place. Pantheons were insurance providers who covered women, children, tradesmen, sailors and warriors – no matter how dangerous or risky their occupations; no matter how lowly. Multiple gods meant that everyone had a sympathetic listener, one that might increase a person’s chances for a favorable outcome to life’s ventures, large and small.


A curious female: The goddess Athena is incomprehensible to modern humans; and yet for the ancient Greeks, she was the cornerstone of civilization. Here she models the Trojan horse for the “clever” takedown of Troy.

In The Iliad

…the gods are manifestations of physical states: the rush of adrenalin, sexual arousal, and rage. For the Homeric male, these are the gods that must be obeyed. There is no power by which a man can override the impulse-to-action of these god-forces. The gifts of the notorious killer Achilles originate in the divine sphere, but he is human like his comrades: consumed by self-pity and emotionally erratic.

In Ancient Greek culture, consequences accompanied individual gifts. Achilles must choose an average life (adulthood) and obscurity, or death at Troy and an immortal name. Achilles sulks like a boy, but we know that he will submit to his fate, because fate is the body, and no matter how extraordinary that body is, the body must die. Immortality for Homeric Greeks did not mean supernatural avoidance of death. To live forever meant that one’s name and deeds were preserved by the attention and skill of the poet. In Ancient Greek culture it was the artist who had the power to confer immortality.

There was no apology for violence in Homeric times. The work of men was grim adventure. Raids on neighbors and distant places – for slave women, for horses and gold, for anything of value – were a man’s occupation. The Iliad is packed with unrelenting gore, and yet we continue to this day to be mesmerized by men who hack each other to death. Mundane questions arise: were these Bronze Age individuals afflicted with post-traumatic stress disorder? How could women and children, as well as warriors, not be traumatized by a life of episodic brutality? If they were severely damaged mentally and emotionally, how did they create a legacy of poetry, art, science and philosophy? Did these human beings inhabit a mind space that deflected trauma as if it were a rain shower? Was their literal perception of reality a type of protection?


Women will forever be drawn to the essential physicality of Homeric man. He is the original sexual male; the man whose qualities can be witnessed in the flesh. His body was a true product of nature and habit. Disfiguring scars proved his value in battle. Robust genes may have been his only participation in fatherhood.

Time and culture have produced another type of man, a supernatural creature with no marked talent, one who can offer general, but not specific, loyalty. Domestic man, propertied man, unbearably dull man, emotionally-retarded man. In his company a woman shrivels to her aptitude for patience and endurance, for heating dinner in the microwave and folding laundry. Her fate is a life of starvation.


Noble Penelope reduced to a neurotypical nag.

A Winter of Life Message / Who is Eckhart Tolle?

Who is Eckhart Tolle? Eckhart Tolle is a German-born resident of Canada, best known as the author of The Power of Now and A New Earth: Awakening to Your Life’s Purpose. In 2008, a New York Times writer called Tolle “the most popular spiritual author in the United States”. (Wikipedia)

I don’t know of this person; he sounds a bit “New Age-y”. Lots of pithy quotes all over the internet. He’s just about my age, so that may explain why this statement “resonates” at this point in my life, when the body we count on is well on its way to breaking down and lurching toward the inevitable. I think the quote is wasted on young people. An act of surrender and bravery is necessary to embrace it, an act that takes a lifetime to acknowledge.

He could have said this one thing and nothing else. It really sums up what life is about. The stupid defiance of “what is” – a constant uphill trudge, battle, struggle to “become” someone – a viable, admirable sprig of life-force that makes its mark, whatever that is. In nature, all this seems automatic: mathematical, chemical, electrical life becoming, evolving – terrible in its ruthless paring down of species into improbably successful and beautiful forms – temporary, all of them. And then there is “us”.

Hell-bent on defying nature: swimming upstream; spewing toxins, garbage and waste from our pretty, technically savvy vehicles. Congratulating ourselves on having peanut butter in jars, mechanical eyelash curlers, fake fur garments, a gluttonous desire for pizza, remote controls for refrigerators and garage doors, and the ability to spy on our children, our dogs, cats, parakeets and snakes; on our front-porch deliveries, on road conditions in Zanzibar or the price of sandals in Morocco. And we’re promised / warned that there’s much more of this to come… It’s lovely and cute in a way… giving the finger to nature.

So, resistance is futile, says Mr. Tolle. But without forces to resist, would humans be human? No. But in old age it’s okay to recognize futility; to embrace the lessening need to resist anything.

This is absolutely true if you live in Wyoming…


Swearing / A Natural Painkiller and Emotion Regulator

Asperger types may have difficulty understanding “cursing or swearing” as a social phenomenon. My “take” is that swearing originates as a physiological function; an expressive “sound” response to pain, frustration or failure. Its SOCIAL use is obviously grounded in MAGIC: the belief that words have the power to do HARM – literally, to bring a strongly felt emotional aggression to fruition. This can be seen in the association of swearing with religion; religion is ritualized magic, offering both positive and negative power to initiates through the priesthood or elect. Curse words are not “taboo” because of social rejection, but because these words and spells are believed to have active and dangerous power, which is reserved for the “magician-priest” class alone. They are a key step in the formation of a social hierarchy. The designation of which persons may or may not use “swear words” demonstrates the segregation of power in the social hierarchy.

Females in Judeo-Christian culture have always been suspected of having magical power over men, to an extreme – the manifestation being paranoia in males. Therefore, “swearing and cursing” have traditionally been taboo for “ladies”. This denies females the “soothing power of a good expletive” and serves as an excuse for all-male groups, professions and other power organizations to exclude women as employees and especially as bosses (sexual predation being the number one tactic). This exclusion is both fear-based (women have magic power over men – sex) and a social constraint that functions to keep women low on the social pyramid – dependent, childlike and economically disadvantaged.

Swear by it: why bad language is good for you

It bonds workers, sheds light on the brain and pacifies us.

theguardian.com/lifeandstyle/2017/nov/12

Emma Byrne on the uses and paradoxes of swearing:

When I was about nine years old, I was smacked for calling my little brother a “twat”.

I had no idea what a twat was – I thought it was just a silly way of saying “twit” – but that smack taught me that some words were more powerful than others and that I had to be careful how I used them.

Except that experience didn’t exactly cure me of swearing. In fact, it probably went some way towards piquing my fascination with it. Since then I’ve had a certain pride in my knack for colourful and well-timed profanity: being a woman in a male-dominated field, I rely on it to camouflage myself as one of the guys.

But what is swearing and why is it special? Is it the way that it sounds? Or the way that it feels when we say it? Thanks to a range of scientists, from Victorian surgeons to modern neuroscientists, we know a lot more about swearing than we used to.

For example, I’m definitely not the only person who uses swearing as a way of fitting in at work. On the contrary, research shows that swearing can help build teams in the workplace. From the factory floor to the operating theatre, scientists have shown that teams who share a vulgar lexicon tend to work more effectively together, feel closer and be more productive than those who don’t.

Swearing has also helped to develop the field of neuroscience because of its function as a barometer of our emotions. It has been used as a research tool for more than 150 years, helping us to understand the structure of the human brain, such as the role of the amygdala in the regulation of emotions.

Swearing has taught us a great deal about our minds, too. We know that people who learn a second language often find it less stressful to swear in their adopted tongue, which gives us an idea of the childhood developmental stages at which we learn emotions and taboos. Swearing also makes the heart beat faster and primes us to think aggressive thoughts while, paradoxically, making us less likely to be physically violent.

And swearing is a surprisingly flexible part of our linguistic repertoire. It reinvents itself from generation to generation as taboos shift. Profanity has even become part of the way we express positive feelings – we know that football fans use “fuck” just as frequently when they’re happy as when they are angry or frustrated.

That last finding is one of my own. With colleagues at City University, London, I’ve studied thousands of football fans and their bad language during big games. It’s no great surprise that football fans swear, but it isn’t anywhere near as aggressive as you might think – fans on Twitter almost never swear about their opponents and reserve their outbursts for players on their own team.

In researching and writing about swearing I’m not attempting to justify rudeness and aggression. Not at all. I certainly wouldn’t want profanities to become commonplace: swearing needs to maintain its emotional impact to be effective. We only need to look at the way it has changed over the past hundred years to see that, as some swear words become mild and ineffectual through overuse or shifting cultural values, we reach for other taboos to fill the gap.

That doesn’t mean swearing is always used as a vehicle for aggression or insult. Study after study has shown that swearing is as likely to be used in frustration with oneself, or in solidarity, or to amuse someone else. Either way, it is a complex social signal that is laden with emotional and cultural significance.

_____________________________________________________

A review of:

Swearing: A Cross-Cultural Linguistic Study.

Magnus Ljung (2011). Houndmills, Basingstoke: Palgrave Macmillan. 240 pp. ISBN: 9780230576315 (hardback)

Reviewed by Nooshin Shakiba, Birkbeck College, University of London, England

SOLS Vol. 8.1 (2014), 183–187. © 2014, Equinox Publishing

This book studies the forms, uses, and actual instances of swearing in English and twenty-four other languages of the Germanic, Romance, Slavic, and Finno-Ugric language families, among others. The study mainly draws upon the results of the application of a questionnaire used to interview native speakers. From a sociolinguistic perspective, swearing is seen as a type of linguistic behaviour that society regards as disrespectful, vulgar, and even offensive. It is a sociolinguistic phenomenon worthy of investigation because of its social regulatory function.

The volume under review begins with the definition and classification of swearing. To the benefit of those interested in diachronic studies on the topic, the history of swearing is covered subsequently. The two following chapters focus on forms of swearing that can be used as independent utterances. The remaining chapters deal with swear words that, in spite of their independent character, are used as parts of larger units. In addition, this book highlights the (socio)linguistic characteristics of swearing, featuring various examples from past and contemporary researchers. The author analyses the data from his own research throughout the whole book but also uses the 100-million-word British National Corpus (BNC).

In the first chapter, ‘Defining Swearing’, the author identifies four criteria common to all instances of swearing. First, swearing is the use of utterances that contain taboo words. The use of taboo words in swearing adds emphasis to the message the speaker wishes to convey; at the same time, swearing frequently violates cultural rules. Second, while the literal meanings of these taboo words are indeed used in swearing, they do not carry much weight. Third, due to lexical, phrasal, and syntactic constraints, swearing is considered a type of formulaic language. Finally, swearing constitutes an instance of reflective language use that reveals the speaker’s attitudes and feelings. In addition to these criteria, the author notes in this chapter that some types of swearing have entered societies and languages where they had never been used before, as a result of an increase in immigration. The author explains how a taboo word’s degree of offensiveness is not related to the perceived strength of the taboo ‒ which eventually changes over time. Even materials prohibited during daytime broadcasting tend to be admitted beyond the restricted hours.

In addition, taboo terms cannot be replaced with their literal synonyms in the context of swearing in spite of the fact that they display interchangeability with other words in that specific context. For instance, we cannot say ‘Shag you!’ instead of ‘Fuck you!’. However, ‘Screw you!’ can be used to carry the same meaning. This indicates that swear words present a specific synonymy which is particular to them. Swearing is formulaic as the meaning of the entire sequence cannot be understood from the words it contains, nor from its grammatical configuration. This feature is at times considered a case of grammaticalization, which is accompanied by desemanticization. Desemanticization, the loss of meaning, is very common in swear words. As an emotive language genre, swearing is primarily used to communicate the speaker’s attitude. However, the listener will also form their own interpretation of the utterance on the basis of the available linguistic and non-linguistic information. Ultimately, the speaker cannot be certain of the exact impact any use of swearing will have. This may lead to severe consequences or penalties.

In Chapter 2, Ljung elaborates on the subcategories of swearing. He uses the distinction between function and theme as the main aspects of the taxonomy provided in his study. The term ‘function’ refers to the uses of swearing, while ‘theme’ refers to the areas of taboo language from which the swearer draws his or her swear words. The pertinent functions can be divided into the three categories of stand-alones, slot fillers, and replacive swearing, each of which has its own subdivisions. This chapter also lists five major – as well as some minor – themes from which most languages draw their swearing vocabulary. The first major theme is religion. In Christian cultures, there is a distinction between celestial and diabolic swearing, but among Muslims, diabolic themes apparently do not occur. The second and very popular theme is scatological. The third one is about sex organs. Using taboo words for the female sex organ was the most popular among all the languages studied by Ljung. The fourth theme revolves around sexual activities.

In some Germanic languages, such as German and Swedish, speakers never use their taboo words for sexual intercourse in swearing. The final theme is about the mother, which is very widespread. Indeed, it can be subsumed under the category of ‘ritual insults’. Except for English, the Germanic languages do not use this theme in swearing. Moreover, the mother theme’s abbreviated format, e.g., English ‘Your mother!’, is found in many languages. Among minor themes of swearing, ancestors play a crucial role in several cultures. Animals, disease, and prostitution are not uncommon. Death plays a significant role in all cultures, and some languages prefer euphemistic terms for discussing that subject.

Chapter 3 deals with the ‘History of Swearing’. It explains the first recorded instances of swearing, and the social, cultural, and global impacts of the use of swearing up to the twentieth century. The first two recorded cases of swearing come from Ancient Egypt. From the very beginning, swearing shows traces of self-cursing. Swearing by Zeus or Hercules was totally acceptable in classical Greek and Latin; swearing thus focused on the use of the names of gods, and bad language was not present in it. This does not mean that classical Latin had no ‘bad words’, but that ‘swearing was not part of the linguistic repertory’ (p. 51). In addition, gender-based differences were apparent among the Romans.

Uttering a swear word in public in medieval times could lead to the death penalty. Swear words were hence used in oral interactions for hundreds of years before ever being recorded in written language: people did not dare to use them in writing. Despite such severe punishments and the rise of the power of the Church during the Middle Ages, the use of swearing was neither eliminated nor reduced. In fact, swearing increased. It became very common among all social classes regardless of gender or age. Moreover, the use of swearing became an art form, since it could convey a well-designed linguistic ‘product’ and be used in a very sophisticated form.

In Great Britain, swearing reached a high point during the eighteenth century, but in the following century respected members of society ceased using such language. Swearing remains the most popular way to express anger among soldiers and sailors of any rank. In the twentieth century, swearing in general and the use of four-letter words in particular became almost one and the same. ‘Fuck’ has been used since the seventeenth century; compared to other four-letter words, its use is quite recent. However, this does not mean that other types of swearing have diminished in use. Scatological swearing, which is used in all languages, showed the highest usage of all types of swearing in this study.

Chapter 4 focuses on ‘Expletive Interjections’, i.e., how swearing, in many languages, contains expletives for exclamations of pain, surprise, or annoyance. Ljung formulates the hypothesis that any utterance can be an exclamation; nevertheless, what matters is the delivery. In fact, the delivery carries the representation of the speaker’s state of mind, while the syntax or other features of the utterance are of lesser importance.

Ljung’s study of expletive interjections in the BNC shows that the majority of expletive interjections are religious in nature, such as ‘Oh God’ and ‘Hell’. Expletive interjections may be used in two different ways. First, there are reactive interjections – often thought to be the most frequent ones – which indicate the speaker’s involuntary reaction to stimuli, as in exclamations of surprise, annoyance, or pain. By contrast, pragmatic interjections fulfill the communicative functions of subjectivity, interactivity, and textuality. These three functions are strongly related to the category of pragmatic markers, and their use exceeded that of reactive interjections in Ljung’s study (2009). It is evident that the same interjection can carry different meanings on different occasions. Furthermore, the majority of pragmatic interjections were used as slot fillers, particularly before clauses.

Chapter 5 discusses ‘Oaths, Emphatic Denial, and Curses’. Informal oaths and curses are the two oldest forms of swearing. Present-day English speakers have fewer choices as far as oaths are concerned and show a lack of creativity in their oaths compared to speakers from the Middle Ages. However, there are several languages, including Arabic, in which oaths are alive and unaffected by the interjectionalization and grammaticalization that have affected oaths in the languages spoken in Western-derived cultures. In addition, emphatic denial is found in many languages. This type of swearing uses emphatic utterances to deny statements, a usage similar to oaths. It is particularly used for denying the truth of a subsequent utterance, as in the phrase ‘The hell it is!’ In emphatic denial swearing, scatological and religious themes are most common.

Chapter 6 specifically addresses three types of swearing: ritual insults, name calling, and unfriendly suggestions. With few exceptions, infernal powers, worldly powers, and summons of heaven do not appear in these types of swearing as they do in curses. Instead, the types of swearing covered in this chapter use more common taboo themes like sex, mothers, masturbation, animals, and disease. The most popular theme in ritual insults is the mother theme. This theme is less related to languages than to cultures. In other words, two languages belonging to the same language family, such as the Finno-Ugric languages Finnish and Hungarian, do not treat the mother theme in the same way. However, due to immigration, linguistic and cultural boundaries sometimes get blurred. Some swear words that were entirely absent in particular languages or cultures have begun to surface in them due to the impact of linguistic and/or cultural contacts.

In Chapter 7, ‘Degree, Dislike, Emphasis, Exasperation, and Annoyance’, Ljung introduces swear words that are used inside larger units, e.g., as slot fillers. Since the focus is on swear words only, these are called ‘expletive slot fillers’ which express the speaker’s state of mind. It is important to keep in mind that in spite of the tendency to categorise swear words, individual opinions on swearing, religious beliefs, and appropriate behaviour differ a lot. Ljung states that all the languages in his study use expletive slot fillers to indicate emphasis and dislike. He also indicates that in certain languages, such as Arabic, some ways of expressing dislike are absent. In addition, there are languages featuring different linguistic typologies and cultures that still use the same means of expressing dislike and intensification. Cross-linguistic comparisons of swearing constitute a fertile area for research into emotive language. Therefore, for those interested in studying swear words and emotive language, it would be worthwhile to extend the comparison to other languages not covered by Ljung’s study.

Chapter 8 focuses only on ‘Replacive Swearing’. In the previous chapters the author mentioned that it can be hard to determine the category to which a specific swear word belongs, or which criteria might apply for classifying that item as a swear word. However, this issue becomes even more difficult in the case of languages that assign more than one literal meaning to a specific swear word. In fact, understanding the illocutionary force of a swear word depends on linguistic and situational factors as well as the context of the utterance. Ljung elucidates a very interesting structure for creating new vocabulary in Russian which makes the Russian swearing lexicon quite impressive. Ljung’s findings indicate that there are significant similarities between the swear word systems of languages, regardless of their cultural and linguistic differences.

The distinct chapters of this book can be used as teaching material for various courses, including courses on sociolinguistics and historical linguistics. Scholars interested in such topics as comparative linguistics or multilingualism can benefit from reading the analysis presented in this book on the languages used in Ljung’s study. At the same time, the volume is a valuable resource for graduate students and researchers. Each chapter provides readers with rich information about pertinent studies as well as sufficient examples. In addition, Ljung’s own findings provide in-depth analyses of the proposed topics of each chapter. Since the author covers twenty-five languages in his study, his findings constitute a significant resource in their own right.

PDF downloads of the text are available online. 

 

 

 

 

Back to Basics / Positive and Negative Liberty

Stanford Encyclopedia of Philosophy 

I’m highlighting and commenting as I read this, based on my experiences as an American citizen for 50+ (conscious) years, and from the POV of a lifelong (born-as) Asperger human.

https://plato.stanford.edu/entries/liberty-positive-negative/

First published Thu Feb 27, 2003; substantive revision Tue Aug 2, 2016

Negative liberty is the absence of obstacles, barriers or constraints. One has negative liberty to the extent that actions are available to one in this negative sense. Positive liberty is the possibility of acting — or the fact of acting — in such a way as to take control of one’s life and realize one’s fundamental purposes. While negative liberty is usually attributed to individual agents, positive liberty is sometimes attributed to collectivities, or to individuals considered primarily as members of given collectivities.

The idea of distinguishing between a negative and a positive sense of the term ‘liberty’ goes back at least to Kant, and was examined and defended in depth by Isaiah Berlin in the 1950s and ’60s. Discussions about positive and negative liberty normally take place within the context of political and social philosophy. They are distinct from, though sometimes related to, philosophical discussions about free will. (In my opinion, a dead dodo term with only superstition and western illusion to support it.) Work on the nature of positive liberty often overlaps, however, with work on the nature of autonomy.

As Berlin showed, negative and positive liberty are not merely two distinct kinds of liberty; they can be seen as rival, incompatible interpretations of a single political ideal. (Why not as mutually informative in a discussion about freedom?) Since few people claim to be against liberty, the way this term is interpreted and defined can have important political implications. Political liberalism tends to presuppose a negative definition of liberty: liberals generally claim that if one favors individual liberty one should place strong limitations on the activities of the state. Critics of liberalism often contest this implication by contesting the negative definition of liberty: they argue that the pursuit of liberty understood as self-realization or as self-determination (whether of the individual or of the collectivity) can require state intervention of a kind not normally allowed by liberals. (Hmmm… in the U.S., these two would appear to be reversed, with Republicans (conservatives) going for strong limitations on the state, and Democrats (liberals) favoring strong limitations on human behavior by the state. Perhaps this reversal exists because liberals in the U.S. are the present-day perpetrators of Puritanism?)

Many authors prefer to talk of positive and negative freedom. This is only a difference of style, and the terms ‘liberty’ and ‘freedom’ are normally used interchangeably by political and social philosophers. Although some attempts have been made to distinguish between liberty and freedom (Pitkin 1988; Williams 2001; Dworkin 2011), generally speaking these have not caught on. Neither can they be translated into other European languages, which contain only the one term, of either Latin or Germanic origin (e.g. liberté, Freiheit), where English contains both.

1. Two Concepts of Liberty

Imagine you are driving a car through town, and you come to a fork in the road. You turn left, but no one was forcing you to go one way or the other. Next you come to a crossroads. You turn right, but no one was preventing you from going left or straight on. There is no traffic to speak of and there are no diversions or police roadblocks. So you seem, as a driver, to be completely free. But this picture of your situation might change quite dramatically if we consider that the reason you went left and then right is that you’re addicted to cigarettes and you’re desperate to get to the tobacconists before it closes. Rather than driving, you feel you are being driven, as your urge to smoke leads you uncontrollably to turn the wheel first to the left and then to the right. Moreover, you’re perfectly aware that your turning right at the crossroads means you’ll probably miss a train that was to take you to an appointment you care about very much. You long to be free of this irrational desire that is not only threatening your longevity but is also stopping you right now from doing what you think you ought to be doing. (Nice concise description of the human condition in Western culture. This problem can only exist in cultures which acknowledge the individual.) 

This story gives us two contrasting ways of thinking of liberty. On the one hand, one can think of liberty as the absence of obstacles external to the agent. You are free if no one is stopping you from doing whatever you might want to do. In the above story you appear, in this sense, to be free. On the other hand, one can think of liberty as the presence of control on the part of the agent. To be free, you must be self-determined, which is to say that you must be able to control your own destiny in your own interests. In the above story you appear, in this sense, to be unfree: you are not in control of your own destiny, as you are failing to control a passion that you yourself would rather be rid of and which is preventing you from realizing what you recognize to be your true interests. One might say that while on the first view liberty is simply about how many doors are open to the agent, on the second view it is more about going through the right doors for the right reasons. (I don’t see these as being exclusive to each other at all: the “one must take sides” position is neurotypical, not Asperger. An Asperger can be aware of a proposed distinction without having to jump on one horse and ride it into the swamp of social typical insanity. These concepts are useful tools with which to analyze a situation, a pattern or a system. They are not universals, absolutes or ideas that demand loyalty.)

In a famous essay first published in 1958, Isaiah Berlin called these two concepts of liberty negative and positive respectively (Berlin 1969).[1] The reason for using these labels is that in the first case liberty seems to be a mere absence of something (i.e. of obstacles, barriers, constraints or interference from others), whereas in the second case it seems to require the presence of something (i.e. of control, self-mastery, self-determination or self-realization). In Berlin’s words, we use the negative concept of liberty in attempting to answer the question “What is the area within which the subject — a person or group of persons — is or should be left to do or be what he is able to do or be, without interference by other persons?”, whereas we use the positive concept in attempting to answer the question “What, or who, is the source of control or interference that can determine someone to do, or be, this rather than that?” (1969, pp. 121–22). (These are not actionable ideas, because they propose an unachievable separation of “thought” from real environments. These are “oughts and shoulds” that deny the facts of human social existence. Social systems control human behavior. That is, the answers to these questions were decided long ago by DEAD PEOPLE, not people living today. Without this realization, social structures appear to neurotypicals to be part of the fabric of space-time – not scientific space-time, but supernatural space-time!)

It is useful to think of the difference between the two concepts in terms of the difference between factors that are external and factors that are internal to the agent. (Yes, this is a useful tool, but in action, these internal and external factors are not exclusive) While theorists of negative freedom are primarily interested in the degree to which individuals or groups suffer interference from external bodies, theorists of positive freedom are more attentive to the internal factors affecting the degree to which individuals or groups act autonomously. Given this difference, one might be tempted to think that a political philosopher should concentrate exclusively on negative freedom, a concern with positive freedom being more relevant to psychology or individual morality than to political and social institutions. (Here we go; the neurotypical universe of “chopped salad”) This, however, would be premature, for among the most hotly debated issues in political philosophy are the following: Is the positive concept of freedom a political concept? Can individuals or groups achieve positive freedom through political action? Is it possible for the state to promote the positive freedom of citizens on their behalf? And if so, is it desirable for the state to do so? The classic texts in the history of western political thought are divided over how these questions should be answered: theorists in the classical liberal tradition, like Constant, Humboldt, Spencer and Mill, are typically classed as answering ‘no’ and therefore as defending a negative concept of political freedom; theorists that are critical of this tradition, like Rousseau, Hegel, Marx and T.H. Green, are typically classed as answering ‘yes’ and as defending a positive concept of political freedom.

Above we have a concise description of the “values” situation: traditional Asperger temperament vs. social repression of one’s native or instinctive concept of what it means to be human. This goes far deeper than retraining Asperger children to mimic social “niceties” in order not to be rejected by the group.

In its political form, positive freedom has often been thought of as necessarily achieved through a collectivity. Perhaps the clearest case is that of Rousseau’s theory of freedom, according to which individual freedom is achieved through participation in the process whereby one’s community exercises collective control over its own affairs in accordance with the ‘general will’. Put in the simplest terms, one might say that a democratic society is a free society because it is a self-determined society, and that a member of that society is free to the extent that he or she participates in its democratic process. (One might say this, but social typicals are full of blah, blah, blah that is “pie in the sky” – blind to reality) But there are also individualist applications of the concept of positive freedom. For example, it is sometimes said that a government should aim actively to create the conditions necessary for individuals to be self-sufficient or to achieve self-realization. The welfare state has sometimes been defended on this basis, as has the idea of a universal basic income. (I have nothing against a support system for providing decent distribution of resources to those who cannot “fend for themselves” – but the welfare system – in the U.S., at least – is not this: it is a system for controlling who gets access to the upper levels of the social pyramid, and who remains trapped at the bottom.) The negative concept of freedom, on the other hand, is most commonly assumed in liberal defences of the constitutional liberties typical of liberal-democratic societies, such as freedom of movement, freedom of religion, and freedom of speech, and in arguments against paternalist or moralist state intervention. (More supernatural blah, blah, blah that has nothing to do with the reality of a severe social inequality disguised as democracy. I’m sure that the neglected and persecuted minorities “voted for” their own oppression!) It is also often invoked in defences of the right to private property. This said, some philosophers have contested the claim that private property necessarily enhances negative liberty (Cohen 1991, 1995), and still others have tried to show that negative liberty can ground a form of egalitarianism (Steiner 1994). (Neurotypical – either-or, black and white, non-negotiable “supernatural” absolutism. Pick a side…)

After Berlin, the most widely cited and best developed analyses of the negative concept of liberty include Hayek (1960), Day (1971), Oppenheim (1981), Miller (1983) and Steiner (1994). Among the most prominent contemporary analyses of the positive concept of liberty are Milne (1968), Gibbs (1976), C. Taylor (1979) and Christman (1991, 2005).

2. The Paradox of Positive Liberty

Many liberals, including Berlin, have suggested that the positive concept of liberty carries with it a danger of authoritarianism. Consider the fate of a permanent and oppressed minority. Because the members of this minority participate in a democratic process characterized by majority rule, they might be said to be free on the grounds that they are members of a society exercising self-control over its own affairs. But they are oppressed, and so are surely unfree. (Democracy as a sham) Moreover, it is not necessary to see a society as democratic in order to see it as self-controlled; one might instead adopt an organic conception of society, according to which the collectivity is to be thought of as a living organism, and one might believe that this organism will only act rationally, will only be in control of itself, when its various parts are brought into line with some rational plan devised by its wise governors (who, to extend the metaphor, might be thought of as the organism’s brain). In this case, even the majority might be oppressed in the name of liberty. (The preposterous notion that all humans can be forced to be “perfect” someday – just apply psychological diagnosis and treatment, social engineering, pharmacology, genetic fixes. Until then, despots must rule, by default.)

Such justifications of oppression in the name of liberty are no mere products of the liberal imagination, for there are notorious historical examples of their endorsement by authoritarian political leaders. (This continuing “charade” of “Liberals believe this, Conservatives believe that!” ALL leaders are authoritarian; their goal is control of the social pyramid.) Berlin, himself a liberal and writing during the cold war, was clearly moved by the way in which the apparently noble ideal of freedom as self-mastery or self-realization had been twisted and distorted by the totalitarian dictators of the twentieth century — most notably those of the Soviet Union — so as to claim that they, rather than the liberal West, were the true champions of freedom. The slippery slope towards this paradoxical conclusion begins, according to Berlin, with the idea of a divided self. To illustrate: the smoker in our story provides a clear example of a divided self, for she is both a self that desires to get to an appointment and a self that desires to get to the tobacconists, and these two desires are in conflict. We can now enrich this story in a plausible way by adding that one of these selves — the keeper of appointments — is superior to the other: the self that is a keeper of appointments is thus a ‘higher’ self, and the self that is a smoker is a ‘lower’ self. The higher self is the rational, reflecting self, the self that is capable of moral action and of taking responsibility for what she does. This is the true self, for rational reflection and moral responsibility are the features of humans that mark them off from other animals. (This “higher self” is the socially invented, imaginary “western” human, who is not rational or moral at all, but entirely self-serving; a person indoctrinated with the concept that obedience to social prescriptions is a rational decision, when it is actually an archaic, irrational religious mandate. The myth of “higher self vs. lower self” is merely a continuation of Old Testament original sin (our animal nature) vs. “obedient, conforming, self-hating humans” who are slaves to a social hierarchy; this propaganda works for any system, whatever we choose to label it. Note: The Reformation did not change this: Henry VIII, the “father” of a rebellious protestant regime, was a serial rapist and murderer beyond the aspirations of the misogynist criminal heroes of Biblical fame.) The lower self, on the other hand, is the self of the passions, of unreflecting desires and irrational impulses. One is free, then, when one’s higher, rational self is in control and one is not a slave to one’s passions or to one’s merely empirical self. The next step down the slippery slope consists in pointing out that some individuals are more rational than others, and can therefore know best what is in their and others’ rational interests. (Western psychology feeds on this myth) This allows them to say that by forcing people less rational than themselves to do the rational thing and thus to realize their true selves, they are in fact liberating them from their merely empirical desires. Occasionally, Berlin says, the defender of positive freedom will take an additional step that consists in conceiving of the self as wider than the individual and as represented by an organic social whole — “a tribe, a race, a church, a state, the great society of the living and the dead and the yet unborn”.
The true interests of the individual are to be identified with the interests of this whole, and individuals can and should be coerced into fulfilling these interests, for they would not resist coercion if they were as rational and wise as their coercers. “Once I take this view”, Berlin says, “I am in a position to ignore the actual wishes of men or societies, to bully, oppress, torture in the name, and on behalf, of their ‘real’ selves, in the secure knowledge that whatever is the true goal of man … must be identical with his freedom” (Berlin 1969, pp. 132–33).

The contention that there is a rational or moral distinction between a "pure and democratic United States" and any nation, political system, or culture that WE designate as inferior to us is outrageous. The U.S. acts on purely supernatural and predatory religious prejudice, expressed (in the Puritan way) as "he who has the most money has God's approval to be the Chosen Tyrant."

Those in the negative camp try to cut off this line of reasoning at the first step, by denying that there is any necessary relation between one's freedom and one's desires. Since one is free to the extent that one is externally unprevented from doing things, they say, one can be free to do what one does not desire to do. If being free meant being unprevented from realizing one's desires, then one could, again paradoxically, reduce one's unfreedom by coming to desire fewer of the things one is unfree to do. One could become free simply by contenting oneself with one's situation. (This is a viable option, which millions of people act on, whether out of necessity or a preference for simplicity.) A perfectly contented slave is perfectly free to realize all of her desires. (Wow! Nonsense.) Nevertheless, we tend to think of slavery as the opposite of freedom. More generally, freedom is not to be confused with happiness, for in logical terms there is nothing to stop a free person from being unhappy or an unfree person from being happy. The happy person might feel free, but whether they are free is another matter (Day, 1970). Negative theorists of freedom therefore tend to say not that having freedom means being unprevented from doing as one desires, but that it means being unprevented from doing whatever one might desire to do (Steiner, 1994. Cf. Van Parijs, 1995; Sugden, 2006). (More neurotypical nonsense: no definition of an actual state of freedom exists; no state labeled "freedom" has been shown to exist, except as an "abstract feeling.")

Some theorists of positive freedom bite the bullet and say that the contented slave is indeed free — that in order to be free the individual must learn, not so much to dominate certain merely empirical desires, but to rid herself of them. She must, in other words, remove as many of her desires as possible. As Berlin puts it, if I have a wounded leg 'there are two methods of freeing myself from pain. One is to heal the wound. But if the cure is too difficult or uncertain, there is another method. I can get rid of the wound by cutting off my leg' (1969, pp. 135–36). This is the strategy of liberation adopted by ascetics, stoics and Buddhist sages. It involves a 'retreat into an inner citadel' — a soul or a purely noumenal self — in which the individual is immune to any outside forces. (I think this is a western misinterpretation: the retreat is a response to these unavoidable forces, and it makes no claim of immunity from them!) But this state, even if it can be achieved, is not one that liberals would want to call one of freedom, for it again risks masking important forms of oppression. It is, after all, often in coming to terms with excessive external limitations in society that individuals retreat into themselves, pretending to themselves that they do not really desire the worldly goods or pleasures they have been denied. Moreover, the removal of desires may also be an effect of outside forces, such as brainwashing, which we should hardly want to call a realization of freedom.

In the U.S., brainwashing takes the form of Consumer Capitalism (marketing and advertising) and political impotence: any and all needs and desires that are natural and necessary to a proper, happy animal life are denied and replaced by cheap novelties, infantile distractions, and the purchase of status objects, which take precedence over the acquisition of food, shelter, and meaningful relationships. The result is an epidemic of pathology and self-destruction.

Because the concept of negative freedom concentrates on the external sphere in which individuals interact, it seems to provide a better guarantee against the dangers of paternalism and authoritarianism perceived by Berlin. To promote negative freedom is to promote the existence of a sphere of action within which the individual is sovereign, and within which she can pursue her own projects subject only to the constraint that she respect the spheres of others. Humboldt and Mill, both advocates of negative freedom, compared the development of an individual to that of a plant: individuals, like plants, must be allowed to grow, in the sense of developing their own faculties to the full and according to their own inner logic. Personal growth is something that cannot be imposed from without, but must come from within the individual. (What crap! It is exactly these individual propensities that the social system is designed to quash without mercy.)

3. Two Attempts to Create a Third Way

Critics, however, have objected that the ideal described by Humboldt and Mill looks much more like a positive concept of liberty than a negative one. Positive liberty consists, they say, in exactly this growth of the individual: the free individual is one that develops, determines and changes her own desires and interests autonomously and from within. This is not liberty as the mere absence of obstacles, but liberty as autonomy or self-realization. Why should the mere absence of state interference be thought to guarantee such growth? Is there not some third way between the extremes of totalitarianism and the minimal state of the classical liberals — some non-paternalist, non-authoritarian means by which positive liberty in the above sense can be actively promoted?

Blah, blah, blah! Neurotypicals pretend to "think," but in their addiction to "magic word concepts" they are blind to reality. This Asperger would say that only by understanding the actual manifestations of social reality (which are anti-individual, anti-liberty, anti-self-actualization, anti-moral, anti-ethical, anti-nature, anti-happiness) can the individual find a workable strategy to cope with the human landscape and preserve some measure of integrity.

(Which, as a rational being, one must accept may not be possible!)

Ouch! I've given myself a headache! Time for some R&R… Continued in the next post.