Light Skin and Lactose / Recent Adaptations to Cereal Diet

IFL Science

Why Do Europeans Have White Skin?

April 6, 2015 | by Stephen Luntz (shortened to get to the point)

The 1000 Genomes Project is comparing the genomes of modern individuals from specific regions in Europe with 83 samples taken from seven ancient European cultures. Harvard University’s Dr. Iain Mathieson has identified five features that spread through Europe, indicating a strong selection advantage.

At the annual conference of the American Association of Physical Anthropologists, Mathieson said his team distinguished, “between traits that have changed consistently with population turnovers, traits that have changed apparently neutrally, and traits that have changed dramatically due to recent natural selection.”

… most people of European descent are lactose tolerant, to the extent that milk products not only form a major source of nutrition but are a defining feature of European cultures…that the capacity to digest lactose as an adult appeared in the population after the development of farming. Two waves of farmers settled Europe 7,800 and 4,800 years ago, but it was only 500 years later that the gene for lactose tolerance became widespread.

…hunter-gatherers in what is now Spain, Luxembourg and Hungary had dark-skinned versions of the two genes most strongly associated with skin color. The oldest pale versions of the SLC24A5 and SLC45A2 genes that Mathieson found were at Motala in southern Sweden, 7,700 years ago. The gene associated with blue eyes and blond hair was found in bodies from the same site. H/T ScienceMag.


____________________________________________________________________________________________

From: Civilization Fanatics Forum

New research debunks the theory that lighter skin arose gradually in Europeans nearly 40,000 years ago, revealing instead that it evolved recently – only about 7,000 years ago.

People in tropical to subtropical parts of the world manufacture vitamin D in their skin as a result of UV exposure. At northern latitudes, dark skin would have reduced the production of vitamin D. If people weren’t getting much vitamin D in their diet, then selection for pre-existing mutations for lighter skin (less pigment) would “sweep” the farming population.  

New scientific findings show that prehistoric European hunter-gatherers were dark-skinned, but ate vitamin D-rich meat, fish, mushrooms and fruits. With the switch to agriculture, the amount of vitamin D in the diet decreased – and resulted in selection for pale skin among European farmers.
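The selective “sweep” described above can be illustrated with a toy population-genetics model. This is a hedged back-of-envelope sketch: the selection coefficient `s`, the starting allele frequency and the 25-year generation time are illustrative assumptions, not values from the study.

```python
# Toy haploid selection model of a "sweep": carriers of a beneficial
# light-skin allele leave (1+s) times as many offspring per generation.
# All parameter values are illustrative assumptions.

def generations_to_spread(p0=0.01, s=0.05, threshold=0.95):
    """Count generations for the allele frequency to rise from p0 to threshold."""
    p, gens = p0, 0
    while p < threshold:
        # Standard haploid selection recursion: p' = p(1+s) / (1 + p*s)
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

gens = generations_to_spread()
print(f"~{gens} generations, i.e. roughly {gens * 25} years at 25 yr/generation")
```

Even a modest 5% fitness advantage carries a rare allele to near-universality in on the order of 150 generations – a few thousand years, which is why a post-farming sweep is plausible on this timescale.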

Findings detailed today (Jan. 26, 2014) in the journal Nature, “also hint that light skin evolved not to adjust to the lower-light conditions in Europe compared with Africa, but instead to the new diet that emerged after the agricultural revolution”, said study co-author Carles Lalueza-Fox, a paleogenomics researcher at Pompeu Fabra University in Spain.

The finding implies that for most of their evolutionary history, Europeans were not what people today would call ‘Caucasian’, said Guido Barbujani, president of the Associazione Genetica Italiana in Ferrara, Italy, who was not involved in the study.


 

 

 

Emergence of “humans” / Berkeley.edu + Comments


Simplified socio-cultural guide to identifying male / female.

 

The evolution of Primates – Gender dimorphism /

Top: Orangutan male and female. Middle: Modern social human; all “cases” of allowable bathroom use. Bottom: Idiot’s guide to gender ID; U.S.

 

Low sexual dimorphism in modern social humans? Really? Sexual dimorphism is created culturally in humans – and wow! Gender assignment is all mixed up! In fact, one might observe that body alteration, decoration, behavior and costume are how Homo sapiens compensates for being a strange hairless ape, born without the elaborate fur, plumage, texture, color and behavioral displays of other species. We “copy” other animals and utilize materials in the environment to socially broadcast our sex and gender – from the violent hyper-male to the “big boob” sex object that is the “ideal” American woman. Some cultures disguise or blur a person’s sex / gender. Neoteny promotes a childlike appearance in males and females – the current trend is toward androgyny.

Any questions about this guy’s gender? 


Old school “gun”


Below: Modern neotenic “feminized” male – androgyny is the popular goal.


__________________________________________________________________________________________

How bizarre can the “story” of human evolution get?

The following chapter, “The Emergence of Humans,” is from Berkeley.edu, a site about evolution for students. I confess that to my Asperger type of thinking, this review of evolutionary studies is excruciating. One (dumb) point of view is especially mind-boggling: that chimpanzees are a legitimate focus of “study and research” into ancestral humans and modern human behavior merely because “they are alive” and eligible for torture in labs; they don’t have “souls” and don’t “suffer.” And they appeal to neotenic social humans by scoring high on the “cute” scale.

The apparent inability of researchers to get past this 19th C. world view is stunning; instead of a thorough examination of assumptions across disciplines, we again see “warfare” between disciplines, and the ongoing attempt to assemble a human “dinosaur” from bits and pieces of fossilized thinking. In fact, paleontology has exploded with new ideas since “old” dinosaur reconstructions were discovered to be highly inaccurate. Hint, hint.

FOUND! The last common ancestor of Humans and Chimps.


Berkeley.edu / The emergence of humans

The narratives of human evolution are oft-told and highly contentious. There are major disagreements in the field about whether human evolution is more like a branching tree or a crooked stick, depending partly on how many species one recognizes. Interpretations of almost every new find will be sure to find opposition among other experts. Disputes often center on diet and habitat, and whether a given animal could occasionally walk bipedally or was fully upright. What can we really tell about human evolution from our current understanding of the phylogenetic relations of hominids and the sequence of evolution of their traits?

Hominid evogram

(consistency problem)

To begin with, let’s take a step back. Although the evolution of hominid features is sometimes put in the framework of “apes vs. humans,” the fact is that humans are apes, just as they are primates and mammals. A glance at the evogram shows why. The other apes — chimp, bonobo, gorilla, orangutan, gibbon — would not form a natural, monophyletic group (i.e., a group that includes all the descendants of a common ancestor) — if humans were excluded. Humans share many traits with other apes, and those other “apes” (i.e., non-human apes) don’t have unique features that set them apart from humans. Humans have some features that are uniquely our own, but so do gorillas, chimps, and the rest. Hominid evolution should not be read as a march to human-ness (even if it often appears that way from narratives of human evolution). Students should be aware that there is not a dichotomy between humans and apes. Humans are a kind of ape.

Virtually all systematists and taxonomists agree that we should only give names to monophyletic groups. However, this evogram shows that this guideline is not always followed. For an example, consider Australopithecus. On the evogram you can see a series of forms, from just after Ardipithecus to just before Homo in the branching order, that are all called Australopithecus. (Even Paranthropus is often considered an australopithecine.) But as these taxa appear on the evogram, “Australopithecus” is not a natural group, because it is not monophyletic: some forms, such as A. africanus, are found to be closer to humans than A. afarensis and others. Beyond afarensis, for example, all other Australopithecus and Homo share “enlarged cheek teeth and jaws,” because they have a more recent common ancestor. Eventually, several of these forms will have to have new genus names if we want to name only monophyletic groups. Students should avoid thinking of “australopithecines” as a natural group with uniquely evolved traits that link its members together and set it apart from Homo. Instead they should focus on the pattern of shared traits among these species and the Homo clade, recognizing that each species in this lineage gains more and more features that are shared by Homo.

In popular fiction and movies, the concept of the wild “ape-man” is often that of a tree-living, vine-swinging throwback like Tarzan. However, the pantheon of hominids is much richer than this, as the evogram shows, with forms as different as Paranthropus and Ardipithecus. For example, imagine going back in time to the common ancestor of humans and chimps (including bonobos). What did that common ancestor look like? In the Origin of Species Darwin noted that the extinct common ancestor of two living forms should not be expected to look like a perfect intermediate between them. Rather, it could look more like one branch or the other branch, or something else entirely.

Found! The last common ancestor of humans and chimps.

Did the common ancestor of humans and chimps conform to the ape-man myth and live in the trees, swinging from vines? To answer this, we have to focus not only on anatomy but on behavior, and we have to do it in a phylogenetic context. Apes such as the gibbon and orangutan, which are more distantly related to humans, are largely arboreal (i.e., tree-living). The more closely related apes such as the gorilla and chimps are relatively terrestrial, although they can still climb trees. The feet of the first hominids have a considerable opposition of the big toe to the others but relatively flat feet, as arboreal apes generally do. But other features of their skeleton, such as the position of the foramen magnum underneath the skull, the vertically shortened and laterally flaring hips, and the larger head of the femur, suggest that they were not just mainly terrestrial but habitually bipedal, unlike their knuckle-walking relatives. Most evidence suggests that the hominid lineage retained some of the anatomical features related to arboreal life and quadrupedal gait even after it had evolved a more terrestrial lifestyle and a bipedal gait. There is no fossil record of these behaviors, but the balance of the available evidence supports the hypothesis that the hominid ancestor was terrestrial and bipedal.

Much discussion in human paleontology surrounds the evolution of a bipedal, upright stance. When and why did this occur? One thing to keep in mind is that “bipedal” and “upright” are not equivalent terms. An animal can be bipedal without having a vertical backbone (think T. rex). It seems clear from the fossil record of hominids that habitual bipedality preceded the evolution of a recurved spine and upright stance. Other changes in the gait, such as how the relatively “splayed” gait of chimps evolved into the gait of humans, who put one foot directly in front of the other, involve studying the hip joint, the femur, and the foot. The famous Laetoli footprints attributed to Australopithecus afarensis are bipedal, but they are still relatively splayed compared to the tracks of living humans. (WOW! they are doing it again despite their own caution: humans did not evolve from chimpanzees!)

Another extremely interesting feature in hominid evolution is the degree of sexual dimorphism (i.e., physical differences between the sexes) in different species. Sexual dimorphism is linked to features of sociality and mate competition in many sorts of animals. To understand the evolution of this feature in humans, which have relatively low sexual dimorphism, we need to consider the other apes, in which sexual dimorphism tends to be moderate to high (with exceptions). 

(Again, culture is utterly ignored: the fact is, women and men “self-morph” according to socio-cultural “genders” into very dimorphic animals.)

We don’t have sufficient evidence about Sahelanthropus, Orrorin, and Ardipithecus to understand much about sex differences in these species, but we do know that A. afarensis had relatively high sexual dimorphism: the males were considerably larger than the females. The difference seems to have been less in A. africanus, Paranthropus, and most of the Homo lineage. The evolutionary explanation for A. afarensis‘ dimorphism is not entirely clear. The larger males may have used their size to attract females and/or repel rivals, which would fit with an explanation based on sexual selection. Or the males and females may have been differently sized because they played different roles in their groups, the males hunting and gathering and the females caring for the young. Darwin thought that this differentiation of the sexes may have played a critical role in human evolution, but we simply do not know much about the role of this feature in A. afarensis. Some, all, or none of these functions may have been in play. (Novel-writing again! If we don’t have facts about a subject, why not say so? Speculation becomes dogma in the “magic word syndrome” social mind, and people argue over imaginary histories and qualities. Also, I suspect that once again the writers have Euro-American humans in mind regarding sexual dimorphism: why?)

We do know that by the time the animals known as Homo evolved, they could make tools, and their hands were well suited for complex manipulations. These features were eventually accompanied by the reduction of the lower face, particularly the jaws and teeth, the recession of the brow, the enlargement of the brain, the evolution of a more erect posture, and the evolution of a limb more adapted for extended walking and running (along with the loss of arboreally oriented features). The evogram shows the hypothesized order of acquisition of these traits. Yet each of the Homo species was unique in its own way, so human evolution should not be seen as a simple linear progression of improvement toward our own present-day form. (But, we show it that way, anyway!)

More…. Should you need a mind-boggling experience:

https://en.wikibooks.org/wiki/Survey_of_Communication_Study/Chapter_13_-_Gender_Communication

And to clarify all this: 

Male Beards / Covering up a Weak Chin?

The contemporary “love affair” that men are having with their ability to grow facial hair may be a reaction to the feminization (neoteny) of the male face that has been a trend for decades. Ironically, soldiers sent to Iraq and Afghanistan, who grew beards in order to “fit in” with ideals of manhood in those cultures, have encouraged the new “manly man” tradition.

No. Possibly the most unattractive type of beard: the Old Testament, patriarchal, “we hate women” facial hair.

The most creepy facial hair of all: The long and scraggly Patriarchal Old Testament, ‘we hate women’ beard. This style says, “I don’t know what a woman is, and I don’t want to know.”

______________________________________

I intended to write a post concerning “facial expression & mind reading.” Psychologists have made quite a big deal out of their contention that Asperger people are devoid of the ability to “read” the messages sent human to human via facial expressions and body language, and that this phantom “ability” must be displayed by an individual in order to be classified as “normal” or fully human. Other than the arrogance of this declaration, which to begin with, ignores cultural traditions and differences, one simply cannot get past asking questions about physical aspects that must be addressed in order to support the definition of “human” that has been derived by psychologists.

If facial expressions are necessary to human to human communication, doesn’t extensive facial hair negatively impact this “social ability”?


If you go hairy, you had better have the face and body to back it up. A beard does not “hide” a neotenic face. 

How does reading faces apply to earlier generations of males, and to the many cultures around the world that favor or demand that men grow varying amounts of facial hair? Shaving is a product of modern cultures, beginning notably with the Egyptians, who removed facial and body hair because it harbored dirt and lice. Other ancient cultures, including the Greeks, used beard growth to mark the transition to adult obligations and benefits. Ancient Germanic males grew both long hair and full beards. The Romans made a ritual of a young male’s first shave, and afterward favored a clean face. Of course, growing a beard also depends on having hairy ancestors – or does it?


Top: Roman Hercules Bottom: Greek Hercules (Lysippus)

Reconstructions of Early Homo sapiens and a Neanderthal contemporary


Right: Do we actually know how hairy early Homo species were? It would seem that without evidence, artists settle on a 5-day growth or scruffy short beard. Does a beard cover a “weak” Neanderthal chin?

The image of archaic humans, notably Neanderthals, as hairy and unkempt Cave Men has influenced how we interpret hairiness or hairlessness in Homo sapiens. Hair is extremely important in both favorable and unfavorable ways: hair can be a haven for disease and parasites; we need only look to the large amount of time that apes and monkeys spend grooming each other for lice, time that could be spent looking for food, learning about the environment, and practicing skills.

Growing hair requires energy. Our large human brain consumes about 20% of the energy our body generates. It could be that the growth of the modern brain (beginning with Homo erectus) was intricately tied up in a slow feedback cycle: the brain produces energy-saving inventions (fire, tools, clothing, travel to more abundant environments), which frees more energy for the brain, which can increase brain connections, which makes further technical innovation possible, which frees still more energy for the brain. So technology could be seen as part of streamlining the human animal into an energy-conserving species, which in turn improves brain function. In other words, the brain benefits from its own thinking when that thinking becomes a set of “apps” that manipulate the environment and the human body.
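The 20% figure is easy to put in concrete units. A quick arithmetic sketch (the resting metabolic rate used here is a typical adult round number, assumed for illustration):

```python
# Convert "the brain uses ~20% of the body's energy" into watts.
# 1600 kcal/day resting metabolism is an assumed, typical round value.
kcal_per_day = 1600
brain_share = 0.20

brain_kcal = kcal_per_day * brain_share        # kcal/day devoted to the brain
brain_watts = brain_kcal * 4184 / 86400        # 4184 J per kcal, 86400 s per day

print(f"Brain: ~{brain_kcal:.0f} kcal/day, about {brain_watts:.1f} W of continuous power")
```

Roughly 15-20 watts of continuous power – about the draw of a dim light bulb – which is why any invention that saves the body energy effectively subsidizes the brain.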

Meanwhile, what about facial hair? Personally, I’m thankful that I live in a time when men have the choice to grow, or not to grow.

 

____________________________________________________________________________


 

 

 

 

The Whoa! Whoa! Whoa! Reaction / Neanderthal Myths

The “Whoa! Whoa! Whoa!” reaction is what happens when I read articles written for public consumption that “boil down” science for the “educated public” – those who are genuinely interested in the physical universe, but may or may not  have a science background. One of my favorite examples is how Neanderthals are “created” out of the modern social typical penchant (and temperamental obligation) to write stories (myths) from scant, contradictory or preliminary information.


The claim that Neanderthals were “dumb” is dumb. Are these skulls to scale?


Science Shows Why You’re Smarter Than a Neanderthal

Neanderthal brains had more capacity devoted to vision and body control, with less left over for social interactions and complex cognition

By Joseph Stromberg Smithsonian.com March 12, 2013

https://www.smithsonianmag.com/science-nature/science-shows-why-youre-smarter-than-a-neanderthal-1885827/ Full article

COMMENTS: This article hits the Whoa! Stop! barrier before getting past the subhead. “Neanderthal brains had more capacity devoted to vision and body control, with less left over for social interactions and complex cognition.”

  1. This view of the brain as having a “capacity” related to volume – like a closet that can be packed with X amount of clothing and Y amount of shoes, so that if you want to add more shoes or ski equipment, you have to remove clothes to make room – defies what we know (and brag about endlessly) about the brain: it is built of networks that connect across regions and functions, and these networks are PLASTIC – “able to rewire in response to the environment.” That alone blows apart much of what the article has to say.
  2. Visual thinking is judged to be INFERIOR, low-level cognition. Tell that to a hawk, raven or eagle; to giant squid and octopuses; and to the myriad species that utilize various segments of the electromagnetic spectrum to perceive the environment. This opinion is rooted in ignorance, and in the noises made by the perpetual cheerleaders for Homo sapiens, who believe humans are the pinnacle of evolution and that whatever “we” do is therefore de facto superior.
  3. Which brings us to the question, if human abilities are superior, why must we compensate for our lack of sensory, cognitive and physical abilities by inventing technology? The average “know-it-all” American CONSUMES the products invented and developed by a handful of creative people in each generation. Knowledge is purchased in the form of “gadgets” that for the most part, do not educate, but distract the average individual from pursuing direct experience and interaction with the environment.
  4. Which means, “we” cognitive masterminds are taking a whole lot of credit for adaptations that are INHERITED from our “inferior, stupid, ancestors” who over the previous 200,000 years, not only survived, but built the culture that made us modern humans –
  5. Which comes to the egregious error of ignoring context: Compare an imaginary modern social human who exists in a context that is utterly dependent on manmade systems that supply food, water, shelter, medical care, economic opportunity, government control, cultural benefits and instant communication with a Neanderthal (or archaic Homo sapiens) whose environment is a largely uninhabited wilderness. One of the favorite clichés of American entertainment is “Male Monsters of Survival” cast into the wilderness (with a film crew and helicopter on call) recreating the Myth of Homo sapiens, Conqueror of Nature. These overconfident males are often lucky to last a week; injuries are common, starvation the norm.
  6. If visual thinking is so inferior, why do hunters rely on airplane and helicopter “flyovers” to locate game, and now drones, and add scopes, binoculars, game cameras,  and a multitude of “sensory substitutes” to their repertoire? Ever been to a sporting goods store? They’re packed with every possible gadget that will improve the DIMINISHED senses and cognitive ability of modern social humans to function outside of manmade environments and to be successful hunters and fishermen.
  7. As for forcing Neanderthals into extinction, modern social humans could accomplish this: we have a horrific history of wiping out indigenous peoples and continue to destroy not only human groups, but hundreds of species and the environments they are adapted to. Modern social humans could bomb Neanderthals “back to the Stone Age”. Kill them off with chemical weapons, shred them with cluster bombs, the overkill of targeted assassination and nuclear weapons.
  8. BUT there is no proof that archaic Homo sapiens “extincted” Homo neanderthalensis. We know that in some areas they lived cheek by jowl, had sex and produced offspring, but modern social humans maintain that Neanderthals were so “socially stupid” that the entire species fell to the magnificence of party-hearty Homo sapiens. Actually, a modern social human would have difficulty distinguishing the two fearsome types: the challenge may have been like distinguishing a polar bear from a grizzly – two closely related bears adapted to different environments – a distinction that is rather irrelevant if you’re facing down either one with a sharp stick.
  9. The myth that Homo sapiens individuals outside of Africa “contain” a variable 1-4% of Neanderthal DNA, with specific SNPs (“snips”) related to various functions in modern humans, is incomplete. Rarely included in articles about how Homo sapiens and Neanderthals are connected are whole-genome sequencing results, which show that overall the Homo sapiens genome, even now, is all but identical to the Neanderthal genome. This is logical: chimpanzees and humans diverged 5-6 m.y.a., and yet the human and chimp genomes still share about 99% of their DNA. How similar, then, must the Neanderthal and Denisovan genomes be to ours? This is a simple math question.
  10. What we need to compare is the Neanderthal genome and the ARCHAIC Homo sapiens genome – two groups of humans who were contemporaries.
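That “simple math question” can be sketched in a few lines. This is a deliberately crude back-of-envelope estimate: it assumes sequence differences accumulate roughly linearly with time since divergence, and it uses the round numbers from the text (99% human-chimp identity, 6 m.y.a. chimp split) plus a commonly cited ~0.6 m.y.a. human-Neanderthal split, all as illustrative assumptions.

```python
# Back-of-envelope: scale the human-chimp genome difference down to the
# much more recent human-Neanderthal split, assuming differences
# accumulate roughly linearly with time. Round illustrative numbers only.

chimp_split_mya = 6.0     # human-chimp divergence, ~5-6 m.y.a. (per the text)
neand_split_mya = 0.6     # human-Neanderthal divergence, an assumed ~0.6 m.y.a.
chimp_identity = 0.99     # shared DNA, human vs chimp (per the text)

chimp_difference = 1.0 - chimp_identity
neand_difference = chimp_difference * (neand_split_mya / chimp_split_mya)
neand_identity = 1.0 - neand_difference

print(f"Expected human-Neanderthal whole-genome identity: ~{neand_identity:.1%}")
```

Under these assumptions the two genomes come out around 99.9% identical overall, which is the point being made: the famous “1-4% Neanderthal DNA” refers to a thin layer of recently introgressed variants sitting on top of a genome that is nearly identical to begin with.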

 

 

 

Human Footprints / Visual Geology

“Experiential geology” One of my favorite aspects of understanding geologic processes is the direct experience of “seeing” the same type of manifestations of physical “acts” today as were recorded in the rock record as fossil traces, impressions and patterns on specific days and times millions of years ago. Although these are my own refrozen boot prints from previous days, in principle they are no different than any track way made by any ancestral biped. I imagine some familiar, and yet alien creature, under the same sun, walking alone, or with another, and try to envision where they were going, and why, but I can’t assume that their thoughts were like mine: that their experience of the environment and each other was modern in any way. Geology sticks to physical facts, processes, and results. What I experience in these moments is mine; it is not the experience of a creature who “walked similarly to me” millions of years ago.

Perhaps they felt the sun warming their backs, the mud and water squishing between their toes, the effort it took to “not get stuck” and looked around for a less muddy path; maybe they didn’t. There were no “hiking trails”; no camp grounds with electricity and running water; no place to clean up; no home. We really can’t imagine a planet undivided by human landscape schemes; roads, fences, fields and grids; maps and satellite photos. We have a compulsion to “know where we are”.  We really can’t imagine a “human-like brain” that is not likewise divided, reduced and confined by ideas to one prescription for living. One perception of reality.

I get the best of both some days, thanks to having studied geology. A ‘snippet’ of a lost species, who walked like me, enters into my day, but it lived as a natural animal.

My question is always, Did its kind perceive beauty? Not some elaborate description of beauty, but the sensation of “rightness” – proportion, pattern, color, detail in their surroundings; the changes  made by light, by night, by dawn and the pleasure this creates? Or were they simply hungry, anxious, stressed; on guard, uncomfortable and slogging through a muddy stretch of ground toward a bit of shade, wary of ever-present predators?

In a way, I prefer not to know. I’m happy to have my big 4 WD truck parked nearby, equipped with a so-so heater and defroster. A warm house to return to with food waiting in the fridge, and a stove on which to cook it. A natural gas “campfire” and lights. And most of all, hot water on demand to unfreeze my fingers and toes.

______________________________________________________________________________

SEE: https://aspergerhuman.wordpress.com/2017/11/12/geologists-discover-5-7-myo-human-like-footprints-crete/

Charles Lyell (1797-1875), the famed Scottish geologist and paleontologist, befriended the young Charles Darwin and strongly influenced his thought. In particular, Darwin’s reading of Lyell’s Principles of Geology prompted him to think of evolution as a slow process in which small changes gradually accumulate over immense spans of time.

In this founding document of modern geology, Lyell emphasized natural law. It makes sense, he said, to assume that geological processes acting in the past were much the same as those we see today — forces such as sedimentation in rivers, erosion by wind, or deposition of ash and lava by volcanic eruptions. This is the principle of uniformitarianism, the reasonable assumption that the forces that acted in the past are of the same sort as those we see acting today.

In emphasizing these natural processes, he undermined the claims of earlier geologists, many of whom had a distinct tendency to explain geological formations in terms of biblical floods. In the same way Darwin, who took a copy of Lyell’s Principles around the world with him on the voyage of the Beagle, constructed an explanation of the origin of living things in terms of natural processes.

 

Energy use by Eem Neanderthals / Bent Sørensen, Roskilde University

doi:10.1016/j.jas.2009.06.003 Journal of Archaeological Science


http://energy.ruc.dk/Energy%20use%20by%20Eem%20Neanderthals.pdf

Bent Sørensen, Department of Environmental, Social and Spatial Change, Roskilde University, DK-4000 Roskilde, Denmark.

_______________________________________________________________________________

Abstract

An analysis of energy use by Neanderthals in Northern Europe during the mild Eem interglacial period is carried out, with consideration of the metabolic energy production required to compensate for energy losses during sleep, at daily settlement activities and during hunting expeditions, including transport of food from slain animals back to the settlement. Additional energy sources for heat, security and cooking are derived from fireplaces in the open or within shelters such as caves or huts. The analysis leads to insights not available from archaeological findings, which are mostly limited to durable items such as those made of stone: even during the benign Eem period, Neanderthals faced a considerable heat-loss problem. Wearing tailored clothes, or some similar measure, was necessary for survival. An animal skin across the shoulder would not have sufficed to survive even average cold winter temperatures and the body cooling caused by wind convection. Clothes, and particularly footwear, had to be sewn together tightly in order to prevent intrusion of water or snow. The analysis of hunting activity as it unfolds in real time further shows that during summer warmth, transport of meat back to the base settlement would not have been possible without some technique to keep the meat from rotting. The only likely technique is meat drying at the killing site, which indicates further skills in Neanderthal societies that have not been identified by other routes of investigation.

_______________________________________________________________________________

1. Introduction and background

The Neanderthals had an average body mass above that of modern humans, a sturdier bone structure and a shorter stature. Food was primarily obtained by hunting big game. The aim of the present paper is to explore the energy requirements of Neanderthals during the warm Eem interglacial period (around 125 ky BP) and, based on such analysis, to discuss the need for clothes and footwear, as well as methods for food conservation and preparation. The climatic environment is taken as that of Northern Europe, using Eem temperature data from Bispingen, close to the Neanderthal site Lehringen near Hamburg (Kühl and Litt, 2007). The climatic conditions would be similar in most of Northern Germany, Belgium and Denmark, while Eastern Europe would have slightly colder winters, as would Finland. Some 30 European Neanderthal sites dating from the Eem, with roughly equal shares in Southern, Middle and Northern Europe, are listed by Wenzel (2007). Traces of seasonal presence have been found at Hollerup, Denmark (Møhl-Hansen, 1954) and at Susiluola Cave, Finland (Schulz, 2001, 2006), indicating that Neanderthals had a high mobility. Assuming a group size of 25 (Hassan, 1981), the total European Neanderthal population at any given time within the Eem period could have been around 1000, depending on how many sites were simultaneously populated and what fraction of the true settlement count is represented by the sites that survived and were found. Patou-Mathis (2000) lists 73 Eem settlement levels in Middle and Northern Europe, indicating that some sites were re-occupied at different times within the Eem period. Throughout their presence in Europe, the diet of Neanderthals consisted mainly of meat. Large herbivores seem to have been actively hunted rather than scavenged (Bocherens et al., 2005). The Neanderthal hunting strategy was to specialise and concentrate on a few large herbivorous mammal species, among which horse (Equus sp.), red deer (Cervus elaphus), woolly rhinoceros (Coelodonta antiquitatis), woolly mammoth (Mammuthus primigenius) and bison (Bison priscus) were present both during the Eem interglacial and the adjacent colder periods. During the Eem, additional forest-based species appeared, and the abundance of species preferring open space, such as mammoth, was lower than in adjacent periods: mammoth is found in 22.5% of the Eem-occupied site levels considered by Patou-Mathis (2000), but in 50-60% of the levels belonging to adjacent time periods. Mammoth is selected in this study as an example for investigating the energy use involved in Neanderthal hunting, slaying and readying meat for eating, because its size demands a maximum of logistic skill from the hunters. However, evidence of mammoth presence in North-Western Europe during the Eem period is nearly absent.
(Correspondence: boson@ruc.dk, Bent Sørensen)
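The "around 1000" figure above can be reproduced with a back-of-the-envelope calculation. A minimal sketch follows; the occupancy and survival fractions are illustrative assumptions of mine, not values stated in the paper:

```python
def population_estimate(group_size, sites_found, occupancy_fraction, survival_fraction):
    """Scale the observed site count up by the fraction of true sites
    that survived and were found, then down by the fraction occupied
    at any one time, and multiply by the people per occupied site."""
    true_sites = sites_found / survival_fraction
    return group_size * true_sites * occupancy_fraction

# E.g. 30 known Eem sites, assuming half of all sites were preserved
# and found, and two-thirds occupied simultaneously:
est = population_estimate(25, 30, 2 / 3, 0.5)
print(round(est))  # -> 1000
```

The point of the sketch is that the estimate scales linearly in each assumed fraction, which is why the paper hedges the total with "depending on" clauses.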
Mammoth remains have been found in Grotte Scladina, Belgium (in level 4, dated by thermoluminescence to between 106 and 133 ky BP, and in level 5, dated to 110-150 ky BP), but according to magnetic-susceptibility relative dating they fall in the lowest part of the uncertainty intervals and are thus possibly younger than marine isotope stage 5e, usually associated with the Eem period (Döppes et al., 2008; Ellwood et al., 2004). The scarcity of mammoth finds at the settlement sites may be explained by only meat, not bones, being carried back to camp after a kill (Patou-Mathis, 2000; Balter and Simon, 2006). A straight-tusked elephant, a species preferring the warmer Eem environment, has been found at Lehringen with a Neanderthal spear through its rib, so this species could also have served as an example of extreme-weight prey. In the assessment made in the present study, the exemplified species is represented by its meat mass alone, and results (such as the energy used in carrying) scale directly with the mass of the parts transported back to the group. Large herbivores were hunted and killed by a party of several Neanderthals thrusting spears into the animal's body (Wenzel, 2007). Spears were rarely thrown, as deduced from the absence of the shoulder and upper-arm bone asymmetries characteristic of more recent hunters using spear-throwing techniques (Rhodes and Churchill, 2009). The Neanderthal population density was low enough to make big-game hunting sustainable, and famines due to insufficient food would not normally occur (Harpending, 1998).
A similar analysis of the need for clothes and footwear has been made for marine isotope stage 3, around 40 ky BP (Aiello and Wheeler, 2004), however with some unrealistic features: it uses subjective wind-chill temperatures (Lee, 2001) in a heat-loss expression valid for real temperatures (to which heat loss by the action of wind could have been added in the way it is done below), and it uses an arbitrary increase of the average metabolic rate to three times the basal one. The latter suggests an ability of the Neanderthal body to regulate its metabolism according to ambient temperature (Steegmann et al., 2002), in contrast to the body of modern humans, where this can be done only to a very minor extent in infants, through adrenaline stimulation of a special fat deposit in brown adipose tissue (BAT) near the shoulders (Farmer, 2009). Steegmann et al. (2002) suggest that the recent Tierra del Fuego Ona (Selk’nam) aborigines had a particular cold adaptation and that the Neanderthals might also have had it. However, no search for BAT in Ona remains has to my knowledge been made, and the photograph reproduced in Steegmann et al. (2002) shows people posing for a 1901 picture, dressed in heavy fur and similar hats, but with bare feet or moccasins. To make inferences, one would need time distributions of the clothing used and the corresponding temperatures, and to
compare with Neanderthals, the differences between the Ona, who derived their food from the camel-like guanaco, from gathering and from fishing (e.g. of seals), and the Neanderthals, who obtained food through big-game hunts involving walking or running over extended distances and durations, should be kept in mind. During cold periods, the Neanderthals would be better compared with present or recent Inuit populations, who use several layers of clothing and heavy, furred footwear. Had the Neanderthals really possessed a genetic advantage in cold adaptation that modern humans lack, it becomes more difficult to understand why modern humans, and not Neanderthals, survived the cooling period leading up to the last glacial maximum, 40-20 ky BP. Without ad hoc assumptions about the genetic make-up of Neanderthals, one must assume that increasing metabolism requires muscle work, so that a sleeping, freezing Neanderthal would have to get up and swing the arms, jump up and down or otherwise perform the muscle work that raises the level of metabolism and creates the associated heat that can keep the body warm. Generating a total of 300 W of heat (including the basal metabolic heat production during sleep of 80-90 W) requires about 100 W of muscle work (Sørensen, 2004, p. 17). The analysis of energy production and use presented below is divided into two parts: first the energy balance is evaluated during sleep, and then the various components of energy production and use during the activities filling the waking time of Eem Neanderthals.
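The 300 W figure can be checked with a rough heat-balance sketch. The mechanical efficiency value used here (eta of about 0.3, so each watt of external work releases roughly (1 - eta)/eta, about 2.3 W, of waste heat) is my illustrative assumption, not a number taken from the paper:

```python
def total_heat(basal_w, work_w, eta=0.3):
    """Basal metabolic heat plus the waste heat from sustained muscle
    work, all in watts; eta is the assumed mechanical efficiency."""
    return basal_w + work_w * (1 - eta) / eta

# ~85 W basal metabolism plus ~100 W of muscle work:
print(round(total_heat(85, 100)))  # -> 318
```

With these assumed numbers the sketch gives roughly 320 W, the same order as the ~300 W total quoted from Sørensen (2004); the exact figure depends on the efficiency assumed.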

much more…

________________________________________

Even with the “fat” gene, heavy clothing is essential.

And from genetic studies:

Arctic Inuit, Native American cold adaptations may originate from extinct hominids

https://www.sciencedaily.com/releases/2016/12/161220175552.htm Full Article

December 20, 2016, Molecular Biology and Evolution (Oxford University Press)

Summary:
In the Arctic, the Inuit have adapted to severe cold and a predominantly seafood diet. Now, a team of scientists has followed up on the first natural-selection study in the Inuit to trace back the origins of these adaptations. The results provide convincing evidence that the Inuit variant of the TBX15/WARS2 region first came into modern humans from an archaic hominid population, likely related to the Denisovans.

Jared Diamond / Hunter-Gatherer Parenting

Note: Looking back at having to “grow up” in modern American culture, and the highly negative experience that situation produced, these “HG” practices sound “familiar” in terms of what I felt I needed as a child but didn’t get, EXCEPT FROM my Asperger father.
This “Modern American Model” of reproduction has become HORRIFYING: How many babies are born premature? An estimated 15 million babies worldwide are born too early every year, more than 1 in 10. Almost 1 million children die each year due to complications of preterm birth. Many survivors face a lifetime of disability, including learning disabilities and visual and hearing problems. (World Health Organization)

The new normal: Preemies are “So Cute!” And indeed “A Gift from God”

 ________________

Best Practices for Raising Kids? Look to Hunter-Gatherers

From The World Until Yesterday by Jared Diamond

By Jared Diamond On 12/17/12 In NEWSWEEK

How We Hold Them: Constant contact between caregiver and baby may contribute to the child’s improved neuromotor development.

On one of my visits to New Guinea, I met a young man named Enu, whose life story struck me then as remarkable. Enu had grown up in an area where child-rearing was extremely repressive, and where children were heavily burdened by obligations and by feelings of guilt. By the time he was 5 years old, Enu decided that he had had enough of that lifestyle. He left his parents and most of his relatives and moved to another tribe and village, where he had relatives willing to take care of him. There, Enu found himself in an adoptive society with laissez-faire child-rearing practices at the opposite extreme from his natal society’s practices. Young children were considered to have responsibility for their own actions, and were allowed to do pretty much as they pleased. For example, if a baby was playing next to a fire, adults did not intervene. As a result, many adults in that society had burn scars, which were legacies of their behavior as infants.

Both of those styles of child-rearing would be rejected with horror in Western industrial societies today. But the laissez-faire style of Enu’s adoptive society is not unusual by the standards of the world’s hunter-gatherer societies, many of which consider young children to be autonomous individuals whose desires should not be thwarted, and who are allowed to play with dangerous objects such as sharp knives, hot pots, and fires.

I find myself thinking a lot about the New Guinea people with whom I have been working for the last 49 years, and about the comments of Westerners who have lived for years in hunter-gatherer societies and watched children grow up there. Other Westerners and I are struck by the emotional security, self-confidence, curiosity, and autonomy of members of small-scale societies, not only as adults but already as children. We see that people in small-scale societies spend far more time talking to each other than we do, and they spend no time at all on passive entertainment supplied by outsiders, such as television, videogames, and books. We are struck by the precocious development of social skills in their children. These are qualities that most of us admire, and would like to see in our own children, but we discourage development of those qualities by ranking and grading our children and constantly telling them what to do. The adolescent identity crises that plague American teenagers aren’t an issue for hunter-gatherer children. The Westerners who have lived with hunter-gatherers and other small-scale societies speculate that these admirable qualities develop because of the way in which their (HG) children are brought up: namely, with constant security and stimulation, as a result of the long nursing period, sleeping near parents for several years, far more social models available to children through allo-parenting, far more social stimulation through constant physical contact and proximity of caretakers, instant caretaker responses to a child’s crying, and the minimal amount of physical punishment. (And then there is the outrageous amount of emotional and “psychic” punishment that Americans rely on for “child-training” – the Lab Rat model.)

Keep Them Close

In modern industrial societies today, we follow the rabbit-antelope pattern: the mother or someone else occasionally picks up and holds the infant in order to feed it or play with it, but does not carry the infant constantly; the infant spends much or most of the time during the day in a crib or playpen; and at night the infant sleeps by itself, usually in a separate room from the parents. However, we probably continued to follow our ancestral ape-monkey model throughout almost all of human history, until within the last few thousand years. Studies of modern hunter-gatherers show that an infant is held almost constantly throughout the day, either by the mother or by someone else. When the mother is walking, the infant is held in carrying devices, such as the slings of the !Kung, string bags in New Guinea, and cradle boards in the north temperate zones. Most hunter-gatherers, especially in mild climates, have constant skin-to-skin contact between the infant and its caregiver. In every known society of human hunter-gatherers and of higher primates, mother and infant sleep immediately nearby, usually in the same bed or on the same mat. A cross-cultural sample of 90 traditional human societies identified not a single one with mother and infant sleeping in separate rooms: that current Western practice is a recent invention responsible for the struggles at putting kids to bed that torment modern Western parents. American pediatricians now recommend not having an infant sleep in the same bed with its parents, because of occasional cases of the infant ending up crushed or else overheating; but virtually all infants in human history until the last few thousand years did sleep in the same bed with the mother and usually also with the father, without widespread reports of the dire consequences feared by pediatricians. That may be because hunter-gatherers sleep on the hard ground or on hard mats; a parent is more likely to roll over onto an infant in our modern soft beds.

How They Play: Treating children as qualitatively similar to grown-ups could help them develop into tough and resilient adults. (My father always did this, except in matters of physical safety) 

Even when not sleeping !Kung infants spend their first year of life in skin-to-skin contact with the mother or another caregiver for 90 percent of the time. A !Kung child begins to separate more frequently from its mother after the age of 1 ½, but those separations are initiated almost entirely by the child itself, in order to play with other children. The daily contact time between the !Kung child and caregivers other than the mother exceeds contact time (including contact with the mother) for modern Western children.

One of the commonest Western devices for transporting a child is the stroller, which provides no physical contact between the baby and the caregiver. In many strollers, the infant is nearly horizontal, and sometimes facing backward. Hence the infant does not see the world as its caregiver sees the world. In recent decades in the United States, devices for transporting children in an upright position, such as baby carriers, backpacks, and chest pouches, have become more common, but many of those devices have the child facing backward. In contrast, traditional carrying devices, such as slings or holding a child on one’s shoulders, usually place the child vertically upright, facing forward, and seeing the same world that the caregiver sees. The constant contact even when the caretaker is walking, the constant sharing of the caregiver’s field of view, and transport in the vertical position may contribute to !Kung infants being advanced (compared to American infants) in some aspects of their neuromotor development.

In warm climates, it is practical to have constant skin-to-skin contact between a naked baby and a mostly naked mother. That is more difficult in cold climates. Hence about half of traditional societies, mostly those in the temperate zones, swaddle their infants, i.e., wrap the infant in warm fabric and often strap the infant to a cradle board. A Navajo infant spends 60 to 70 percent of its time on a cradle board for the first six months of life. Cradle boards were formerly also common practice in Europe but began to disappear there a few centuries ago.

To many of us moderns, the idea of a cradle board or swaddling is abhorrent—or was, until swaddling recently came back into vogue. The notion of personal freedom means a lot to us, and a cradle board or swaddling undoubtedly does restrict an infant’s personal freedom. We are prone to assume that cradle boards or swaddling retard a child’s development and inflict lasting psychological damage. In fact, there are no personality or motor differences, or differences in age of independent walking, between Navajo children who were or were not kept on a cradle board, or between cradle-boarded Navajo children and nearby Anglo-American children. The probable explanation is that, by the age that an infant starts to crawl, the infant is spending half of its day off of the cradle board anyway, and most of the time that it spends on the cradle board is when the infant is asleep. Hence it is argued that doing away with cradle boards brings no real advantages in freedom, stimulation, or neuromotor development. Typical Western children sleeping in separate rooms, transported in baby carriages, and left in cribs during the day are often socially more isolated than are cradle-boarded Navajo children.

There has been a long debate among pediatricians and child psychologists about how best to respond to a child’s crying. Of course, the parent first checks whether the child is in pain or really needs some help. But if there seems to be nothing wrong, is it better to hold and comfort a crying child, or should one put down the child and let it cry until it stops, however long that takes? Does the child cry more if its parents put the child down and walk out of the room, or if they continue to hold it?

Observers of children in hunter-gatherer societies commonly report that, if an infant begins crying, the parents’ practice is to respond immediately. For example, if an Efe Pygmy infant starts to fuss, the mother or some other caregiver tries to comfort the infant within 10 seconds. If a !Kung infant cries, 88 percent of crying bouts receive a response within 3 seconds, and almost all bouts receive a response within 10 seconds. Mothers respond to !Kung infants by nursing them, but many responses are by non-mothers (especially other adult women), who react by touching or holding the infant. The result is that !Kung infants spend at most one minute out of each hour crying, mainly in crying bouts of less than 10 seconds—half that measured for Dutch infants. Many other studies show that 1-year-old infants whose crying is ignored end up spending more time crying than do infants whose crying receives a response.

Share the Parenting

What about the child-rearing contribution of caregivers other than the mother and the father? In modern Western society, a child’s parents are typically by far its dominant caregivers. The role of “allo-parents”—i.e., individuals who are not the biological parents but who do some caregiving—has even been decreasing in recent decades, as families move more often and over longer distances, and children no longer have the former constant availability of grandparents and aunts and uncles living nearby. This is of course not to deny that babysitters, schoolteachers, grandparents, and older siblings may also be significant caregivers and influences. But allo-parenting is much more important, and parents play a less dominant role, in traditional societies.

In hunter-gatherer bands the allo-parenting begins within the first hour after birth. Newborn Aka and Efe infants are passed from hand to hand around the campfire, from one adult or older child to another, to be kissed, bounced, and sung to and spoken to in words that they cannot possibly understand. Anthropologists have even measured the average frequency with which infants are passed around: it averages eight times per hour for Efe and Aka Pygmy infants. Hunter-gatherer mothers share care of infants with fathers and allo-parents, including grandparents, aunts, great-aunts, other adults, and older siblings. Again, this has been quantified by anthropologists, who have measured the average number of caregivers: 14 for a 4-month-old Efe infant, seven or eight for an Aka infant, over the course of an observation period of several hours.

Daniel Everett, who lived for many years among the Piraha Indians of Brazil, commented, “The biggest difference [of a Piraha child’s life from an American child’s life] is that Piraha children roam about the village and are considered to be related to and partially the responsibility of everyone in the village.” Yora Indian children of Peru take nearly half of their meals with families other than their own parents. The son of American missionary friends of mine, after growing up in a small New Guinea village where he considered all adults as his “aunts” or “uncles,” found the relative lack of allo-parenting a big shock when his parents brought him back to the United States for high school.

In small-scale societies, the allo-parents are materially important as additional providers of food and protection. Hence studies around the world agree in showing that the presence of allo-parents improves a child’s chances for survival. But allo-parents are also psychologically important, as additional social influences and models beyond the parents themselves. Anthropologists working with small-scale societies often comment on what strikes them as the precocious development of social skills among children in those societies, and they speculate that the richness of allo-parental relationships may provide part of the explanation.

Similar benefits of allo-parenting operate in industrial societies as well. Social workers in the United States note that children gain from living in extended, multigenerational families that provide allo-parenting. Babies of unmarried low-income American teenagers, who may be inexperienced or neglectful as mothers, develop faster and acquire more cognitive skills if a grandmother or older sibling is present, or even if a trained college student just makes regular visits to play with the baby. The multiple caregivers in an Israeli kibbutz or in a quality day-care center serve the same function. I have heard many anecdotal stories, among my own friends, of children who were raised by difficult parents but who nevertheless became socially and cognitively competent adults, and who told me that what had saved their sanity was regular contact with a supportive adult other than their parents, even if that adult was just a piano teacher whom they saw once a week for a piano lesson.

Give Them More Freedom

How much freedom or encouragement do children have to explore their environment? Are children permitted to do dangerous things, with the expectation that they must learn from their mistakes? Or are parents protective of their children’s safety, and do parents curtail exploration and pull kids away if they start to do something that could be dangerous?

The answer to this question varies among societies. However, a tentative generalization is that individual autonomy, even of children, is a more cherished ideal in hunter-gatherer bands than in state societies, where the state considers that it has an interest in its children, does not want children to get hurt by doing as they please, and forbids parents to let a child harm itself.

That theme of autonomy has been emphasized by observers of many hunter-gatherer societies. For example, Aka Pygmy children have access to the same resources as do adults, whereas in the U.S. there are many adults-only resources that are off-limits to kids, such as weapons, alcohol, and breakable objects. Among the Martu people of the Western Australian desert, the worst offense is to impose on a child’s will, even if the child is only 3 years old. The Piraha Indians consider children just as human beings, not in need of coddling or special protection. In Everett’s words, “They [Piraha children] are treated fairly and allowance is made for their size and relative physical weakness, but by and large they are not considered qualitatively different from adults … This style of parenting has the result of producing very tough and resilient adults who do not believe that anyone owes them anything. Citizens of the Piraha nation know that each day’s survival depends on their individual skills and hardiness … Eventually they learn that it is in their best interests to listen to their parents a bit.”

Some hunter-gatherer and small-scale farming societies don’t intervene when children or even infants are doing dangerous things that may in fact harm them, and that could expose a Western parent to criminal prosecution. I mentioned earlier my surprise, in the New Guinea Highlands, to learn that the fire scars borne by so many adults of Enu’s adoptive tribe were often acquired in infancy, when an infant was playing next to a fire, and its parents considered that child autonomy extended to a baby’s having the right to touch or get close to the fire and to suffer the consequences. Hadza infants are permitted to grasp and suck on sharp knives. Nevertheless, not all small-scale societies permit children to explore freely and do dangerous things.

On the American frontier, where population was sparse, the one-room schoolhouse was a common phenomenon. With so few children living within daily travel distance, schools could afford only a single room and a single teacher, and all children of different ages had to be educated together in that one room. But the one-room schoolhouse in the U.S. today is a romantic memory of the past, except in rural areas of low population density. Instead, in all cities, and in rural areas of moderate population density, children learn and play in age cohorts. School classrooms are age-graded, such that most classmates are within a year of each other in age. While neighborhood playgroups are not so strictly age-segregated, in densely populated areas of large societies there are enough children living within walking distance of each other that 12-year-olds don’t routinely play with 3-year-olds. (An Asperger child develops more quickly intellectually and “temperamentally” than children his or her age, and benefits from being with older children and adults) 

But demographic realities produce a different result in small-scale societies, which resemble one-room schoolhouses. A typical hunter-gatherer band numbering around 30 people will on the average contain only about a dozen preadolescent kids, of both sexes and various ages. Hence it is impossible to assemble separate age-cohort playgroups, each with many children, as is characteristic of large societies. Instead, all children in the band form a single multi-age playgroup of both sexes. That observation applies to all small-scale hunter-gatherer societies that have been studied. In such multi-age playgroups, both the older and the younger children gain from being together. The young children gain from being socialized not only by adults but also by older children, while the older children acquire experience in caring for younger children. That experience gained by older children contributes to explaining how hunter-gatherers can become confident parents already as teenagers. While Western societies have plenty of teenage parents, especially unwed teenagers, Western teenagers are suboptimal parents because of inexperience. However, in a small-scale society, the teenagers who become parents will already have been taking care of children for many years. (The American belief is that any female who is “capable” of becoming pregnant is automatically “ready” to be a good parent. Good mothering is “magically” installed by “God” in this supernatural female “mystique” so popular in the U.S. This is bizarre.)

Another phenomenon affected by multi-age playgroups is premarital sex, which is reported from all well-studied small hunter-gatherer societies. Most large societies consider some activities as suitable for boys, and other activities as suitable for girls. They encourage boys and girls to play separately, and there are enough boys and girls to form single-sex playgroups. But that’s impossible in a band where there are only a dozen children of all ages. Because hunter-gatherer children sleep with their parents, either in the same bed or in the same hut, there is no privacy. Children see their parents having sex. In the Trobriand Islands, one researcher was told that parents took no special precautions to prevent their children from watching them having sex: they just scolded the child and told it to cover its head with a mat. Once children are old enough to join playgroups of other children, they make up games imitating the various adult activities that they see, so of course they have sex games, simulating intercourse.

Adults either don’t interfere with child sex play at all or, as !Kung parents do, discourage it when it becomes obvious, while considering child sexual experimentation inevitable and normal. It’s what the !Kung parents themselves did as children, and the children are often playing out of sight where the parents don’t see their sex games. Many societies, such as the Siriono and Piraha and New Guinea Eastern Highlanders, tolerate open sexual play between adults and children.

What We Can Learn

Let’s reflect on differences in child-rearing practices between small-scale societies and state societies. Of course, there is much variation among industrial state societies in the modern world. Ideals and practices of raising children differ between the U.S., Germany, Sweden, Japan, and an Israeli kibbutz. Within any given one of those state societies, there are differences between farmers, urban poor people, and the urban middle class, and differences from generation to generation within a society.

Nevertheless, there are still some basic similarities among all of those state societies, and some basic differences between state and nonstate societies. State governments have their own separate interests regarding the state’s children, and those interests do not necessarily coincide with the interests of a child’s parents. Small-scale nonstate societies also have their own interests, but a state society’s interests are more explicit, administered by more centralized top-down leadership, and backed up by well-defined enforcing powers. All states want children who, as adults, will become useful and obedient citizens, soldiers, and workers. States tend to object to having their future citizens killed at birth, or permitted to get burned by fires. States also tend to have views about the education of their future citizens, and about their citizens’ sexual conduct.

Naturally, I’m not saying that we should emulate all child-rearing practices of hunter-gatherers. I don’t recommend that we return to the hunter-gatherer practices of selective infanticide, high risk of death in childbirth, and letting infants play with knives and get burned by fires. Some other features of hunter-gatherer childhoods, like the permissiveness of child sex play, feel uncomfortable to many of us, even though it may be hard to demonstrate that they really are harmful to children. Still other practices are now adopted by some citizens of state societies, but make others of us uncomfortable—such as having infants sleep in the same bedroom or in the same bed as parents, nursing children until age 3 or 4, and avoiding physical punishment of children.

But some other hunter-gatherer child-rearing practices may fit readily into modern state societies. It’s perfectly feasible for us to transport our infants vertically upright and facing forward, rather than horizontally in a pram or vertically but facing backward in a pack. We could respond quickly and consistently to an infant’s crying, practice much more extensive allo-parenting, and have far more physical contact between infants and caregivers. We could encourage self-invented play of children, rather than discourage it by constantly providing complicated so-called educational toys. We could arrange for multi-age child playgroups, rather than playgroups consisting of a uniform age cohort. We could maximize a child’s freedom to explore, insofar as it is safe to do so. (In American life and education, the forbidden field of exploration is intellectual investigation, practical analysis and independent thought.)

But our impressions of greater adult security, autonomy, and social skills in small-scale societies are just impressions: they are hard to measure and to prove. Even if these impressions are real, it’s difficult to establish that they are the result of a long nursing period, allo-parenting, and so on. At minimum, though, one can say that hunter-gatherer rearing practices that seem so foreign to us aren’t disastrous, and they don’t produce societies of obvious sociopaths. Instead, they produce individuals capable of coping with big challenges and dangers while still enjoying their lives. The hunter-gatherer lifestyle worked at least tolerably well for the nearly 100,000-year history of behaviorally modern humans. Everybody in the world was a hunter-gatherer until the local origins of agriculture around 11,000 years ago, and nobody in the world lived under a state government until 5,400 years ago. The lessons from all those experiments in child-rearing that lasted for such a long time are worth considering seriously.


The final question: What do we do with the pain?

The modern social human separation of “good and evil”

To answer that question, we have to step back to the difficult realization that we don’t make the rules; not for nature; not for the universe. No one likes this ultimate situation. It’s likely that 99% of human effort occurs in opposition to this truth. We separate ourselves from “the animals” in order to deny that we, like them, cannot control “the monster”. To deny our animal identity is to deny that we are animals at all. We are special; we can refuse to “belong” to life; plead our case, prove our worth, worship “them” or “him” or the sun and rain; fire, blood, the magic intercession of words and numbers. To no end.

What makes humans “special”? Merely our insistence that we are. We have yet to prove any such exemplary status.

Early humans had no elaborate delusions: they were animals, and they needed whatever materials, skills, advantages they could grasp. That included the “power” of other animals, each type of which had its special abilities. Who wouldn’t desire to be like the great predators who snatched the life of the animals around them, including the “bipedal” apes? Or the birds that garnered food from far and wide; from high and low, from tree tops and lakes, rivers and the oceans. Or the insects that massed by the thousands, or millions, to build fortresses from their own bodies; who stored food for lean times; who migrated en masse to devour seasonal or random opportunities that presented abundant food that could fuel reproduction.

The animals always seemed to know when and where their food sources would appear. It would have been logical to think that animals “knew things” in advance of events to come; that they “talked” to each other, and acted often in concert. Migrations were obvious patterns that occurred; many clues announced the timing of movements great and small. Rain, no rain, temperature changes, plants blooming or dying off, birds coming and going, clouds of dust and distant “earth” noise, the appearance of predators, the mating rituals of species, the birth of a new generation, and the hard times of drought, die offs, invisible disease, and sudden death that happened to living forms.

Their intelligence wasn’t verbal, that is “conscious”. They didn’t sit around a conference table discussing what to do; taking notes, delegating tasks; dithering over the social consequences to themselves.

Our ancestors asked the wise man or woman of the group; deferred to the elders, whose value was unquestioned: there they were, in the flesh, having survived to great age – proof that they knew what the “rules” of nature are and how to use them to survive.  They could point to drawings or petroglyphs made by ancestors; hold pieces of interesting materials and ponder their usefulness; learn from artifacts containing information old and lost in time, but “still true” – telling of water holes, patches of edible plants, good hunting sites, the location of safe boundaries and where the unknown began. And details of specific events, denoted by marks we no longer understand. We assume that they were “stupid, primitive, naïve” and always on the brink of extinction, because we devalue what they knew: Knowledge must be “written down” to be modern, sophisticated, and on the path to our current magnificence. To satisfy our arrogance, we attribute modern behavior retroactively to these “proto-people” in order to save them from extinction; we must magically ensure that they will become “us” – modern social humans.

A typical modern human asks, How could they possibly have survived without our monstrosity of (ironically) unwritten secret codes of socio-cultural behavior and our modern (overblown and incoherent) written-down religious law; our social systems of behavior control? In reality, a tangle of hierarchy that survives on inequality, brutality, and punishment, enforced by withholding necessary resources from “subhumans” – the great project of redistributing the “wealth of the planet” to a few individuals. And at every opportune moment, genocide, war, and destruction of resources, knowledge, skills, ways of life millennia old; destruction of nature’s millions of manifestations.

Was existence before “word-based civilization” dangerous, brief, terrifying, painful, and totally incomprehensible according to the modern American “vision” of our way of life as being what God, Nature, the Founding Fathers, or a cosmic intelligence intended for us, alone? Of course. We are unable to imagine that “earth” existed before yesterday (the American idea of reality) or for even a few millennia of history, which, after all, consists solely of pyramid cultures; pot shards and clots of mud detailing trade transactions and the “me-me-me” ravings of sociopaths, who abused other humans – and of course the fantastic “loot” created by prior cultures, which adds to the fantasy that modern people invented everything valuable. Look what we did!

Inventions trivial, major and life-changing are paraded and conflated with the idea of human genius; the ridiculous assertion that people living today not only “created all this stuff” (instead of inheriting it) but also that human superiority is confirmed by the acts of purchasing “wizard-like” gadgets, by being a passenger or driver of a car, boat or airplane; by purchasing myriad products without effort online; by baking cookies and cupcakes, ordering pizza, and having our “waste” disposed of by someone else. These acts confirm that the “promise” of being awarded “top species status” has been fulfilled by Americans.

The wonders and superpowers of the ultimate human brain are magically distributed to individuals who make up an imaginary population called “mankind” (not really; this actually refers to certain (white) people granted “oversight and control” of everything, including nature).

One of the most astonishing “functions” of modern humans is to “pray for good things” to happen, without actually doing anything to prevent bad things from happening.

I’m sure our pre-civilized ancestors also prayed; but their attitude was one of commitment to a personal contribution of skill and courage to a group endeavor. And an awareness that “power” in nature is both dark and light, creative and destructive; mysterious, yes, but also predictable. A game; a challenge; a no-win predicament. But oh so rewarding when the challenge is met.

The question – What will I do with my share of brief and temporary life, a life that is both dark and light? – is ours to answer within the rules of nature, whether we acknowledge nature’s supremacy or not. It’s also the question that we avoid.

Denying pain as a consequence of being alive produces more pain, and keeps us addicted to the behavior of denial.

It seems to me that a few human belief systems point to the same conclusion: escape from reality, as generated by our subjective point of view and sentiment, is not possible as long as we believe that ‘nature’ divides pain and pleasure; that we can force nature to provide only what we demand: no pain.

Western civilization, like other great empires, is established on the belief that “pain and evil” can be eliminated by inflicting pain and evil on other people. The “proof” that this is a good idea is the gain in material wealth that results: we end up creating more pain than even nature requires.

Good and evil cupcakes: neotenic solution to the question of pain – a black and white division of reality.


Pleistocene Animals / Population Dynamics

The Field Museum of Natural History was a “kid magnet” in my day; I’m sure it still is. I remember the Egyptian mummies, which I didn’t like to be around because the odor they gave off was repulsive. The big dioramas and paintings attracted me, especially the giant fauna of Ice Age animals, which were labeled “extinct”. Some were shown as being hunted by Paleo-Indians; it seemed preposterous (and terribly brave – or an act of desperation) for a human with a wooden spear and stone point to do such a thing. Then there were the “cave” animals on display; they looked a lot like contemporary bears and big cats, but bigger and more ferocious. The word “cave” confused me. Being Asperger, I took this literally: humans occupied caves, too – did they have to “evict” or kill all the animals living there before moving in? I’m still not sure, but it seems that “cave” doesn’t refer to an animal occupying a cave, but to species that humans depicted in cave paintings. (Language again! So non-specific…)

As a “real” hunting scene, this is idiotic! Wild boar are EXTREMELY dangerous animals! No one would crouch on the ground or stand a few feet in front of one of these fast charging and deadly animals… And what about those “coyote-looking” dogs on leashes? Hmmm… And all that tender white skin?

Diorama ID: Mas d’Azil cave in France. The scene shows two Azilian men armed with wooden spears with flint lance-points at close quarters with an enraged wild boar defending his mate and two young pigs. The dogs are held by rawhide straps and they are straining forward at the leash. The painted background shows the peaks of the Pyrenees in the distance.

For the story of this and other Field Museum “cave men” dioramas:

https://www.fieldmuseum.org/science/blog/what-happened-caveman-dioramas

Le Mas-d’Azil cave, southwestern France, is the type site for the prehistoric Azilian culture. The Grotte du Mas d’Azil is a “supersite” for human habitation ca. 30,000 years ago, and is also a key site for the Magdalenian culture that preceded the Azilian.

For lots more info & photos see also: http://donsmaps.com/masdazil.html

_________________________________________________________________________________

Full PDF at: http://www.sciencedirect.com/science/article/pii/S0960982209013062

Ecological Change, Range Fluctuations and Population Dynamics during the Pleistocene

Apart from the current human-induced climate change, the Holocene is notable for its stable climate. In contrast, the preceding age, the Pleistocene, was a time of intensive climatic fluctuations, with temperature changes of up to 15°C occurring within a few decades. These climatic changes have substantially influenced both animal and plant populations. Until recently, the prevailing opinion about the effect of these climatic fluctuations on species in Europe was that populations survived glacial maxima in southern refugia and that populations died out outside these refugia. However, some of the latest studies of modern population genetics, the fossil record and especially ancient DNA reveal a more complex picture. There is now strong evidence for additional local northern refugia for a large number of species, including both plants and animals. Furthermore, population genetic analyses using ancient DNA have shown that genetic diversity and its geographical structure changed more often and in more unpredictable ways during the Pleistocene than had been inferred. Taken together, the Pleistocene is now seen as an extremely dynamic era, with rapid and large climatic fluctuations and correspondingly variable ecology. These changes were accompanied by similarly fast and sometimes dramatic changes in population size and extensive gene flow mediated by population movements. Thus, the Pleistocene is an excellent model case for the effects of rapid climate change, as we experience at the moment, on the ecology of plants and animals.

Excerpt: Clearly, these massive climatic and environmental changes significantly influenced the distribution and genetic diversity of plants and animals. The idea that, during times of adverse climate, species track their habitat goes back to Darwin [9], and the Pleistocene should represent an excellent opportunity to test this assumption. Generally, one would assume that Arctic species would expand their distribution southwards during colder times and that temperate species would expand northwards during warmer times. While this is straightforward in North America, with mountain chains, which represent partial barriers to range shifts, running from north to south, in Europe a level of complexity is added with mountain chains running from east to west and the available land mass becoming smaller to the south and being divided into several peninsulas bordering the Mediterranean. This geography, together with numerous studies that found geographical patterns in the genetic diversity of many species consistent with colonization of mid-latitude and northern Europe from the Iberian Peninsula, Italy and the Balkans (for review, see [10,11]) has resulted in the classical ‘refugium theory’, which proposes that temperate species survived the glacial maxima in southern refugia with little gene flow among them and colonized the more northern parts from there during interglacial times. While this model is theoretically sound and correct in many aspects, recent studies on both modern and, especially, ancient DNA diversity have shown that reality is much more complex and only very broadly follows a contraction–expansion model for population dynamics, with many additional processes complicating the picture [12–16].

Finally, the end of the Pleistocene is marked by a massive extinction of large land vertebrates across most of the world (Box 1), with the exception of Africa [17]. Although these extinctions have long been known, their causes remain controversial. While some authors blame humans [18], others deny any human influence, at least on the continents, although human-induced extinctions are widely accepted for islands [19]. Again, recent research has revealed a great deal about the timing and processes of these extinctions, showing that not only mammoths [20,21], but also giant deer (deceivingly known as Irish elk) [22] and some Caribbean ground sloths [23], survived into the Holocene. However, when it comes to the cause(s) of these extinctions, the verdict is still out.

Signature Pleistocene animals.

The Arctic fox (Alopex lagopus) is a small (smaller than the red fox) white or bluish-grey fox that lives today in the arctic northern hemisphere of the Holarctic from Greenland to Iceland and the Arctic regions of North America and Eurasia. During the Pleistocene it had a much wider distribution across the middle part of Europe and western Asia as well as in the large ice-free region of Beringia. It is primarily an inhabitant of the tundra and mountainous regions above the tree line, but it does penetrate into the taiga to some degree. Arctic foxes feed primarily on lemmings, but their diet also includes Arctic hare, eggs, and carrion scavenged from the leftovers of larger predators. A remarkable characteristic is their capability for long distance dispersal, with movements up to 2,000 km.

The brown bear (Ursus arctos) had and still has by far the largest habitat range of all living bear species. Formerly, its habitat extended across North Africa, Europe, the northern and middle parts of Asia and North America from Alaska down to Mexico. Due to intensive human persecution, it is now extinct in many of these areas, including North Africa, large parts of Europe and most of North America. Brown bears are very adaptable and can live on both a mostly herbivorous diet and a mostly carnivorous diet. They are very variable in size and other morphological traits which historically has led to the description of numerous subspecies and even species. Today, all brown bears are considered a single species with a number of subspecies.

Cave bears (Ursus spelaeus) are the close — and less fortunate — cousins of the brown bear. The two species diverged some 1.6 million years ago, with tooth and stable isotope analyses indicating that cave bears were mostly herbivorous. However, recently a population was discovered that shows a stable isotope signature indicating an omnivorous, or even carnivorous, diet. Although in Europe cave bear remains are much more numerous than those of the brown bear, cave bears went extinct some 25,000 years ago. It has recently been shown that cave bears also occurred in Asia up to north-eastern Siberia.

Cave hyenas (Crocuta crocuta spelaea) are close relatives of the living spotted hyenas from Africa. In fact, in mitochondrial DNA sequence trees, sequences of cave and spotted hyenas are quite intermingled, questioning any taxonomic distinction of them as a subspecies or even as a species. Judging by cave paintings, they were probably spotted like modern spotted hyenas in Africa. They lived in Eurasia throughout the Pleistocene and probably already during the late Pliocene, about 3 million years ago. The timing of their extinction is not well established, but may have taken place around the same time as the cave bear, some 25,000 years ago.

The giant deer (Megaloceros giganteus), or Irish elk, is the gigantic relative of the rather gracile fallow deer. Giant deer are not only remarkable for their large body size but also for their huge antlers which could span up to 3.5 meters. Giant deer are often seen as typical representatives of the Pleistocene, but recent research has shown that in the Urals, giant deer survived until at least 7,700 years ago, far into the Holocene.

The woolly mammoth (Mammuthus primigenius) is no doubt the most iconic of all extinct Pleistocene animals. However, the woolly mammoth is only the last representative of a long lineage that had its origin in Africa. The first European mammoth lived in southern Europe and only later did mammoths colonize the arctic regions. Woolly mammoths differ from their closest relatives, the living elephants, in many features, most conspicuously by their curved tusks, the long hair and their small ears and short tails. Tens of thousands of mammoth bones have been recovered from the northern permafrost regions and sometimes even complete frozen carcasses. Mammoths survived into the Holocene, with the last population disappearing from Wrangel Island only about 3,700 years ago.

The steppe bison (Bison priscus) must have been a very common species throughout the Arctic region, especially in Beringia, given the vast numbers of fossils that have been found. Steppe bison were very variable in their morphology, especially with regard to the size of their horns, which were much larger in some individuals than in modern bison. They went extinct in Eurasia, but genetic analyses have established that they were the ancestor of the modern American bison, Bison bison. Their relationship to the European bison, Bison bonasus, is not known.

In this review, we will discuss the dynamics of animal and plant populations during the Pleistocene, trying to outline how populations reacted to the rapid variations in climate. We will restrict our analyses to the northern hemisphere, as the majority of studies on Pleistocene DNA have been done on species from this region.

What kids see today: Neoteny is rampant in American entertainment and education! Thank you, Hollywood, for making “creationism” look legitimate!


“Long ago, when animals could speak…,” / True?

I don’t venture into “fiction and fantasy” often, except into myth and folklore – our only “literature” that extends back beneath the pre-Christian, pre-“modern social human” overlays, to what once was a world of humans living as animals embedded in the natural environment.

A theme of folklore worldwide is that of a time when animals and humans talked to each other; I believe this was a literal ability of humans to understand animal exchanges of information, just as animal species that survive together in ecosystems “understand” each other’s calls, body language, hormonal states, movements and seasonal behaviors, and co-operate in food-finding, protection and defense.

Scientists today are trying to “recapture” some of this ability and knowledge in observational experiments in the field, where it must be done, since these behaviors are “interactive” responses to the natural environment and the complex group of animals and plants adapted to that environment. The “trouble” is in finding environments that are even remotely free of human alteration and destruction. However, the behaviors of species that are adapting successfully to urban environments also teach us “to read and understand” animal behavior as an ongoing process of animal intelligence.

Labs are also being used to “test” animal communication and learning, but interference and distortion by modern social human preconceptions, unconscious prejudice, and anti-nature, supernatural “ideologies” – beliefs about the status of animals and man – are extremely difficult to remove, as we have seen time after time in “human” psychology studies and theories.

There were, and are, individual humans who have “hung on to” relationships with (usually) wild animals, throughout millennia of persecution and extermination policies carried out by increasingly “culturally poverty-stricken” social humans – humans who display an extreme fear of nature, its physical processes, and its living contents, and who, tragically, project their own modern social “magical paranoia” onto physical phenomena, as hallucinatory manifestations from a nonexistent supernatural domain. This is the state of human perception that has now been declared to be “normal”.

Once the “divide” was made between “wild animals” and domesticated types, which are controllable and exploitable because they are much less intelligent and self-motivated, and as “herd breeds” are no longer able or willing to “defend themselves” from ill-treatment by humans, an “unseen and unacknowledged” domestication of Homo sapiens also took place. Wild humans, just like other wild animals and plants, (and natural and mineral resources), have been targeted for extinction during the recent development of “civilization”.

The relation of Homo sapiens to the environment was shifted by social forces (driven by climate – weather patterns; growing dependence on agriculture – increase in “food” quantity, but decrease in quality – which remains the situation today; population increase due to neotenic sexual selection – possibly a result of bottleneck drops or restrictions in population; and many other factors) from “reasonable survival for all” (a rational conservative strategy) to the exploitation of “slavery of all living things, in service to the predatory few”. This “social journey” has led to the denial of access, for most humans alive today, to the great resources that were delivered to our ancestors’ curious and artistic-inventive brains by nature – especially by the practice of observing and copying the behaviors of our fellow animals and “appropriating” the active processes and materials all around them, by intuitive insight and persistent “tinkering” within the parameters of intuitive physics.

Animals don’t “talk anymore” because humans don’t listen anymore … to animals or to each other! 

____________________________

Check this out; a wonderful modern visualization of human “integration of animal qualities” as practiced by our ancestors.

http://www.blckdmnds.com/o-hibridismo-animal-nas-fotografias-de-ulric-collett/collette8/

_____________________________

From a lovely website with many illustrations: 

notes from a Dartmoor studio
on folklore, fairy tales, fantasy,
mythic arts & mythic living

by Terri Windling

http://www.terriwindling.com/blog/

“We need another and a wiser and perhaps a more mystical concept of animals. We patronize them for their incompleteness, for their tragic fate of having taken form so far beneath ourselves. For the animal shall not be measured by man. In a world older and more complex than ours, they move finished and complete, gifted with extensions of the senses we have lost or never attained, living by voices we shall never hear. They are not brethren, they are not underlings; they are other nations, caught with ourselves in the net of life and time, fellow prisoners of the splendour and travail of the earth.”  – Henry Beston (The Outermost House)

“How monotonous our speaking becomes when we speak only to ourselves! And how insulting to the other beings – to foraging black bears and twisted old cypresses – that no longer sense us talking to them, but only about them, as though they were not present in our world…Small wonder that rivers and forests no longer compel our focus or our fierce devotion. For we walk about such entities only behind their backs, as though they were not participant in our lives. Yet if we no longer call out to the moon slipping between the clouds, or whisper to the spider setting the silken struts of her web, well, then the numerous powers of this world will no longer address us – and if they still try, we will not likely hear them.”  – David Abram (Becoming Animal)

“Maybe it’s animalness that will make the world right again: the wisdom of elephants, the enthusiasm of canines, the grace of snakes, the mildness of anteaters. Perhaps being human needs some diluting.”  – Carol Emshwiller (Carmen Dog)

Many an old story begins with the words, “Long ago, when animals could speak…,” invoking a time when the boundary lines between the human and the animal worlds were less clearly drawn than they are today, and more easily crossed. Animals play a vibrant role in the earliest stories from around the globe: tales of animal gods and guardians, animal nurses and paramours, animal thieves and tricksters, animal teachers and ancestors. In ancient carvings and pictographs we find numerous representations of the animal kingdom, as well as images of men and women with animal characteristics: stag-men, bird-men, lion-women, snake-women, and other beings both beautiful and monstrous. Shamans and wizards were said to be able to shape-shift into animal form, attaining these powers after spending some time living with animals in the wild — sleeping in wolf dens, traveling with reindeer, learning their speech and their secrets.

Folk tales from around the world tell us that the animals communicate with each other in a language unknown to men and women — or else in a language that used to be known to us, but now is lost. The stories also tell of human beings who understand the speech of animals. Some are born with this ability, while others obtain it through trickery, or magic, or as a gift from the animals themselves, a reward for an act of kindness. In both Europe and Asia, snakes and dragons are closely associated with animal speech. In Norse myth, Siegfried tastes dragon blood and then understands the language of birds; in Arabian myth, one obtains this power by eating the heart of a snake. In eastern Europe, the snake must be white; in France it must be black or green; in Greece, the snake must merely lick the ears of the human supplicant. In some tales, humans blessed with the gift of understanding animal speech must never reveal their possession of it — and often they lose it again when a careless word or laughter betrays them. Madness and the ability to speak the language of animals have often been linked, particularly in shamanic tales where the line between madness and oracular wisdom is blurred.

In tribal traditions from all around the globe, animals are believed to have the power to cause or cure certain illnesses. Animals and their spirits are propitiated through gifts, prayers, song, dance, shamanic rituals, and the use of totemic objects. (I once watched a Tohono O’Odham friend sing to a wild hawk in the mountains near Tucson, slowly drawing the hawk within arm’s length of where he knelt. The song, he said, was “hawk medicine,” passed down in his family.) Animal tales are often told not just as simple entertainments but as teaching stories, or as part of healing rites intended to foster a proper relationship between humankind and the natural world. Today, in our rapidly urbanizing society, this teaching/healing aspect of myth — and, by extension, of Mythic Arts — has become more important than ever, while we stare ecological disaster in the face and while more and more animal species fall under threat of extinction.

Animal myths remind us that we don’t own this earth but share it with others — with our animal “brothers” and “cousins,” as many tribal groups have named them. Some early Greek philosophers argued that animals, too, could reason and love, and thus were no less favored by the gods than human beings. To insist that man was the lord of all, they said, was the height of human arrogance. The Book of Job instructs us to “ask the beasts and they shall teach thee; and the Fowls of the air, and they shall teach thee; or speak to the Earth, and it shall teach thee,” while the Qu’ran says, “there is no beast on earth nor bird which flyeth with its wings but the same is a people like unto you.”

In The Spell of the Sensuous, David Abram writes of the importance of re-learning the language of animals and re-telling the stories that bring us back into a balanced relationship with the natural world. “Human language,” he notes, “arose not only as a means of attunement between persons, but also between ourselves and the animate landscape. The belief that speech is a purely human property was entirely alien to those oral communities that first evolved our various ways of speaking, and by holding to such a belief today we may well be inhibiting the spontaneous activity of language. By denying that birds and other animals have their own styles of speech, by insisting that the river has no real voice and that the ground itself is mute, we stifle our direct experience. We cut ourselves off from the deep meanings in many of our words, severing our language from that which supports and sustains it. We then wonder why we are often unable to communicate even among ourselves.”

The late naturalist John Hay expressed a similar sentiment in his influential book A Beginner’s Faith in Things Unseen: “In a society so estranged from animals as ours,” he said, “we often fail to credit them with any form of language. If we do, it comes under the heading of communication rather than speech. And yet, the great silence we have imposed on the rest of life contains innumerable forms of expression. Where does our own language come from but this unfathomed store that characterizes innumerable species?”

There is much to study in the content of myth and folklore; innumerable clues, like the symbolic bread crumbs that lead to encounters of unknown consequence in children’s tales, clues that document the “heart-breaking” degeneration of the human experience of being alive; ancient vitality which is crushed by modern social regimes.

We have denigrated “what is real” to irrelevance by pretending that 200,000 years of human evolution is only a “pagan fantasy”.  Nature as a continually creative context for human fulfillment has been perverted into a forbidden human adventure. Nature is a “bad place” that produced inherently “bad people”. Modern social humans have elevated “sick” social structures to the “highest and only possible good”; a nightmare universe of pathology and unhappiness has swept across the peoples of the planet. That’s my well-researched opinion, as well as the conclusion of my “intuitive visual” Asperger brain, which “remembers” eternal principles.