Neanderthal mtDNA from an Early Modern Human, more than 220,000 years ago

Fact or Baloney…read on…

Neandertals and modern humans started mating early

For almost a century, Neandertals were considered the ancestors of modern humans. But in a new plot twist in the unfolding mystery of how Neandertals were related to modern humans, it now seems that members of our lineage were among the ancestors of Neandertals. Researchers sequenced ancient DNA from the mitochondria—tiny energy factories inside cells—from a Neandertal who lived about 100,000 years ago in southwest Germany. They found that this DNA, which is inherited only from the mother, resembled that of early modern humans.

After comparing the mitochondrial DNA (mtDNA) with that of other archaic and modern humans, the researchers reached a startling conclusion: A female member of the lineage that gave rise to Homo sapiens in Africa mated with a Neandertal male more than 220,000 years ago—much earlier than other known encounters between the two groups. Her children spread her genetic legacy through the Neandertal lineage, and in time her African mtDNA completely replaced the ancestral Neandertal mtDNA.

Other researchers are enthusiastic about the hypothesis, described in Nature Communications this week, but caution that it will take more than one genome to prove. “It’s a nice story that solves a cool mystery—how did Neandertals end up with mtDNA more like that of modern humans,” says population geneticist Ilan Gronau of the Interdisciplinary Center Herzliya in Israel. But “they have not nailed it yet.”

The study adds to a catalog of ancient genomes, including mtDNA as well as the much larger nuclear genomes, from more than a dozen Neandertals. Most of these lived at the end of the species’ time on Earth, about 40,000 to 50,000 years ago. Researchers also have analyzed the complete nuclear and mtDNA genomes of another archaic group from Siberia, called the Denisovans. The nuclear DNA suggested that Neandertals and Denisovans were each other’s closest kin and that their lineage split from ours more than 600,000 years ago. But the Neandertal mtDNA from these samples posed a mystery: It was not like the Denisovans’ and was closely related to that of modern humans—a pattern at odds with the ancient, 600,000-year divergence date. Last year Svante Pääbo’s team at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, offered a startling solution: Perhaps the “Neandertal” mtDNA actually came from modern humans.

______________________________

Strange! Everything I’ve read previously has said the Neanderthal mtDNA was not at all similar to any H. sapiens mtDNA haplogroups.

______________________________

In the new study, paleogeneticists Johannes Krause and Cosimo Posth of the Max Planck Institute for the Science of Human History in Jena, Germany, test this wild idea with ancient mtDNA from a Neandertal thighbone found in 1937 in the Hohlenstein-Stadel cave (HST) in Germany. Isotopes in animal bones found with the Neandertal suggest that it lived in a woodland known to have vanished at least 100,000 years ago.

Researchers compared the coding region of the HST Neandertal’s mtDNA with that of 17 other Neandertals, three Denisovans, and 54 modern humans. The HST Neandertal’s mtDNA was significantly different even from that of proto-Neandertals that date to 430,000 years ago at Sima de los Huesos in Spain, suggesting that their mtDNA had been completely replaced. But the HST sample was also surprisingly distinct from that of other Neandertals, allowing researchers to build a phylogenetic tree and study how Neandertal mtDNA evolved over time.

Using modern humans’ mtDNA mutation rate to calculate the timing, the researchers conclude that the HST mtDNA split from that of all other Neandertals at least 220,000 years ago. The ancient H. sapiens’ mtDNA must have entered the Neandertal lineage before this time, but after 470,000 years ago, the earliest date for when modern human and Neandertal mtDNA diverged. That’s early enough for the new form of mtDNA to have spread among Neandertals and replaced all their mtDNA.
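
Just to make the molecular-clock arithmetic concrete, here is a minimal sketch in Python. The mutation rate and difference count below are hypothetical placeholders, not the study’s data; the actual analysis used calibrated phylogenetic dating, not this two-sequence formula.

```python
# A minimal molecular-clock sketch (illustrative only). The mutation rate
# and difference count are hypothetical placeholders, NOT the study's data.

MUTATION_RATE = 2.5e-8   # assumed substitutions per site per year
SITES = 15_000           # approximate length of the mtDNA coding region
DIFFERENCES = 165        # hypothetical pairwise differences between two lineages

# Under a strict clock, both lineages accumulate mutations independently,
# so the per-site distance equals 2 * rate * divergence_time.
divergence_years = (DIFFERENCES / SITES) / (2 * MUTATION_RATE)
print(f"Estimated divergence: {divergence_years:,.0f} years ago")  # ~220,000
```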

“The mtDNA of Neandertals is not actually from Neandertals, but from an early modern human from Africa,” Krause says. The researchers speculate that this key mating may have happened in the Middle East, where early H. sapiens may have ventured. Other researchers find the scenario remarkable but plausible. “It seems magical but this type of thing happens all the time … especially if the populations are very small,” Gronau says. For example, the mtDNA in some grizzly bears has been completely replaced by that of polar bears, Krause says.

But some experts say DNA from other Neandertals is needed to prove that their mtDNA was inherited entirely from an early H. sapiens rather than from an ancient ancestor the two groups shared. “Is there other evidence of another [early] mtDNA introgression event?” asks Chris Stringer of the Natural History Museum in London.

Not yet, Posth says. Pääbo is seeking evidence of early gene swapping by trying to get nuclear DNA from the HST Neandertal and others. “We will learn a lot about the population history of Neandertals over the next few years,” he says.


doi:10.1126/science.aan70


Energy use by Eem Neanderthals / Bent Sørensen, Roskilde University

doi:10.1016/j.jas.2009.06.003 Journal of Archaeological Science

http://energy.ruc.dk/Energy%20use%20by%20Eem%20Neanderthals.pdf

Bent Sørensen, Department of Environmental, Social and Spatial Change, Roskilde University, DK-4000 Roskilde, Denmark.

_______________________________________________________________________________

Abstract

An analysis of energy use by Neanderthals in Northern Europe during the mild Eem interglacial period is carried out with consideration of the metabolic energy production required to compensate energy losses during sleep, during daily settlement activities and during hunting expeditions, including transport of food from slain animals back to the settlement. Additional energy sources for heat, security and cooking are derived from fireplaces in the open or within shelters such as caves or huts. The analysis leads to insights not available from archaeological findings, which are mostly limited to durable items such as those made of stone: even during the benign Eem period, Neanderthals faced a considerable heat-loss problem. Wearing tailored clothes, or some similar measure, was necessary for survival. An animal skin across the shoulder would not have sufficed to survive even average cold winter temperatures and the body cooling caused by wind-driven convection. Clothes, and particularly footwear, had to be sewn together tightly in order to prevent intrusion of water or snow. The real-time analysis of hunting activity further shows that during summer warmth, transport of meat back to the base settlement would not have been possible without some technique to keep the meat from rotting. The only likely technique is meat drying at the killing site, which indicates further skills in Neanderthal societies that have not been identified by other routes of investigation.

_______________________________________________________________________________

1. Introduction and background

The Neanderthals had an average body mass above that of modern humans, a sturdier bone structure and a lower height. Food was primarily obtained by hunting big game. The aim of the present paper is to explore the energy requirements of Neanderthals during the warm interglacial Eem period (around 125 ky BP), and based on such analysis to discuss the need for clothes and footwear, as well as methods for food conservation and preparation. The climatic environment is taken as that of Northern Europe, using Eem temperature data from Bispingen, close to the Neanderthal site Lehringen near Hamburg (Kühl and Litt, 2007). The climatic conditions would be similar in most of Northern Germany, Belgium and Denmark, while Eastern Europe, like Finland, would have slightly colder winters. Some 30 European Neanderthal sites dating from the Eem, with roughly equal shares in Southern, Middle and Northern Europe, are listed by Wenzel (2007). Traces of seasonal presence have been found at Hollerup, Denmark (Møhl-Hansen, 1954) and at Susiluola Cave, Finland (Schulz, 2001, 2006), indicating that Neanderthals had a high mobility. Assuming a group size of 25 (Hassan, 1981), the total European Neanderthal population at any given time within the Eem period could have been around 1000, depending on how many sites were simultaneously populated and on what fraction of the true settlement count is represented by the sites that survived and were found. Patou-Mathis (2000) lists 73 Eem settlement levels in Middle and Northern Europe, indicating that some sites were re-occupied at different times within the Eem period.
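
As a toy illustration of how such a guesstimate scales: multiply group size by an estimated number of contemporaneous groups. The two correction fractions in the sketch below are hypothetical placeholders (the paper leaves them open), chosen only to show how “around 1000” can arise from roughly 30 known sites.

```python
# Toy version of the population guesstimate above. The two fractions are
# hypothetical placeholders; the paper does not pin them down.

GROUP_SIZE = 25         # persons per group (Hassan, 1981)
SITES_FOUND = 30        # Eem-dated European sites listed by Wenzel (2007)
SIMULTANEOUS = 0.5      # assumed: fraction of known sites occupied at the same time
RECOVERY = 0.4          # assumed: fraction of true settlements that survived and were found

contemporaneous_groups = SITES_FOUND * SIMULTANEOUS / RECOVERY  # ~37.5 groups
population = GROUP_SIZE * contemporaneous_groups
print(f"Rough European Neanderthal population: ~{population:.0f}")  # ~938, i.e. "around 1000"
```
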
Throughout their presence in Europe, the diet of Neanderthals consisted mainly of meat. Large herbivores seem to have been actively hunted rather than scavenged (Bocherens et al., 2005). The Neanderthal hunting strategy was to specialise and concentrate on a few large herbivore mammal species, among which horse (Equus sp.), red deer (Cervus elaphus), woolly rhinoceros (Coelodonta antiquitatis), woolly mammoth (Mammuthus primigenius) and bison (Bison priscus) were present both during the Eem interglacial and the adjacent colder periods. During the Eem, additional forest-based species appeared, and the volume of species preferring open space, such as mammoth, was lower than in adjacent periods: mammoth is found in 22.5% of the Eem-occupied site levels considered by Patou-Mathis (2000), but in 50-60% of the levels belonging to adjacent time periods.

Mammoth is in this study selected as an example for investigating the energy use involved in Neanderthal hunting, slaying and readying meat for eating, because its size demands a maximum of logistical skill from the hunters. However, proof of mammoth presence in North-Western Europe during the Eem period is nearly absent. Mammoth remains have been found in Grotte Scladina, Belgium (in level 4, dated by thermo-luminescence to between 106 and 133 ky BP, and in level 5, dated to 110-150 ky BP), but according to magnetic susceptibility relative dating they lie in the lowest part of the uncertainty intervals and are thus possibly younger than the marine isotope stage 5e usually associated with the Eem period (Döppes et al., 2008; Ellwood et al., 2004). The scarcity of mammoth finds at the settlement sites may be explained by only meat, not bones, being carried back to camp after a kill (Patou-Mathis, 2000; Balter and Simon, 2006). A straight-tusked elephant, a species preferring the warmer Eem environment, has been found at Lehringen with a Neanderthal spear through its ribs, so it could also have been used as an example of extreme-weight prey. In the assessment made in the present study, the exemplified species is represented by its meat-mass alone, and results (such as energy use by carrying) scale directly with the mass of the parts transported back to the group.

Large herbivores were hunted and killed by a party of several Neanderthals sinking spears into their body (Wenzel, 2007). Spears were rarely thrown, as deduced from the absence of the shoulder and upper-arm bone asymmetries characteristic of more recent hunters using spear-throwing techniques (Rhodes and Churchill, 2009). The Neanderthal population density was low enough to make big-game hunting sustainable, and famines due to insufficient food would not normally occur (Harpending, 1998).
A similar analysis of the need for clothes and footwear has been made for marine isotope stage 3, around 40 ky BP (Aiello and Wheeler, 2004), however with some unrealistic features: it uses subjective wind-chill temperatures (Lee, 2001) in a heat-loss expression valid for real temperatures (to which heat loss by the action of wind could have been added in the way it is done below), and it uses an arbitrary increase of the average metabolic rate to three times the basic one. This suggests an ability of the Neanderthal body to regulate its metabolism according to ambient temperature (Steegmann et al., 2002), in contrast to the body of modern humans, where this can be done only in a very minor way in infants, through adrenaline stimulation of a special fat deposit in brown adipose tissue (BAT) near the shoulder (Farmer, 2009). Steegmann et al. (2002) suggest that the recent Tierra del Fuego Ona (Selk’nam) aborigines had a particular cold adaptation and that the Neanderthals might also have had it. However, no search for BAT in Ona remains has to my knowledge been made, and the photo reproduced in Steegmann et al. (2002) shows people posing for a 1901 picture, dressed in heavy fur and similar hats, but with bare feet or moccasins. To make inferences, one would need time distributions of the clothing used and the corresponding temperatures; and in comparing with Neanderthals, one should keep in mind the differences between the Ona, who derived their food from the camel-like guanaco, from gathering and from fishing (e.g., of seals), and the Neanderthals, who obtained food by big-game hunting involving walking or running over extended distances and durations. During cold periods the Neanderthals would be better compared with present or recent Inuit populations, which use several layers of clothing and heavy, furred footwear. Should the Neanderthals really have had a genetic advantage in cold adaptation that modern humans do not have, it becomes more difficult to understand why modern humans, and not Neanderthals, survived the cooling period up to the last glacial maximum, 40-20 ky BP.

Without ad hoc assumptions on the genetic make-up of Neanderthals, one must assume that increasing metabolism requires muscle work, so that a sleeping and freezing Neanderthal must get up and swing the arms, jump up and down, or otherwise perform the muscle work that will raise the level of metabolism and create the associated heat that can keep the body warm. Generating a total of 300 W of heat (including the basic metabolic heat production during sleep of 80-90 W) requires about 100 W of muscle work (Sørensen, 2004, p. 17). The analysis of energy production and use presented below is divided into two parts: first the energy balance during sleep is evaluated, and then the various components of energy production and use during the activities taking up the waking time of Eem Neanderthals.
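
A minimal sketch of that last calculation: the 300 W and 80-90 W figures are from the text, while the ~30% muscular efficiency is a generic textbook value I am assuming, not a figure taken from Sørensen (2004).

```python
# Minimal sketch of the sleep heat-balance arithmetic, assuming a muscular
# efficiency of ~30% (a generic textbook value, not Sørensen's own figure).
# Waste heat per watt of work is (1 - eta) / eta.

MUSCLE_EFFICIENCY = 0.30   # assumed fraction of metabolic energy converted to work
BASAL_HEAT_W = 85.0        # basal metabolic heat during sleep, mid-range of 80-90 W

def total_heat_output(work_w: float, efficiency: float = MUSCLE_EFFICIENCY) -> float:
    """Basal heat plus the waste heat released while producing `work_w` of muscle work."""
    waste_heat = work_w * (1.0 - efficiency) / efficiency
    return BASAL_HEAT_W + waste_heat

print(f"100 W of muscle work -> ~{total_heat_output(100.0):.0f} W total heat")
# ~318 W, in the neighborhood of the ~300 W figure cited in the text.
```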

much more…

________________________________________

Even with the “fat” gene, heavy clothing is essential.

And from genetic studies:

Arctic Inuit, Native American cold adaptations may originate from extinct hominids

https://www.sciencedaily.com/releases/2016/12/161220175552.htm Full Article

December 20, 2016, Molecular Biology and Evolution (Oxford University Press)

Summary:
In the Arctic, the Inuits have adapted to severe cold and a predominantly seafood diet. Now, a team of scientists has followed up on the first natural selection study in Inuits to trace back the origins of these adaptations. The results provide convincing evidence that the Inuit variant of the TBX15/WARS2 region first came into modern humans from an archaic hominid population, likely related to the Denisovans.

Human Adaptation to Cold Environments / Population Growth

I’ve been wondering lately whether or not our assumptions as to Neanderthal, Denisovan and early AMH – Homo sapiens adaptation to Eurasian climates is “logical” in that we assume that adaptation was “highly successful”. What if it wasn’t? 
Looking again at this guesstimate of human population growth, we see that between 100,000 ya (at which time HS pop. is set at “0”) and the “mythic” date 1492, (when supposedly HS pop. was 500 million), the rate of increase was actually pretty dismal. Although HS had migrated to much of the planet, most human population was concentrated in low, hot, coastal environments (and still is). It was only with the fabulous amount of energy supplied by fossil fuels that we succeeded in “invading” both extremely hot and cold environments in any significant and permanent way. “Artificial thermoregulation” has actually resulted in a runaway rate of increase in population growth AND the unforeseen consequence of heating the entire planet. Human artificially cooled and heated environments are not “closed systems” – they are wide open to the surrounding environment. 
 
An analogy might be: Hauling alligators to Siberia, and expecting them to adapt to the cold; a ridiculous expectation: You’d have to change Siberia into Florida. Isn’t that what we’ve begun…? 
 _______________________________________________________________
Temperature (Austin). Published online 2016 Feb 22. doi: 10.1080/23328940.2015.1135688

Human whole body cold adaptation

Introduction

The most widely accepted view of geographic origin and early migration of humans is that they originate from tropical Africa and started to disperse over the world only about 40,000 y ago.1

Since high temperatures dominate in that area, one can assume that at that time humans possessed optimal behavioral and physiological mechanisms to cope with heat, and less developed physiological and behavioral mechanisms to cope with the cold encountered in temperate and arctic regions. Even though it is well documented that climatic changes occurred in tropical regions, seasonal variation in ambient temperature is blunted compared to temperate climates and heat stress dominates.2 40,000 y is a relatively short time span in evolutionary terms, and it is therefore interesting to investigate whether current modern humans are still tropical animals. What mechanisms do we have to cope with cold, and do they differ from the mechanisms we are supposed to have had 40,000 y ago?

Both tropical and (Ant)Arctic climates are challenging climates for humans due to extreme heat and cold respectively. It is assumed that moderate climates with ambient temperatures of around 21°C need minimal human energy investment in comparison to heat and cold exposure.3 However, it is good to realize that human protection from adverse performance and health outcomes is required already in temperate climates due to daily and seasonal variations in temperature, and not only for temperature extremes.

Since we are not able to compare the population living 40,000 y ago with the current population, we have to make some assumptions in an attempt to make comparisons. One assumption is that the current population of central Africa possesses thermoregulatory mechanisms comparable to those of humans 40,000 y ago. This assumption is defensible since at least part of the African population continued to live under similar climatological circumstances. Therefore we can compare the heat and cold coping mechanisms of the current population of tropical Africa with those of people who have lived in cold areas for millennia, in order to learn about the adaptive mechanisms that have occurred. Another way to investigate adaptations is to compare Caucasians to the population originating from Africa that is currently living in colder areas, such as Black Americans.

Finally, experimental studies on repeated exposure to cold may elucidate the mechanisms of acclimatization. It is the purpose of this review to contribute to the discussion of if and how humans adapt to cold, drawing on both population studies and dedicated cold acclimation studies. In this review, exposure to cold is categorized as severe, moderate and mild according to the thermal stressor, which includes both the medium (at a given temperature, cold water exposure is more severe than cold air exposure) and the temperature of the medium. Thermal strain is the reaction of the body to the cold exposure, often quantified by core body temperature.

The human thermoregulatory system relies on behavior and on physiological responses for thermal homeostasis.10 Our physiological mechanisms are limited: basically, thermal balance in humans is maintained by vasodilation/vasoconstriction of the skin and peripheral tissues within the so-called thermo-neutral zone.11 We have one extra physiological mechanism in the heat (sweat evaporation) and 2 extra mechanisms in the cold (shivering [ST] and nonshivering thermogenesis [NST]). Humans are good sweaters, with maximal observed values exceeding 3.5 l/hour.12 Since the heat of vaporization of water is high, this leads to a cooling power of over 2500 W! Moreover, the sweating capacity adapts very well to the demand: 10 weeks of heat acclimation can double sweat production.13 This acquired additional cooling power is maintained for several weeks, even when no longer exposed to heat.14 In conclusion, our thermal response to heat and our adaptation capabilities to heat are well developed. This review will focus on our capabilities to counteract cold exposure, which are less effective, at least in the long term.
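
The “over 2500 W” figure is straightforward physics. A quick check, assuming the standard latent heat of vaporization of water near skin temperature (the review does not state its exact inputs):

```python
# Quick check of the ">2500 W" evaporative-cooling figure, assuming the
# standard latent heat of vaporization near skin temperature (~2.43 MJ/kg).

LATENT_HEAT_J_PER_KG = 2.43e6   # heat of vaporization of water at ~35 degC
SWEAT_RATE_KG_PER_H = 3.5       # maximal sweat rate cited in the review (1 l ~ 1 kg)

cooling_power_w = SWEAT_RATE_KG_PER_H * LATENT_HEAT_J_PER_KG / 3600.0
print(f"Evaporating {SWEAT_RATE_KG_PER_H} l/h removes ~{cooling_power_w:.0f} W")
# ~2360 W if all sweat evaporates; rates or latent-heat values slightly
# above these give the review's "over 2500 W".
```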

Human adaptation to thermal extremes is not only an academic question, but is also important for assessing the impact of climate change on mortality and morbidity.4 It is predicted that we will face more thermal extremes in the future, and the role of adaptation is essential to understanding their impact. Some studies even predict the extinction of human populations that live in extremely hot climates within a few decades,5 but they hardly take human adaptation into account. On the other hand, it is not unlikely that Northern Europe may experience cooling due to the thermohaline circulation,6 and then it is good to know whether, and to what extent, we can adapt to cold. Another important question is whether workers are better protected against cold after repeated cold exposure. Occupational work is expected to increase in cold areas due to the exploration of natural gas (over 30% of the world’s gas reserves are located in the Arctic area) and the opening of the waterway north of Russia. Similar questions arise in the area of sports, where running, skiing or skating in extreme cold is increasingly popular: does exposing oneself to cold prior to a sports event have any benefit in terms of optimal preparation? Therefore, this review focuses on the capability of humans to adapt to cold.

First the basic mechanisms to cope with cold will be discussed, followed by differences between populations living in hot and in cold areas. In most reviews on cold adaptation7-10 studies regarding population differences are intertwined with acclimation studies (in line with the definition in the Glossary of terms11). This may lead to confusion and therefore this review starts with discussing the results of population studies followed by studies on acclimation to cold. When required, small excursions will be made to the effect of heat exposure on humans. A recent review on heat adaptation provides extended information on adaptation to heat.12

____________________________________

Of course, this “physics” problem is very complicated….

Arena ‘sand, sand-strewn place of combat.’ / Another Day in the Good Ole USA

The Ancient Romans were not so far removed from nature that they had forgotten Nature, the platform in which Homo sapiens had arisen. Nature, as the original container for human, plant and animal vitality, became a place of nostalgia, in which all creatures, forces, pain and pleasure had been “inflated” to heroic scale, with humans having to sort out what life would be like in a new paradigm.

The changes that were “inevitable” due to a settled life were drastic. I have posted many times about the transition from nomadism, a here-and-now challenge to survival that demands an opportunistic and fluid visual-sensory intelligence. The success of the group depends on each member being actively engaged with the environment, and making contributions to the knowledge base that is absolutely vital. This lifestyle persisted for all of human history, until the advent of “village life” with, increasingly, strangers having to live and work together.

“Tied to the land… and to each other.”

This period has been romanticized as the Happy Peasant Life throughout written history, from the devoted laborers of Pharaoh’s time, to drunken and chubby Medieval types hauling in the harvest, to millions of Asian workers inserting rice plants one by one in vast idyllic ponds. It’s literally back-breaking labor. And agriculture is no guarantee of survival. Ancient texts are dominated by natural disaster, drought, raiding insects, plagues of rodents and disease: mass starvation.

We will never know exactly what happened, because we have cast agriculture, and the consequences, into “The Good” that defines everything that was left behind as “The Evil” – which includes everything that occurred for 4 billion years before human ascendancy as a civilized species. But, we do know that the change in lifestyle was so drastic that almost immediately, strict systems of behavior control were instituted, by force, wherever agriculture became the means to “feed and water” domestic animals, with  humans included in the “new” category of living things that are “not wild”. Genetic alterations through selective breeding were intentional, practical and effective, despite humans knowing nothing about DNA or evolution.

Homo sapiens was not a “master species that tamed nature” but an integral part of a package of transformation of “wild species” into a complex of domestic animals and plants, which in hindsight can be seen as a “neotenic invasive species complex” that has caused irrevocable destruction of pre-existing “ecosystems”, and at a rate that is impossible for evolutionary processes to compensate for through adaptations in wild species. This further erodes “wild diversity” in favor of “domestic poverty,” reducing surviving species to those that can tolerate toxic human spaces.

This destruction has occurred wherever the “domesticated menagerie” has been transported and introduced. And yet, despite the overwhelming evidence for catastrophe, the modern human response is “more of the same” – an unshakeable belief in the paradigm of Homo sapiens as the Master Species, fuels a frantic effort to dominate nature with ever-increasing application of technologies.

This includes a last desperate effort to “do away with” organic evolution altogether. Cyborgs, robots, self-creating AI machines, and drastic genetic intervention and control of DNA as the foundation of “new” Master Species is well underway. 

The enormous labor required for successful agriculture, the intensely crowded town and urban life that resulted, the infrastructure necessary to control animal behavior, human behavior, water supplies and distribution of resources, and the predation by parasites, insects, rodents, opportunistic animals of all types, and other humans, created a situation that readily produced “top predators” from among the wild humans remaining in the natural environment.

Top male humans easily moved in to “take over” these new agricultural systems, because, as “wild humans” they were more intelligent, and became even more so, as they flourished at the apex of a new hierarchy.

Back to the Ancient Romans: I think that we are intensely attracted to “Rome,” the ambiguous culture, because it demonstrates the emerging duality of wild vs. domestic; a time when these forces were still commingled and interactive; a context that provided for the dynamic inventions of Roman civic and practical infrastructure that inspire us today. But the balance shifted, which eventually led to a fatal separation in the human perception of reality: the crippling division between the manmade environment (labeled superior) and the natural environment (designated a place of evil) that was to follow. Even more catastrophic was the triumph of the Christian insistence that only a “magical” supernatural neotenic hallucination exists.

The “modern social” perception of (absolute) “reality” is stuck in the theater of perpetual conflict between “Good and Evil”, a conceptual structure that demands that black and white judgements of value be made about even the most trivial phenomena. This delusion is pervasive. It leads inevitably to each group, or even each individual, being entrenched and isolated, and forced to defend their position as “Absolute Good” (not as rational, or “good for” the family, community, or nation – and certainly not the least concerned with the health and safety of other life on the planet) and everyone and everything else as “Absolute Evil”.

This impossible situation is a trap laid by a belief system that intensifies human hatred day by day.  

Armageddon becomes a “reasonable” solution. Nuclear war becomes inevitable. 

The Roman “arena” was the recreation of nature as the “nostalgic” origin of human competence and accomplishment, however brutal their “memory” of that wild period seems to us. Our arena is devoid of vitality: the artificial estrangement from nature makes people go mad.

We label “whole individuals” as worthless, whole groups as “evil” and insist that the last human standing after total destruction will be the “winner”. 

This is where we are: this madness is the structural imperative of a  truly insane “domestic” animal.  


Thinking and Calories / Supernatural NT “Science”

SciAm Mind

This is one of those articles that sets up the problem (in a rambling sort of way) rather than answering the question posed. The question of how many calories are “burned” during specific brain activities would seem to be no more difficult than measuring – calculating calories burned during specific “physical” activities (brain activity is physical!) Apparently not. The question is, why?

What this whole problem boils down to, is that our  brain is smarter than we are!

Does Thinking Really Hard Burn More Calories?

Unlike physical exercise, mental workouts probably do not demand significantly more energy than usual. Believing we have drained our brains, however, may be enough to induce weariness

By Ferris Jabr on July 18, 2012

Temporary mental exhaustion is a genuine and common phenomenon, which, it is important to note, differs from chronic mental fatigue associated with regular sleep deprivation and some medical disorders. Everyday mental weariness makes sense, intuitively. (Does it? Or are we taught to believe this?) Surely complex thought and intense concentration require more energy than routine mental processes. (Which processes are “routine”?) Just as vigorous exercise tires our bodies, intellectual exertion should drain the brain. (Oh dear – that mythological neurotypical brain / body split again!) What the latest science reveals, however, is that the popular notion of mental exhaustion is too simplistic. The brain continuously slurps up huge amounts of energy for an organ of its size, regardless of whether we are tackling integral calculus or clicking through the week’s top 10 LOL cats.

Although firing neurons summon extra blood, oxygen and glucose, any local increases in energy consumption are tiny compared with the brain’s gluttonous baseline intake. So, in most cases, short periods of additional mental effort require a little more brainpower than usual, but not much more. Most laboratory experiments, however, have not subjected volunteers to several hours’ worth of challenging mental acrobatics. (Why not?) And something must explain the feeling of mental exhaustion, even if its physiology differs from physical fatigue. (WOW! Two types of “physical fatigue”: 1. MENTAL (suspiciously separate from “the body” – supernatural) as opposed to 2. REAL physical fatigue! This distinction may seem “picky” but this conceptual separation follows from the “duality” of body and mind that is an ongoing superstition in NT thinking.) Simply believing that our brains have expended a lot of effort might be enough to make us lethargic. (How?)

Brainpower

Although the average adult human brain weighs about 1.4 kilograms, only 2 percent of total body weight, it demands 20 percent of our resting metabolic rate (RMR)—the total amount of energy our bodies expend in one very lazy day of no activity. RMR varies from person to person depending on age, gender, size and health. If we assume an average resting metabolic rate of 1,300 calories, then the brain consumes 260 of those calories just to keep things in order. That’s 10.8 calories every hour or 0.18 calories each minute. (For comparison’s sake, see Harvard’s table of calories burned during different activities.) With a little math, we can convert that number into a measure of power:

—Resting metabolic rate: 1,300 kilocalories, or kcal, the kind used in nutrition
—1,300 kcal over 24 hours = 54.16 kcal per hour = 15.04 gram calories per second
—15.04 gram calories/sec = 62.93 joules/sec = about 63 watts
—20 percent of 63 watts = 12.6 watts

So a typical adult human brain runs on around 12 watts—a fifth of the power required by a standard 60-watt lightbulb. Compared with most other organs, the brain is greedy; pitted against man-made electronics, it is astoundingly efficient. IBM’s Watson, the supercomputer that defeated Jeopardy! champions, depends on ninety IBM Power 750 servers, each of which requires around one thousand watts.
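
The conversion chain can be reproduced in a few lines of Python. This only restates the article’s own arithmetic (1,300 kcal/day, 20 percent to the brain); the 4.184 joules per gram-calorie factor is the standard one.

```python
# The article's unit conversion, step by step, using only its own inputs.

KCAL_PER_DAY = 1300.0        # assumed average resting metabolic rate (from the article)
BRAIN_SHARE = 0.20           # brain's share of resting metabolism (from the article)
J_PER_GRAM_CALORIE = 4.184   # standard joules per (gram) calorie

kcal_per_hour = KCAL_PER_DAY / 24.0                   # ~54.2 kcal/hour
gram_cal_per_sec = kcal_per_hour * 1000.0 / 3600.0    # ~15.0 gram calories/second
body_watts = gram_cal_per_sec * J_PER_GRAM_CALORIE    # ~63 W whole-body resting power
brain_watts = body_watts * BRAIN_SHARE                # ~12.6 W for the brain

print(f"Whole body at rest: ~{body_watts:.0f} W; brain: ~{brain_watts:.1f} W")
```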

Energy travels to the brain via blood vessels in the form of glucose, which is transported across the blood-brain barrier and used to produce adenosine triphosphate (ATP), the main currency of chemical energy within cells. Experiments with both animals and people have confirmed that when neurons in a particular brain region fire, local capillaries dilate to deliver more blood than usual, along with extra glucose and oxygen. This consistent response makes neuroimaging studies possible: functional magnetic resonance imaging (fMRI) depends on the unique magnetic properties of blood flowing to and from firing neurons. Research has also confirmed that once dilated blood vessels deliver extra glucose, brain cells lap it up.

Extending the logic of such findings, (Uh-oh – is that “logic” or NT speculation ahead? The following is a chain of assumptions that may “seem obvious” to NTs, but that exercise is not “logic”) some scientists have proposed the following: if firing neurons require extra glucose, then especially challenging mental tasks should decrease glucose levels in the blood and, likewise, eating foods rich in sugars should improve performance on such tasks. Although quite a few studies have confirmed these predictions, the evidence as a whole is mixed and most of the changes in glucose levels range from the miniscule to the small. In a study at Northumbria University, for example, volunteers that completed a series of verbal and numerical tasks showed a larger drop in blood glucose than people who just pressed a key repeatedly. In the same study, a sugary drink improved performance on one of the tasks, but not the others. At Liverpool John Moores University volunteers performed two versions of the Stroop task, in which they had to identify the color of ink in which a word was printed, rather than reading the word itself: In one version, the words and colors matched—BLUE appeared in blue ink; in the tricky version, the word BLUE appeared in green or red ink. Volunteers who performed the more challenging task showed bigger dips in blood glucose, which the researchers interpreted as a direct cause of greater mental effort. Some studies have found that when people are not very good at a particular task, they exert more mental effort and use more glucose and that, likewise, the more skilled you are, the more efficient your brain is and the less glucose you need. Complicating matters, at least one study suggests the opposite—that more skillful brains recruit more energy.* (What this laundry list of non-satisfying “results” indicates is poor experimental design and  too many clichéd assumptions, not enough “objective physics” questions considered. This question of calories – energy use is “hard science”!)

Not so simple sugars

Unsatisfying and contradictory findings from glucose studies underscore that energy consumption in the brain is not a simple matter of greater mental effort sapping more of the body’s available energy. Claude Messier of the University of Ottawa has reviewed many such studies. He remains unconvinced that any one cognitive task measurably changes glucose levels in the brain or blood. “In theory, yes, a more difficult mental task requires more energy because there is more neural activity,” he says, “but when people do one mental task you won’t see a large increase of glucose consumption as a significant percentage of the overall rate. The base level is quite a lot of energy—even in slow-wave sleep with very little activity there is still a high baseline consumption of glucose.” Most organs do not require so much energy for basic housekeeping. But the brain must actively maintain appropriate concentrations of charged particles across the membranes of billions of neurons, even when those cells are not firing. (Like a military “readiness posture” in peacetime) Because of this expensive and continuous maintenance, the brain usually has (maintains a small energy surplus) the energy it needs for a little extra work. Authors of other review papers have reached similar conclusions. Robert Kurzban of the University of Pennsylvania points to studies showing that moderate exercise improves people’s ability to focus. In one study, for example, children who walked for 20 minutes on a treadmill performed better on an academic achievement test than children who read quietly before the exam. If mental effort and ability were a simple matter of available glucose, then the children who exercised—and burnt up more energy—should have performed worse than their quiescent peers. The influence of a mental task’s difficulty on energy consumption “appears to be subtle and probably depends on individual variation in effort required, engagement and resources available, which might be related to variables such as age, personality and gluco-regulation,” wrote Leigh Gibson of Roehampton University in a review on carbohydrates and mental function. Both Gibson and Messier conclude that when someone has trouble regulating glucose properly—or has fasted for a long time—a sugary drink or food can improve their subsequent performance on certain kinds of memory tasks. But for most people, the body easily supplies what little extra glucose the brain needs for additional mental effort. (That is, almost all mental tasks are simply not taxing in terms of additional energy needed)

Body and mind

(Are NTS ever going to grasp that this duality is imaginary? It’s all one system!)

If challenging cognitive tasks consume only a little more fuel than usual, what explains the feeling of mental exhaustion following the SAT or a similarly grueling mental marathon? (This is a cultural statement that reflects the anti-intellectual bias of Americans. If American education were ADEQUATE, the SAT or any other test would not be “grueling”! Scores are as much a “test” of educational failure on the part of the Ed system as they are measures of student learning. The two cannot be separated.) One answer is that maintaining unbroken focus or navigating demanding intellectual territory for several hours really does burn enough energy to leave one feeling drained, but that researchers have not confirmed this because they have simply not been tough enough on their volunteers. (Ditto above! Standards and requirements that in “foreign” countries are everyday expectations are considered by Americans to be “cruel and unusual punishment.”)

In most experiments, participants perform a single task of moderate difficulty, rarely for more than an hour or two. “Maybe if we push them harder, and get people to do things they are not good at, we would see clearer results,” Messier suggests. Equally important to the duration of mental exertion is one’s attitude toward it. Watching a thrilling biopic with a complex narrative excites many different brain regions for a good two hours, (Aye, yai, yai! Is this imaginary anecdotal “audience” to be considered a legitimate “control group” for all these other studies?) yet people typically do not shamble out of the theater complaining of mental fatigue. Some people regularly curl up with densely written novels that others might throw across the room in frustration. Completing a complex crossword or sudoku puzzle on a Sunday morning does not usually ruin one’s ability to focus for the rest of the day—in fact, some claim it sharpens their mental state. In short, people routinely enjoy intellectually invigorating activities without suffering mental exhaustion. Such fatigue seems much more likely to follow sustained mental effort that we do not seek for pleasure—such as the obligatory SAT—especially when we expect that the ordeal will drain our brains. If we think an exam or puzzle will be difficult, it often will be. Studies have shown that something similar happens when people exercise and play sports: a large component of physical exhaustion is in our heads. In related research, volunteers who cycled on an exercise bike following a 90-minute computerized test of sustained attention quit pedaling from exhaustion sooner than participants who watched emotionally neutral documentaries before exercising. Even if the attention test did not consume significantly more energy than watching movies, the volunteers reported feeling less energetic. That feeling was powerful enough to limit their physical performance. In the specific case of the SAT, something beyond pure mental effort likely contributes to post-exam stupor: stress. After all, the brain does not function in a vacuum. (Brilliant!) Other organs burn up energy, too. Taking an exam that partially determines where one will spend the next four years is nerve-racking enough to send stress hormones swimming through the blood stream, induce sweating, quicken heart rates and encourage fidgeting and contorted body postures. The SAT and similar trials are not just mentally taxing—they are physically exhausting, too. (Again – mental activity is held to be “supernatural” – meaning that it takes place in a dimension outside of physical reality.) (A SOCIAL PROBLEM: maybe we could ask, “How much energy is being wasted by Americans in dealing with social stress; energy that could fuel learning and other ‘satisfying’ and health-promoting activity?”)

Blah, blah, blah! NONE of this is relevant to determining how much energy the brain needs or consumes during specific activities.

A small but revealing study suggests that even mildly stressful intellectual challenges change our emotional states and behaviors, even if they do not profoundly alter brain metabolism. Fourteen female Canadian college students either sat around, summarized a passage of text or completed a series of computerized attention and memory tests for 45 minutes before feasting on a buffet lunch. Students who exercised their brains helped themselves to around 200 more calories than students who relaxed. Their blood glucose levels also fluctuated more than those of students who just sat there, but not in any consistent way. Levels of the stress hormone cortisol, however, were significantly higher in students whose brains were busy, as were their heart rates, blood pressure and self-reported anxiety. In all likelihood, these students did not eat more because their haggard brains desperately needed more fuel; rather, they were stress eating. Messier has a related explanation for everyday mental weariness: “My general hypothesis is that the brain is a lazy bum,” he says. “The brain has a hard time staying focused on just one thing for too long. It’s possible that sustained concentration creates some changes in the brain that promote avoidance of that state. It could be like a timer that says, ‘Okay you’re done now.’ Maybe the brain just doesn’t like to work so hard for so long.”

Here’s a hypothesis:

All this rambling “supernatural” conjecture that encumbers “so-called analysis” of human physical reality wastes vast amounts of time and energy, but gets us NOWHERE in usable research. What it does is create and prolong a bunch of “belief in nonsense” that prevents productive lines of questioning and legitimate research, and persists in orienting our thinking to rely on cultural clichés.


The final question: What do we do with the pain?

The modern social human separation of “good and evil”

To answer that question, we have to step back to the difficult realization that we don’t make the rules; not for nature; not for the universe. No one likes this ultimate situation. It’s likely that 99% of human effort occurs in opposition to this truth. We separate ourselves from “the animals” in order to deny that we, like them, cannot control “the monster”. Denial of animal identity is denial that we are animals. We are special; we can refuse to “belong” to life; plead our case, prove our worth, worship “them” or “him” or the sun and rain; fire, blood, the magic intercession of words and numbers. To no end.

What makes humans “special”? Merely our insistence that we are. We have yet to prove any such exemplary status.

Early humans had no elaborate delusions: they were animals, and they needed whatever materials, skills, advantages they could grasp. That included the “power” of other animals, each type of which had its special abilities. Who wouldn’t desire to be like the great predators who snatched the life of the animals around them, including the “bipedal” apes? Or the birds that garnered food from far and wide; from high and low, from tree tops and lakes, rivers and the oceans. Or the insects that massed by the thousands, or millions, to build fortresses from their own bodies; who stored food for lean times; who migrated en masse to devour seasonal or random opportunities that presented abundant food that could fuel reproduction.

The animals always seemed to know when and where their food sources would appear. It would have been logical to think that animals “knew things” in advance of events to come; that they “talked” to each other, and acted often in concert. Migrations were obvious patterns that occurred; many clues announced the timing of movements great and small. Rain, no rain, temperature changes, plants blooming or dying off, birds coming and going, clouds of dust and distant “earth” noise, the appearance of predators, the mating rituals of species, the birth of a new generation, and the hard times of drought, die offs, invisible disease, and sudden death that happened to living forms.

Their intelligence wasn’t verbal, that is “conscious”. They didn’t sit around a conference table discussing what to do; taking notes, delegating tasks; dithering over the social consequences to themselves.

Our ancestors asked the wise man or woman of the group; deferred to the elders, whose value was unquestioned: there they were, in the flesh, having survived to great age – proof that they knew what the “rules” of nature are and how to use them to survive.  They could point to drawings or petroglyphs made by ancestors; hold pieces of interesting materials and ponder their usefulness; learn from artifacts containing information old and lost in time, but “still true” – telling of water holes, patches of edible plants, good hunting sites, the location of safe boundaries and where the unknown began. And details of specific events, denoted by marks we no longer understand. We assume that they were “stupid, primitive, naïve” and always on the brink of extinction, because we devalue what they knew: Knowledge must be “written down” to be modern, sophisticated, and on the path to our current magnificence. To satisfy our arrogance, we attribute modern behavior retroactively to these “proto-people” in order to save them from extinction; we must magically ensure that they will become “us” – modern social humans.

A typical modern human asks, How could they possibly have survived without our monstrosity of (ironically) unwritten secret codes of socio-cultural behavior and our modern (overblown and incoherent) written-down religious law; our social systems of behavior control? In reality, a tangle of hierarchy that survives on inequality, brutality and punishment, enforced by withholding necessary resources from “subhumans” – the great project of redistributing the “wealth of the planet” to a few individuals. And at every opportune moment, genocide, war, and destruction of resources, knowledge, skills, ways of life millennia old; destruction of nature’s millions of manifestations.

Was existence before “word-based civilization” dangerous, brief, terrifying, painful, and totally incomprehensible according to the modern American “vision” of our way of life as being what God, Nature, the Founding Fathers, or a cosmic intelligence intended for us, alone? Of course. We are unable to imagine that “earth” existed before yesterday (the American idea of reality) or for even a few millennia of history, which, after all, consists solely of pyramid cultures; pot shards and clots of mud detailing trade transactions and the “me-me-me” ravings of sociopaths who abused other humans – and of course the fantastic “loot” created by prior cultures, which adds to the fantasy that modern people invented everything valuable. Look what we did!

Inventions trivial, major and life-changing are paraded and conflated with the idea of human genius; the ridiculous assertion that people living today not only “created all this stuff” (instead of inheriting it) but also that human superiority is confirmed by the acts of purchasing “wizard-like” gadgets, by being a passenger or driver of a car, boat or airplane; by purchasing myriad products without effort online; by baking cookies and cupcakes, ordering pizza, and having our “waste” disposed of by someone else. These acts confirm that the “promise” of being awarded “top species status” has been fulfilled by Americans.

The wonders and superpowers of the ultimate human brain are magically distributed to individuals who make up an imaginary population called “mankind” (not really; this actually refers to certain (white) people granted “oversight and control” of everything, including nature).

One of the most astonishing “functions” of modern humans is to “pray for good things” to happen, without actually doing anything to prevent bad things from happening.

I’m sure our pre-civilized ancestors also prayed; but their attitude was one of commitment to a personal contribution of skill and courage to a group endeavor. And an awareness that “power” in nature is both dark and light, creative and destructive; mysterious, yes, but also predictable. A game; a challenge; a no-win predicament. But oh so rewarding when the challenge is met.

The question of, What will I do with my share of brief and temporary life, a life that is both dark and light, is up to us to answer within the rules of nature, whether we acknowledge nature’s supremacy or not. It’s also the question that we avoid.

Denying pain as a consequence of being alive produces more pain, and keeps us addicted to the behavior of denial.

It seems to me that a few human belief systems point to the same conclusion: escape from reality, as generated by our subjective point of view and sentiment, is not possible as long as we believe that ‘nature’ divides pain and pleasure; that we can force nature to provide only what we demand: no pain.

Western civilization, like other great empires, is established on the belief that “pain and evil” can be eliminated by inflicting pain and evil on other people. The “proof” that this is a good idea is the gain in material wealth that results: We end up by creating more pain than even nature requires.

Good and evil cupcakes: neotenic solution to the question of pain – a black and white division of reality.


Difficult to establish causality in gender studies / ScienceNordic

http://sciencenordic.com/what-we-don’t-know-about-gender-differences-brain

What we (don’t) know about gender differences in the brain

November 2, 2017 
COMMENT: A Google engineer was recently laid off for writing that biological differences explain why more men than women work in the tech industry. A neurobiologist and science journalist in Denmark agreed with him – but are they right?
_______________________________

There is more to this article: I’ve picked out the following topic because it has huge bearing on the “validity” of claims by American psychologists that their “opinions” (ideological interpretations) have something to do with actual science. Believe it or not, there are psychologists who defend psychology as a “science” by claiming that there really is no one definition of science or the scientific method; ergo it’s whatever they say it is!

The Nine Levels of Scientific Hell, by SHRO

http://xkcdsw.com/3681

___________________________

Difficult to establish causality in gender studies

It requires controlled experiments to say – with some degree of certainty – whether gender differences are biologically or culturally determined. Only through experimentation do we have control over the relevant variables, which allows us to determine causality.

This is a challenge when it comes to gender differences, because we cannot manipulate the gender (independent variable) and study how this manipulation affects the behaviour (dependent variable).

An example of the type of spatial task where the largest gender difference in cognition has been documented (mental rotation). The test subject must determine if the figures are identical or mirrored. (Figure: Christian Gerlach)

We can only study how the genders differ in regards to behaviour. That makes it difficult to determine whether the differences are caused by biology or culture. Thus, gender studies are based on non-experimental research, which is poor at establishing causality.

There have been some attempts, though, at demonstrating a relationship between testosterone levels in embryos and gender differences in preferences and behaviour. One study shows that higher levels of testosterone lead to more “boyish” (violent) behaviour during playtime. Another study could not replicate these findings.

This inconsistency is prevalent within the field, which is also host to a full range of interpretation issues. We still do not – and cannot – know if the testosterone directly affects the brain and causes the change in observed behaviour.

It might just as well be caused by testosterone increasing body volume, which then leads to more ‘violent’ games. Normally, it is not possible to measure the amount of testosterone directly in the blood (in embryos). Rather, it is measured in the amniotic fluid. But these measurements do not necessarily correlate.

Read More: What happens to girls and boys in gender-neutral preschools?

Animal studies and hormonal imbalances are also unreliable

Considering this, we can look for other approaches. We could study people with hormonal imbalances. Some studies, but not all, find that female embryos with abnormal amounts of testosterone (congenital adrenal hyperplasia) grow up to have more “boyish” behaviour or preferences. (This is often presented as a “biological fact” by certain autism “experts”, when there is no proof that “boyish behavior or preferences” is “located in the brain” rather than culturally determined. They assume that the dominant (male) cultural bias is scientifically valid and begin “the inquiry” at this point, without question.)

Here it is important to remember that it is always difficult to establish the ‘normal’ based on the ‘abnormal.’ For that reason it is an indirect conclusion.

And yet again, this is exactly what pathology-based psychological theories of human behavior do!

Another approach is animal studies. In these studies it is possible to directly manipulate the amount of hormones that the organism is exposed to at any given time – while keeping the environment stable. Here we have more control over the variables, but the problem is that animals aren’t people and that hormonal effects can be species-related.

In other words, animal studies are rarely good at explaining complex human behaviour, such as job preferences.

Gender differences and ideology

As described above, there are gender differences in the brain, but it is unclear if – and how – they are connected with cognitive differences. The same must be said about the relationship between hormones and gender differences in cognition, social behaviour, and preferences.

Compared with this, there is less doubt that gender differences can be cultural products. However, given that it is difficult to establish causality, there is still room for ideological interpretations. And that is possible entirely without denying the facts.

BUT! These “opinions” ought to be LABELED as such: as “ideological, social, philosophical, or other belief-based interpretations” and not presented (via weasel words and other deceptive language) as scientifically valid conclusions – which “psych-social opinion” is not.

That is probably the only thing that can be concluded with certainty in this debate.

—————
Read this article in Danish on ForskerZonen, part of Videnskab.dk

What Happens When Social Rules Change? / Look Around

Asperger people are criticized for not being social; that is, we just don’t respond to social requirements as demanded by the multiple agencies of “social order”. As an Asperger, I recognize, perhaps more clearly and emphatically than neurotypicals, the need for “rules of the road” to be applied to billions of social humans who must “try to get along” with each other while providing enough resources to keep “everyone alive” and the slaves pulling their respective oars on the great barge of civilization.

Asperger individuals find themselves trying to understand human behavior from an early age, growing up as we do on the deck of a heaving “Noah’s Ark” loaded with stampeding elephants, running to and fro, trampling the other animals, and trumpeting complaints that “The Flood” is all the other animals’ fault. All the other animals acknowledge that the Elephants are in charge – look how big and powerful they are; and how much water they drink! And food! There’s little left for the rest of the animals, who try to obey the orders the Elephants dish out about who gets to drink and eat, how much, and when. It is a system tolerable to social animals, as long as each group, and each member of the group, gets a decent amount to live on… but the damn elephants keep changing the “who gets what and how much” day to day, and even minute to minute.

The elephants have abandoned their “function” as leaders: organizing the procurement of supplies and the distribution of necessities, so that all the types of animals who joined the Ark – in a reproductive two-by-two scheme, ready to fulfill their part in the future of “Life After The Flood” (or at least to recover between disasters, change and catastrophe being the permanent pattern in Earth’s history) – will have a “good shot” at extending the success of their species, and of those species whose destiny is tied to theirs, and vice versa.

Some of the Aspergers caught in the melee of greed, confusion, desperation and irrational violence that has overtaken the deck of the Ark hide wherever they can, finding refuge on the sinking barge in out-of-the-way nooks and crannies below deck. Others believe that they must try to join the madness on deck by “becoming” part of the insanity; still others jump ship, discovering that there’s dry land “out there” that the denizens of the Ark simply can’t see.

After thousands of years of poor leadership, and billions more “animals” on the once-capacious Ark, no one sees the problem: the Social Rules no longer make sense. The rules keep changing minute to minute inside the social order. The animals are leaderless and resort to making their own rules, simply to survive the chaos. Some groups see the opportunity to overthrow the elephants and impose their own rules on everyone. It’s the usual social response to leaderless conditions: desperation; war as a state of mind that is acted upon with increasing frequency; the imposition of even worse tyrannical regimes.

But in physical reality, Nature’s laws have not changed, nor will they. Nature imposes the real and ultimate test of human behavior. Asperger people understand this. A Native American philosophical position was related to me by a Sioux acquaintance:

 “The white man will destroy himself; we wait, they will go away. We will have our way of life back and we are preserving our traditions, and will, for as long as it takes.”

Meanwhile, the elephants are rearranging the deck chairs on Noah’s Yacht.  

And no, I’m not picking on Republicans, but on failed leadership by the “top” of the social pyramid, which is responsible for leading: you get the perks of power and wealth; you create order and protect your people through a COHERENT system of rules and regulations, and fairness in the application of social restrictions and consequences.


Pleistocene Animals / Population Dynamics

The Field Museum of Natural History was a “kid magnet” in my day; I’m sure it still is. I remember the Egyptian mummies, which I didn’t like to be around because the odor they gave off was repulsive. The big dioramas and paintings attracted me, especially the giant fauna of Ice Age animals, which were labeled “extinct”. Some were shown being hunted by Paleo-Indians; it seemed preposterous (and terribly brave – or an act of desperation) for a human with a wooden spear and stone point to do such a thing. Then there were the “cave” animals; they looked a lot like contemporary bears and big cats, but bigger and more ferocious. The word “cave” confused me. Being Asperger, I took it literally: humans occupied caves, too – did they have to “evict” or kill all the animals living there before moving in? It seems that “cave” refers not to the animal occupying the cave, but to the caves where their remains were found – and where humans painted images of them. (Language again! So non-specific…)

As a “real” hunting scene, this is idiotic! Wild boar are EXTREMELY dangerous animals! No one would crouch on the ground or stand a few feet in front of one of these fast-charging and deadly animals… And what about those “coyote-looking” dogs on leashes? Hmmm… And all that tender white skin?

Diorama ID: Mas d’Azil cave in France. The scene shows two Azilian men armed with wooden spears with flint lance-points at close quarters with an enraged wild boar defending his mate and two young pigs. The dogs are held by rawhide straps and they are straining forward at the leash. The painted background shows the peaks of the Pyrenees in the distance.

For the story of this and other Field Museum “cave men” dioramas:

https://www.fieldmuseum.org/science/blog/what-happened-caveman-dioramas

Le Mas-d’Azil cave, in southwestern France, is the type site for the prehistoric Azilian culture. The Grotte du Mas d’Azil is a “supersite” of human habitation going back ca. 30,000 years, and is also a key site for the Magdalenian culture that preceded the Azilian.

For lots more info & photos see also: http://donsmaps.com/masdazil.html

_________________________________________________________________________________

Full PDF at: http://www.sciencedirect.com/science/article/pii/S0960982209013062

Ecological Change, Range Fluctuations and Population Dynamics during the Pleistocene

Apart from the current human-induced climate change, the Holocene is notable for its stable climate. In contrast, the preceding epoch, the Pleistocene, was a time of intense climatic fluctuations, with temperature changes of up to 15°C occurring within a few decades. These climatic changes substantially influenced both animal and plant populations. Until recently, the prevailing opinion about the effect of these climatic fluctuations on species in Europe was that populations survived glacial maxima in southern refugia and that populations outside these refugia died out. However, some of the latest studies of modern population genetics, the fossil record and especially ancient DNA reveal a more complex picture. There is now strong evidence for additional local northern refugia for a large number of species, including both plants and animals. Furthermore, population genetic analyses using ancient DNA have shown that genetic diversity and its geographical structure changed more often and in more unpredictable ways during the Pleistocene than had been inferred. Taken together, the Pleistocene is now seen as an extremely dynamic era, with rapid and large climatic fluctuations and correspondingly variable ecology. These changes were accompanied by similarly fast and sometimes dramatic changes in population size and extensive gene flow mediated by population movements. Thus, the Pleistocene is an excellent model case for the effects of rapid climate change – such as we are experiencing at the moment – on the ecology of plants and animals.

Excerpt: Clearly, these massive climatic and environmental changes significantly influenced the distribution and genetic diversity of plants and animals. The idea that, during times of adverse climate, species track their habitat goes back to Darwin [9], and the Pleistocene should represent an excellent opportunity to test this assumption. Generally, one would assume that Arctic species would expand their distribution southwards during colder times and that temperate species would expand northwards during warmer times. While this is straightforward in North America, with mountain chains, which represent partial barriers to range shifts, running from north to south, in Europe a level of complexity is added with mountain chains running from east to west and the available land mass becoming smaller to the south and being divided into several peninsulas bordering the Mediterranean. This geography, together with numerous studies that found geographical patterns in the genetic diversity of many species consistent with colonization of mid-latitude and northern Europe from the Iberian Peninsula, Italy and the Balkans (for review, see [10,11]) has resulted in the classical ‘refugium theory’, which proposes that temperate species survived the glacial maxima in southern refugia with little gene flow among them and colonized the more northern parts from there during interglacial times. While this model is theoretically sound and correct in many aspects, recent studies on both modern and, especially, ancient DNA diversity have shown that reality is much more complex and only very broadly follows a contraction–expansion model for population dynamics, with many additional processes complicating the picture [12–16].

Finally, the end of the Pleistocene is marked by a massive extinction of large land vertebrates across most of the world (Box 1), with the exception of Africa [17]. Although these extinctions have long been known, their causes remain controversial. While some authors blame humans [18], others deny any human influence, at least on the continents, although human-induced extinctions are widely accepted for islands [19]. Again, recent research has revealed a great deal about the timing and processes of these extinctions, showing that not only mammoths [20,21], but also giant deer (misleadingly known as Irish elk) [22] and some Caribbean ground sloths [23], survived into the Holocene. However, when it comes to the cause(s) of these extinctions, the jury is still out.

Signature Pleistocene animals.

The Arctic fox (Alopex lagopus) is a small (smaller than the red fox) white or bluish-grey fox that today lives across the Arctic regions of the Holarctic, from Greenland and Iceland to northern North America and Eurasia. During the Pleistocene it had a much wider distribution, across the middle part of Europe and western Asia as well as in the large ice-free region of Beringia. It is primarily an inhabitant of the tundra and mountainous regions above the tree line, but it does penetrate into the taiga to some degree. Arctic foxes feed primarily on lemmings, but their diet also includes Arctic hare, eggs, and carrion scavenged from the leftovers of larger predators. A remarkable characteristic is their capability for long-distance dispersal, with movements of up to 2,000 km.

The brown bear (Ursus arctos) had, and still has, by far the largest range of all living bear species. Formerly, its habitat extended across North Africa, Europe, the northern and middle parts of Asia, and North America from Alaska down to Mexico. Due to intensive human persecution, it is now extinct in many of these areas, including North Africa, large parts of Europe and most of North America. Brown bears are very adaptable and can live on either a mostly herbivorous or a mostly carnivorous diet. They are highly variable in size and other morphological traits, which historically led to the description of numerous subspecies and even species. Today, all brown bears are considered a single species with a number of subspecies.

Cave bears (Ursus spelaeus) are the close — and less fortunate — cousins of the brown bear. The two species diverged some 1.6 million years ago, with tooth and stable isotope analyses indicating that cave bears were mostly herbivorous. However, recently a population was discovered that shows a stable isotope signature indicating an omnivorous, or even carnivorous, diet. Although in Europe cave bear remains are much more numerous than those of the brown bear, cave bears went extinct some 25,000 years ago. It has recently been shown that cave bears also occurred in Asia up to north-eastern Siberia.

Cave hyenas (Crocuta crocuta spelaea) are close relatives of the living spotted hyenas of Africa. In fact, in mitochondrial DNA sequence trees, sequences of cave and spotted hyenas are quite intermingled, calling into question any taxonomic distinction between them as subspecies or even as species. Judging by cave paintings, they were probably spotted like modern spotted hyenas in Africa. They lived in Eurasia throughout the Pleistocene and probably already during the late Pliocene, about 3 million years ago. The timing of their extinction is not well established, but it may have taken place around the same time as that of the cave bear, some 25,000 years ago.

The giant deer (Megaloceros giganteus), or Irish elk, is the gigantic relative of the rather gracile fallow deer. Giant deer are not only remarkable for their large body size but also for their huge antlers, which could span up to 3.5 meters. Giant deer are often seen as typical representatives of the Pleistocene, but recent research has shown that in the Urals, giant deer survived until at least 7,700 years ago, far into the Holocene.

The woolly mammoth (Mammuthus primigenius) is without doubt the most iconic of all extinct Pleistocene animals. However, the woolly mammoth is only the last representative of a long lineage that had its origin in Africa. The first European mammoths lived in southern Europe, and only later did mammoths colonize the Arctic regions. Woolly mammoths differ from their closest relatives, the living elephants, in many features, most conspicuously their curved tusks, long hair, small ears and short tails. Tens of thousands of mammoth bones have been recovered from the northern permafrost regions, and sometimes even complete frozen carcasses. Mammoths survived into the Holocene, with the last population disappearing from Wrangel Island only about 3,700 years ago.

The steppe bison (Bison priscus) must have been a very common species throughout the Arctic region, especially in Beringia, given the vast numbers of fossils that have been found. Steppe bison were very variable in their morphology, especially with regard to the size of their horns, which were much larger in some individuals than in modern bison. They went extinct in Eurasia, but genetic analyses have established that they were the ancestor of the modern American bison, Bison bison. Their relationship to the European bison, Bison bonasus, is not known.

In this review, we will discuss the dynamics of animal and plant populations during the Pleistocene, trying to outline how populations reacted to the rapid variations in climate. We will restrict our analyses to the northern hemisphere, as the majority of studies on Pleistocene DNA have been done on species from this region.

What kids see today: neoteny is rampant in American entertainment and education! Thank you, Hollywood, for making “creationism” look legitimate!