Casting doubt on the Obstetrical Dilemma / Head too big

Vital to the development of a “potential” human is the intricate relationship between mother and fetus. There is much more going on than mechanics, both in the individual and in human evolution. 


Casting Doubt on a Paradigm /

The energetics-of-gestation-and-growth hypothesis

On the work of Holly Dunsworth

The obstetrical hypothesis postulates that the demands of an unusual locomotor system (bipedalism) increase the risk and cost of the reproductive process. If this is the case, evolution would favor human birth at earlier stages of development than in other, non-bipedal primates, and mothers with wider hips would experience decreased motor efficiency. (Curious reasoning!)

The obstetrical hypothesis is neat and readily comprehended, which helps explain its widespread acceptance, but new evidence casts doubt on it. A recent paper by Holly Dunsworth of the University of Rhode Island and colleagues reexamines the predictions and evidence supporting the obstetrical hypothesis and suggests an alternative explanation. For instance, human gestation is often said to be short relative to that of other primates, based on how much more growth is needed in neonates (birth – 1 year old) to achieve adult brain size. At first glance, the shorter duration of gestation supports a prediction of the obstetrical hypothesis—that birth has evolved to occur earlier in hominids so that the baby is born before its head is too large to pass through the birth canal. Actually, the duration of human pregnancy (38–40 weeks) is absolutely longer than that of chimps, gorillas, and orangutans (32 weeks for chimps and 37–38 weeks for the latter two). When Dunsworth and her colleagues took maternal body size into account, which in primates is positively correlated with gestation length, they showed that human pregnancy is also relatively longer compared to that in great apes. (We are apes!) No wonder that the third trimester seems so long to many pregnant women.

Another oft-cited fact supporting the obstetrical hypothesis is that, of all the primates, human newborns have the least-developed brains. Human babies’ brains are only 30 percent of adult size, as opposed to 40 percent in chimps. This difference in newborn brain size seems to suggest that human babies are born at an earlier developmental stage than other primates.

The catch is that adult brain size in humans is much larger than in other primates for reasons having nothing to do with birth. This means that using adult brain size as a basis for comparing relative gestation length or newborn brain size among primates will make human newborns appear less developed than they really are. But as one of Dunsworth’s collaborators, Peter Ellison of Harvard University, pointed out in his 2001 book Fertile Ground, the relevant question is,

Given how large a mother’s body size is, how big a brain can she afford to grow in her baby? It is an issue of supply and demand. Labor occurs when the mother can no longer continue to supply the baby’s nutritional and metabolic demands.

As Ellison puts it, “Birth occurs when the fetus starts to starve.” From this perspective, the brain size of a human newborn is not small for a primate but is very large—one standard deviation above the mean. Body size in human newborns is also large relative to other primates when standardized for a mother’s body size. Both facts suggest that pregnancy may push human mothers to their metabolic limits.


My two cents: I think that the ‘missing’ factor is sexual selection that has been occurring since the advent of agriculture-urbanization: intense selection toward juvenilization has produced childlike / tame females who are fertile at a young age, but are under-equipped physically to support sufficient gestation and childbirth. Also, agricultural products are nutritionally deficient. This “food” problem bears on skeletal problems, metabolic problems, and a modern epidemic of premature birth.


The obstetrical hypothesis, in contrast, suggests that locomotion rather than metabolism is the limiting factor in birth size. The underlying concept here is that wider-hipped women—capable of giving birth to larger offspring—should suffer a disadvantage in locomotion. But detailed studies of the cost of running and walking—including new work by Dunsworth’s coauthors Anna G. Warrener of Harvard University and Herman Pontzer of Hunter College—do not support this idea. Men and women are extremely similar in the cost and efficiency of locomotion, regardless of hip width. Enlarging the birth canal to pass a baby with a brain 40 percent of adult size, as is typical of newborn chimps, would require an increase in diameter of only three centimeters—just over an inch—in the smallest dimension of the birth canal. This wouldn’t hinder locomotion significantly, given that many women already have such broad hips. The conflict between big-brained babies and upright walking may be more conceptual than real.

What Does a Baby Cost?

Although the findings showing that human babies are not born earlier than other primates are interesting, they still fail to identify what limits baby brain size. Dunsworth and her coauthors propose that the metabolic constraints faced by a mother limit the length of pregnancy and fetal growth. They have dubbed their hypothesis the energetics-of-gestation-and-growth hypothesis.

As the baby grows in both brain and body in the womb, its demand for energy accelerates exponentially. At some point, the mother reaches the limit of her ability to supply the fetus’s demands, and then labor begins. Even following birth, the big-brained, big-bodied newborn needs a loving mother who will continue to feed and care for it while its brain continues to grow at a fetal rate. In the womb, the fetus is basically part of the mother. Once born, the baby is effectively at a higher trophic level than its mother, like a parasite feeding on her, which increases the metabolic demands on her. However, the baby’s needs have shifted to include more long-chain fatty acids, which are key for brain growth. Since these are very efficiently transmitted to the baby through breast milk, rather than through the placenta, moving the baby outside the womb isn’t a problem. (Breast feeding, social rules be damned, is vital to newborns.)
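The crossover logic here can be put in arithmetic form. A minimal sketch, assuming fetal demand (in units of maternal BMR) grows exponentially and that a mother’s sustainable output tops out near 2.1 times her basal metabolic rate, the ballpark figure for sustained human metabolic scope; the growth parameters are hypothetical, chosen only so the crossing lands near term:

```python
import math

def predicted_birth_week(a=0.02, b=0.1, ceiling=2.1):
    """Week at which the mother's baseline (1.0 x BMR) plus fetal demand
    (a * e**(b * t), in units of maternal BMR) reaches her sustainable
    metabolic ceiling -- the point at which, under the energetics
    hypothesis, labor is triggered. Solves 1 + a*e**(b*t) = ceiling."""
    return math.log((ceiling - 1.0) / a) / b

# With these illustrative parameters the crossing lands near term:
print(round(predicted_birth_week(), 1))  # about 40.1 weeks
```

A higher metabolic ceiling or slower demand growth pushes the predicted birth later; the shape of the argument, not the particular numbers, is the point.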

The obstetrical hypothesis is not defunct; it is simply under question. But convincing those who were raised intellectually within this paradigm to even consider an alternative hypothesis can be challenging. When she gives a talk about the energetics hypothesis, Dunsworth summarizes a conversation that illustrates this challenge:

“What always comes next is, ‘then why doesn’t the pelvis get wider to make childbirth easier?’ And my answer is always, ‘Because it’s good enough. Witness over seven billion humans on the planet.’ But that doesn’t satisfy most people who are moved to ask the question in the first place. And when they argue ‘the tight fit at birth is too much of a coincidence to ignore,’ I ask, ‘Isn’t it just a coincidence that my finger fits perfectly into my nostril?’”

She’s right. Evolutionary adaptation doesn’t have to be perfect, just good enough. Perhaps the female pelvis adapted to fit the size of the human fetus’s brain, rather than the female pelvis’s limiting the baby’s brain size. Still, we are left with no clear reason why a baby is such a tight fit in the mother’s birth canal. Pelvic size may be limited by something not yet taken into account in locomotor studies, such as speed, balance, or risk of injury. Or, perhaps simple economy keeps pelvic size close to neonatal brain size. The third alternative is that human childbirth was not always difficult and has only become so as improvements in diet have increased newborn body size. (Or modern neotenic females are less robust than earlier females and less capable of carrying the fetus to complete gestation, but deliver increasingly premature infants and / or require caesarean intervention.) The obstetrical hypothesis and the energetics hypothesis are not mutually exclusive.

The evolutionary conflict that makes human birthing difficult may not be between walking or running and having babies, but between the fetus’s metabolic needs and the mother’s ability to meet them. Perhaps the problem isn’t only having—bearing—a big-brained baby. Perhaps the real problem is making one.


Does Self-Awareness Require a Complex Brain?

Aye, yai, yai. Here we go again…which definitions of consciousness and self-awareness are being discussed?

(SciAm article after sample definitions) 

From: Positive Psychology, “What is Self-Awareness and Why Does it Matter?” 

So What is Self-Awareness Exactly? / The psychological study of self-awareness can first be traced back to 1972, when psychologists Shelley Duval and Robert Wicklund developed the theory of self-awareness.

They proposed that: “when we focus our attention on ourselves, we evaluate and compare our current behavior to our internal standards and values. We become self-conscious as objective evaluators of ourselves.”

In essence, they consider self-awareness a major mechanism of self-control.

Sounds pretty good: a state of “owning” one’s thoughts and intentions, and the recognition that one’s behavior is often not congruent with these “values.” NOT the simple act of “mirror recognition,” which belongs to the brain’s “visual system.” 

Basic physical def: When you are awake and aware of your surroundings, that’s consciousness. (That jibes with mirror-recognition-type awareness as a property of an active sensory system.) 

The most influential modern physical theories of consciousness (there are supernatural theories, of course) are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman and Antonio Damasio, and by philosophers such as Daniel Dennett, seek to explain consciousness in terms of neural events occurring within the brain. (Consciousness – Wikipedia)

It’s impossible here to present the long-standing and ever-growing confusion over the modern “concepts” of consciousness. It’s a word that is used, for the most part, without any meaning whatsoever. Technology has also entered the arena. 

My own idea is this… What we commonly refer to as “being conscious” is a social interaction, an act of co-consciousness; the product of language: “In Western cultures verbal language is inseparable from the process of creating a conscious human being.” See previous post.



By Ferris Jabr on August 22, 2012

The computer, smartphone or other electronic device on which you are reading this article has a rudimentary brain—kind of.* (uh-oh. Pop-Sci) It has highly organized electrical circuits that store information and behave in specific, predictable ways, just like the interconnected cells in your brain. (No) On the most fundamental level, electrical circuits and neurons are made of the same stuff—atoms and their constituent elementary particles—but whereas the human brain is conscious, manmade gadgets do not know they exist. (WOW! NT nonsense!) Consciousness, most scientists argue, (made up assertion) is not a universal property of all matter in the universe. Rather, consciousness is restricted to a subset of animals with relatively complex brains. The more scientists study animal behavior and brain anatomy, however, the more universal consciousness seems to be. (Confused yet?) (Mirror awareness is a VISUAL phenomenon)

A brain as complex as the human brain is definitely not necessary for consciousness. (!!!)

On July 7 this year, a group of neuroscientists convening at Cambridge University signed a document officially declaring that non-human animals, “including all mammals and birds, and many other creatures, including octopuses” are conscious. (Well, that’s certainly proof that some poorly-defined experiential state in humans is a “thingy” also “in mammals and birds, and many other creatures, including octopuses” !!)

Humans are more than just conscious—they are also self-aware. Scientists differ on the difference between consciousness and self-awareness, (those imaginary Science Elves again, messing us up with “tricky” non specific definitions of “consciousness and self-awareness”) but here is one common explanation: Consciousness is awareness of one’s body and one’s environment; self-awareness is recognition of that consciousness—not only understanding that one exists, but further understanding that one is aware of one’s existence. Another way of thinking about it: To be conscious is to think; to be self-aware is to realize that you are a thinking being and to think about your thoughts. Presumably, human infants are conscious—they perceive and respond to people and things around them—but they are not yet self-aware. In their first years of life, infants develop a sense of self, learn to recognize themselves in the mirror (a phenomenon of the SENSORY SYSTEM) and to distinguish their own point of view from other people’s perspectives.

Notice how a lack of distinction / definition of terms leads to the inevitable “linear-causal-but-hierarchical” arrangement of notions assumed to be correct (that is, notions about how the brain works as an “isolated” command center), which are merely phrases strung together by “social habit.”

Numerous neuroimaging studies have suggested that thinking about ourselves, recognizing images of ourselves and reflecting on our thoughts and feelings—that is, different forms of self-awareness—all involve the cerebral cortex, the outermost, intricately wrinkled part of the brain. The fact that humans have a particularly large and wrinkly cerebral cortex relative to body size supposedly explains why we seem to be more self-aware than most other animals. (This pop-sci blah, blah is unforgivable in a “science” article.) 

One would expect, then, that a man missing huge portions of his cerebral cortex would lose at least some of his self-awareness. Patient R, also known as Roger, defies that expectation. Roger is a 57-year-old man who suffered extensive brain damage in 1980 after a severe bout of herpes simplex encephalitis—inflammation of the brain caused by the herpes virus. The disease destroyed most of Roger’s insular cortex, anterior cingulate cortex (ACC), and medial prefrontal cortex (mPFC), all brain regions thought to be essential for self-awareness. About 10 percent of his insula remains and only one percent of his ACC.

Note that “self-awareness” in this article is the “you are awake and aware of your surroundings” definition, and not the Duval and Wicklund definition.

Roger cannot remember much of what happened to him between 1970 and 1980 and he has great difficulty forming new memories. He cannot taste or smell either. But he still knows who he is—he has a sense of self. He recognizes himself in the mirror and in photographs. (This would indicate that his VISUAL system / memory is intact) To most people, Roger seems like a relatively typical man who does not act out of the ordinary. (That’s NTs for you; minimal evidence, inattentional blindness, social convention = “must be a normal person”) LOL

Carissa Philippi and David Rudrauf of the University of Iowa and their colleagues investigated the extent of Roger’s self-awareness in a series of tests. In a mirror recognition task, for example, a researcher pretended to brush something off of Roger’s nose with a tissue that concealed black eye shadow. Fifteen minutes later, the researcher asked Roger to look at himself in the mirror. Roger immediately rubbed away the black smudge on his nose and wondered aloud how it got there in the first place.

Philippi and Rudrauf also showed Roger photographs of himself, of people he knew and of strangers. He almost always recognized himself and never mistook someone else for himself, but he sometimes had difficulty recognizing a photo of his face when it appeared by itself on a black background, absent of hair and clothing. (Visual system)

Roger also distinguished the sensation of tickling himself from the feeling of someone else tickling him and consistently found the latter more stimulating. When one researcher asked for permission to tickle Roger’s armpits, he replied, “Got a towel?” As Philippi and Rudrauf note, Roger’s quick wit indicates that in addition to maintaining a sense of self, he adopts the perspective of others—a talent known as theory of mind. (Hmmm… a man without an insular cortex, anterior cingulate cortex (ACC), and medial prefrontal cortex is capable of “mind-reading” and subtle social thinking and interaction. BUT, ASD Asperger people who have these “parts” intact, are not capable of “mind-reading” and social communication) He anticipated that the researcher would notice his sweaty armpits and used humor to preempt any awkwardness.

Just where is the “mythic social brain” located? In a textbook perhaps?

In another task, Roger had to use a computer mouse to drag a blue box from the center of a computer screen towards a green box in one of the corners of the screen. In some cases, the program gave him complete control over the blue box; in other cases, the program restricted his control. Roger easily discriminated between sessions in which he had full control and times when some other force was at work. In other words, he understood when he was and was not responsible for certain actions. (Aye, yai, yai. What a “stretchy” conclusion!) The results appear online August 22 in PLOS One.

Given the evidence of Roger’s largely intact self-awareness (visual recognition) despite his ravaged brain, Philippi, Rudrauf and their colleagues argue that the insular cortex, anterior cingulate cortex (ACC), and medial prefrontal cortex (mPFC) cannot by themselves account for conscious recognition of oneself as a thinking being. (Well, congratulations!) Instead, they propose that self-awareness is a far more diffuse cognitive process, relying on many parts of the brain, including regions not located in the cerebral cortex. (Why no recognition of VISUAL processing??)

In their new study, Philippi and Rudrauf point to a fascinating review of children with hydranencephaly—a rare disorder in which fluid-filled sacs replace the brain’s cerebral hemispheres. Children with hydranencephaly are essentially missing every part of their brain except the brainstem and cerebellum and a few other structures. Holding a light near such a child’s head illuminates the skull like a jack-o’-lantern. Although many children with hydranencephaly appear relatively normal at birth, they often quickly develop growth problems, seizures and impaired vision. Most die within their first year of life. In some cases, however, children with hydranencephaly live for years or even decades. Such children lack a cerebral cortex—the part of the brain thought to be most important for consciousness and self-awareness—but, as the review paper makes clear, at least some hydranencephalic children give every appearance of genuine consciousness. They respond to people and things in their environment. When someone calls, they perk up. The children smile, laugh and cry. They know the difference between familiar people and strangers. They move themselves towards objects they desire. And they prefer some kinds of music over others. If some children with hydranencephaly are conscious, then the brain does not require an intact cerebral cortex to produce consciousness. (Which “consciousness” are we discussing?)

Hydranencephaly: “conscious” by the definition “awake and aware of its surroundings.” There seems to be a consistent error in equating this definition (which is true of any animal that is not “asleep, dormant, anesthetized, or comatose,” and includes automatic reflexes) with being aware that one is aware, that is, self-awareness. 

Whether such children are truly self-aware, however, is more difficult to answer, especially as they cannot communicate with language. In D. Alan Shewmon’s review, one child showed intense fascination with his reflection in a mirror (visual system), but it’s not clear whether he recognized his reflection as his own. Still, research on hydranencephaly and Roger’s case study indicate that self-awareness—this ostensibly sophisticated and unique cognitive process layered upon consciousness—might be more universal than we realized. (Totally ridiculous statement. Mixing simple visual recognition with the Duval and Wicklund definition. Still no clue as to what “consciousness” is.) 


Merker B (2007) Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. Behavioral and Brain Sciences 30: 63-81.

Philippi C., Feinstein J.S., Khalsa S.S., Damasio A., Tranel D., Landini G., Williford K., Rudrauf D. Preserved self-awareness following extensive bilateral brain damage to the insula, anterior cingulate, and medial prefrontal cortices. PLoS ONE, August 22, 2012.

Shewmon DA, Holmes GL, Byrne PA. Consciousness in congenitally decorticate children: developmental vegetative state as self-fulfilling prophecy. Dev Med Child Neurol. 1999 Jun;41(6):364-74.

Why Asperger Types Exist / Videos LOL

No, I’m not “diagnosing” these lecturers as Asperger, but the topics discussed are an important part of the Asperger “realm of” (supposedly) bizarre, annoying, antisocial and dangerous “obsessions.”

What would mankind do without these people?

Hey, Neurotypicals: GROW UP. It’s called SCIENCE.

Down and Dirty Primitive Hunting Technology / Videos

HUNGER: The prime motivator of human behavior and technology. Primitive tools compensate for “puny human” lack of claws, reduced olfactory sense, and other assets possessed by the competition: other hungry animals, including many much smaller than humans, had superior strength, speed, teeth for tearing meat or tough vegetation (hence our need for cooking), protective fur, athletic ability, specialized body parts, and instinctive tactics. Early humans HAD TO develop tools!

Our type of brain most likely developed as a “tool” that compensated for (and competed with) the “equipment” of other animals in particular environments. The brain as technology – think about it! LOL

Paper / Climate Effects on Birds and Mammals (That’s Us)

Despite persistent belief, both inside and outside the supposed “science / religion” boundary, that humans are “a special supernatural creation,” and therefore require magical and murky socio-supernatural explanations for our behavior, we are animals. Thanks to the work of “animal scientists” we do have access to REAL information about Homo sapiens: mammal, primate, ape. Via papers such as this, we can understand how physical parameters (not manmade social constructs) drive physiology and behavior in Homo sapiens, just as in any other mammal.   

Calculating Climate Effects on Birds and Mammals: Impacts on Biodiversity, Conservation, Population Parameters, and Global Community Structure

Integrative and Comparative Biology, Volume 40, Issue 4, 1 August 2000, Pages 597–630,


A brief history

Ever since the era of Charles Darwin, biologists have been intrigued by how and why animals live where they do and what it is about their properties that makes them appear where they do, and in the species associations that they form. Hutchinson (1959) defined the concept of the niche. MacArthur et al. (1966), Roughgarden (1974) and many others explored aspects of how size and habitat may influence community structure. Norris (1967) and Bartlett and Gates (1967) were the first to calculate explicitly how climate affects animal heat and mass balance and the consequences for body temperature in outdoor environments. The climate space concept emerged from steady state heat and mass balance calculations and was used to explore how climates might constrain animal survival outdoors (Porter and Gates, 1969).
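The steady-state balance behind the climate space concept can be sketched directly: at equilibrium, metabolic heat production plus absorbed radiation must equal heat lost to convection, thermal radiation, and evaporation. A minimal illustration in code (all parameter values below are assumptions for the sketch, not values from the paper):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def metabolic_heat_needed(t_surf_c, t_air_c, area=0.5, h_conv=10.0,
                          emissivity=0.95, q_solar=0.0, q_evap=5.0):
    """Steady-state heat balance in the spirit of Porter and Gates (1969):
    M + Q_solar = Q_convection + Q_thermal_radiation + Q_evaporation.
    Returns the metabolic heat production M (watts) needed to hold the
    animal's surface at t_surf_c in air at t_air_c, assuming the radiant
    environment is at air temperature. All parameter values illustrative."""
    t_surf = t_surf_c + 273.15
    t_air = t_air_c + 273.15
    q_conv = h_conv * area * (t_surf - t_air)                   # convective loss
    q_rad = emissivity * SIGMA * area * (t_surf**4 - t_air**4)  # radiative loss
    return q_conv + q_rad + q_evap - q_solar

cold = metabolic_heat_needed(35.0, -10.0)                       # cold air
mild = metabolic_heat_needed(35.0, 20.0)                        # mild air
sunny_cold = metabolic_heat_needed(35.0, -10.0, q_solar=100.0)  # cold + sun
```

Colder air demands more metabolic heat, and absorbed solar radiation offsets that demand watt for watt; mapping out which climate combinations an animal can survive is exactly what the climate space diagrams do.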

Those early animal models of the 1960s were limited by the lack of models for distributed heat generation internally, distributed evaporative water loss internally, and a first principles model of gut function. Batch reactor, plug flow and other models were already in existence in the chemical engineering literature (Bird et al., 1960) and it would take time for the biological community to rediscover them. Also missing were a first principles model of porous insulation for fur or feathers, an appendage model, and a general microclimate model that could use local macroclimate data to calculate the range of local microenvironments above and below ground. It became possible to estimate convection heat transfer properties knowing only the volume of an animal (Mitchell, 1976). Another useful development was the appearance of a countercurrent heat exchange model for appendages (Mitchell and Myers, 1968) and the measurement of heat transfer characteristics from animal appendage shapes (Wathen et al., 1971, 1974). It also became possible to deal with outdoor turbulence effects on convective heat transport (Kowalski and Mitchell, 1976). A general-purpose microclimate model emerged in the early 1970s (Beckman et al., 1971; Porter et al., 1973; Mitchell et al., 1975) that calculated above and below ground microclimates. The ability to deal with local environmental heterogeneity and calculate percent of thermally available habitat came later (Grant and Porter, 1992). Over time general-purpose conduction–radiation porous media models for fur appeared in the biological literature (Kowalski, 1978) and it became possible to refine and test them in a variety of habitats and on many species (Porter et al., 1994). 
The extension of the models to radial instead of Cartesian coordinates and the implementation of first principles fluid mechanics in the porous media (Stewart et al., 1993; Budaraju et al., 1994, 1997) added important new dimensions to the models, which could now calculate temperature and velocity profiles and therefore heat and mass transfer within the fur from basic principles. A test of the ectotherm and microclimate models to estimate a species’ survivorship, growth and reproduction at a continental scale appeared in the mid 1990s (Adolph and Porter, 1993, 1996).

Thanks to these developments and the ones reported in this paper, such as the temperature dependent behavior linked to the new thermoregulatory model, it is now possible to ask: “How does climate affect individual animals’ temperature dependent behavior and physiology and what role(s) does it play in population dynamics and community structure?” This paper attempts to address some of these questions.

We approach the problem from the perspective of a combination of heat and mass transfer engineering and specific aspects of morphology, physiology and temperature dependent behavior of individuals. We show how this interactive combination is essential to calculate preferred activity time that minimizes size specific heat/water stress.

Preferred activity time is a key link between individual energetics and population level variables of survivorship, growth and reproduction, since it impacts all three population variables. Both individual and population level effects may place constraints on community structure. At the individual level, climate at any given time and food type and quality affect the optimal body size that maximizes discretionary mass and energy, the resources needed for growth and reproduction. Climate also affects community structure by affecting individual survivorship directly (heat balance/metabolic costs) and indirectly (activity time overlap of predator and prey). Climate affects seasonal food availability, distribution of food in space and time, and the cost of foraging for that food at different times during a day. Survivorship is affected by temperature dependent behavior changes that allow animals to move to less costly microenvironments at any time. For small mammals, underground burrows or under-snow tunnels provide temperatures that never stay below 0°C, due to the local heating effect of the animal’s metabolic heat production.

At the population level, climate plays a very important role in population numbers. Each species interacts in its own way with climate, affecting its abundance and community structure. As Ives et al. (1999, p. 546) have pointed out

Our main result is that interspecific competition and species number have little influence on community-level variances; the variance in total community biomass depends only on how species respond to environmental fluctuations. This contrasts with arguments (Tilman and Downing, 1994; Lawton and Brown, 1993) that interspecific competition may decrease community-level variances by driving negative covariances between species abundances. We show that negative covariances are counteracted by increased species-level variances created by interspecific competition.

Consequently, assessing the effect of biodiversity on community variability should emphasize species-environment interactions and differences in species’ sensitivities to environmental fluctuations (for example, drought-tolerant species and phosphorus-limited species) (McNaughton, 1977, 1985; Frost et al., 1994). Competitive interactions are relatively unimportant except through their effects on mean abundances. We have focused on competitive communities, because much current experimental work has addressed competition among plants. Nonetheless, the same results can be shown to hold for more complex models with multiple trophic levels.

Exactly how climate variation, vegetation differences, animal morphology, and foraging behavior all interact to constrain multiple functional types’ existence as a community is still largely unknown. Very little is known about temperature dependent foraging in mammals, although this has been well studied in reptiles and insects. Quantitative consequences of functional morphology on encounter probability and food handling time also are relatively unexplored as yet in mammals.

Temporal climate variation in a locality creates the opportunity for multiple optimal body sizes over annual cycles. The spatial local variation in topography and vegetation creates multiple local climates. Thus temporal and spatial variation in climate creates opportunities for multiple functional types (sizes) to coexist as communities, because as we shall see below, different body sizes interact differently with climate. Qualitatively, this idea is not new. However, with likely major shifts in global climates and the rapid global changes in land use, there is an urgent need to move these qualitative ideas to a quantitative framework for the protection of biodiversity, conservation biology, and a number of other applications. We focus in this paper on applications to mammals and birds.

An overview of this paper

The structure of the paper begins with an overview of how macroclimate drives microclimates, which in turn impact individual animal properties. We then show how key individual properties determine population level parameters that can be used to calculate population dynamics variables. We then illustrate how individual properties also impact community structure, which in turn feeds back to temperature dependent animal properties of individuals.

The initial overview provides a context for an analysis of the model components and their interactions in hierarchical contexts. We start with the model components from the core to the skin, then from the skin through the insulation to the environment. We demonstrate how these components collectively can define the metabolic cost to mammals ranging in size from mice to elephants. We show how the empirical mouse-to-elephant metabolic regression line for animals of different sizes changes depending upon the animal’s climate and posture.

Then we explore how changing mammal body size affects discretionary energy across all climates. Once the mammal model is explored, we repeat the process for the bird model. We demonstrate how we can estimate metabolic cost across bird sizes ranging from hummingbirds to ostriches. We show how postural changes and air temperature can alter metabolic cost estimates for birds.

Once sensitivity analyses are completed, we explore how temporal and spatial variation in global climate impacts body-size-dependent discretionary energy (assuming no food limitation) and thereby places constraints on the potential combinations of body sizes (community structure) of mammals at the global scale.

Finally, we show how these models can be applied to estimate for the first time from basic principles the metabolic costs and food requirements of an endangered species of bird, the Orange-bellied Parrot of Tasmania and Australia. We show these results for body sizes ranging from hatchling to fully mature adult for a wide range of environmental conditions.


(go to original paper for text and figures; topics and some sample text follow) 

Survivorship (mortality) probability/hour

Growth and reproduction potential

Different sizes of animals

Model cross section

Inside the body

Heat generation models


Temperature regulation model

The gut

Temperature dependent feeding

Porous insulation

Fur vs. feathers

Finite elements and flow through the fur


Modeling an individual

Internal body temperature profiles

The insulation

Flow at very low wind

Scaling across mammal body sizes

Mouse to elephant metabolic rate

Mouse to elephant discretionary energy uptake

Diet effects on optimal body size

Bergmann’s Rule

These results are reminiscent of Bergmann’s rule, an empirical observation that as climates get colder, animal sizes tend to get larger. Increases in body size with decreasing temperature provide the greatest advantage at small sizes (Steudel et al., 1994). At larger body sizes, changes in fur insulation confer a greater advantage (Steudel et al., 1994). Experimental data from different types of fur on a flat plate (Scholander et al., 1950) suggested this, but animals of larger size also have thicker boundary layers. A thicker boundary layer reduces convective heat loss and simultaneously enhances radiation temperature effects (Porter and Gates, 1969). Larger animals are taller, which exposes them to greater wind speeds higher above the ground. Higher wind speed reduces boundary layer thickness and may engender greater wind penetration of the fur. A first-principles fur model can separate the boundary layer effects of size and wind from the effects of fur properties and provide better estimates of their combined effects.

Assessments of the consequences of Bergmann’s rule have pointed out that larger animals have the advantage of longer fasting ability under climate or food-availability stress (Morrison, 1960). However, smaller animals have the advantage of lowering body temperature and seeking much more favorable microclimates, especially underground habitats in severe cold. Careful transient modeling of these two strategies in the animals’ microclimates would yield a testable hypothesis of the relative benefits of these different solutions to the same problem of dealing with cold.

Of course, survival in extreme temperature events is also important in affecting community structure. However, extreme temperature survival may be overrated in terms of its effects on community structure, at least for mammals. Temperature-dependent behavior and selection of microhabitats by both small and large animals can greatly reduce cold or heat stress. For example, moving under or into trees and modifying the solar and infrared radiation and wind protection they provide can change equivalent local microenvironment temperatures by 20°C or more. Underground burrows or tunnels beneath the snow can provide habitats that typically do not drop below 0°C in winter when an animal is present, due to local heat from metabolism. Photoperiod-induced, temperature-dependent physiology, such as hibernation or estivation, is another way that mammals can persist in habitats during periods of extreme heat or cold stress and thereby maintain community structure. Birds typically migrate in winter from the extremely cold habitats they occupy in summer; migration is temperature-dependent behavioral selection of microclimates on a larger spatial scale, made possible by the short time and low cost of long-distance flight.

Scaling across bird body size

Hummingbird to ostrich metabolic rates—Air temperature effect

Global communities-climatic constraints

Figure 16 shows temporal and spatial variation in optimal body size based on discretionary mass/energy for mammals for the months of January and July on a global scale. In January (winter) in the Northern Hemisphere, the optimal sizes are larger as one moves north. Large topographic features, such as the Rocky Mountains, are also predicted to shift local optima toward larger animals. In the Southern Hemisphere, where it is summer, topographic features do not stand out as strongly.

In July (winter) in the Southern Hemisphere there is somewhat of a “mirror image” effect on optimal body size. However, different topographic and latitudinal features create somewhat different patterns. In general, though, the model suggests that larger animals have the advantage. In the Northern Hemisphere at the same time smaller animals should have the advantage. Large topographic features like the Tibetan plateau with its cool weather in summer still show up fairly clearly as affecting optimal body size. For clarity, variation in vegetation type and food quality were not included in these graphs.

The criteria for optimization were maximum discretionary energy uptake for a given temperature at all possible body sizes. This figure was generated from the endotherm model driven by global weather data at half-degree intervals in latitude and longitude.
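The optimization criterion lends itself to a simple illustration: for a given climate, scan candidate body sizes and keep the one with the largest energy surplus. Below is a minimal sketch with invented power-law stand-ins for the intake and cost functions (the actual model computes these mechanistically from heat and mass balance, not from these forms):

```python
import numpy as np

# Toy grid search for optimal body size: maximize discretionary energy,
# i.e., intake minus cost. Both functional forms below are hypothetical
# stand-ins, not the mechanistic endotherm model of the paper.

def intake(mass_kg):
    """Hypothetical gut-limited intake ceiling (arbitrary energy units)."""
    return 10.0 * mass_kg ** 0.6

def cost(mass_kg):
    """Hypothetical combined maintenance and foraging cost."""
    return 1.0 * mass_kg

def optimal_mass(masses=np.logspace(-2, 3, 2000)):
    """Return the candidate mass (kg) with the largest energy surplus."""
    surplus = intake(masses) - cost(masses)
    return float(masses[np.argmax(surplus)])

# Analytically, d/dm (10*m**0.6 - m) = 0 at m = 6**2.5, about 88 kg,
# so the grid search should land near that interior optimum.
print(round(optimal_mass(), 1))
```

In the paper’s framework the intake and cost curves shift with climate and with food digestive efficiency, so the location of this maximum moves with season and place, which is what Figure 16 maps.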

The map of optimal body size is different at different seasons of the year. This suggests that climate places important constraints on what functional types can coexist in a locality. Because the environment is constantly changing, it creates a constantly changing optimal body size in any locality. Changing environments create the opportunity for multiple functional types to coexist in the same area.

What remains unknown is over what time intervals natural selection integrates environmental conditions to “choose” body size. Figure 16 represents the beginnings of the effort to understand climatic constraints on community structure from basic principles. The vegetation on the landscape is certainly a very important variable that will modify the current version of the model. The spatial and temporal distribution of available food places important additional constraints on optimal body size. These constraints include encounter probabilities, handling time, food energy value, and metabolic cost to get to the food. Three of these variables are related to body size and the “packaging” and “distribution” of food on the landscape. It is clear that this construct can also be applied to species of birds to study migratory patterns and other aspects of bird ecology.

It is important to note, as one reviewer did, that “evolution may select less for optima under average daily climate cycles and more for adaptations that increase survivorship during winnowing events. At any given time a population may consist of individuals with below or above optimal body sizes, should recent history include high mortality linked to extreme climate, food availability, or predation.” These important considerations have not been added to these models yet.

Conservation application: The Orange-bellied Parrot, Neophema chrysogaster

Ontogeny of metabolic costs


Surrogates for size in modeling metabolism

Body weight is a surrogate for body radius, and posture is a surrogate for body geometry. Empirical metabolism data collected since the time of Benedict in the 1930s have related metabolic heat production to body mass. However, mass is only one of the variables that drive metabolic heat production. A key variable is the radius of the trunk of the animal, which is in turn a function of posture. Most analyses of metabolic scaling in the literature that we know of ignore this important aspect. Furthermore, the roles of a variety of environmental variables and of different types of porous insulation in modifying metabolic demand have not been predictable because of the lack of reliable quantitative models.
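The mouse-to-elephant regression referred to here is the classic Kleiber-type allometry, metabolic rate ≈ a·m^0.75. A minimal sketch using a commonly quoted rough fit (a ≈ 3.4 W per kg^0.75; the exact coefficient varies by dataset) shows the mass-only baseline that the mechanistic model refines with radius, posture, and climate:

```python
# Mass-only Kleiber-type baseline: BMR ~ a * m**0.75. The coefficient
# 3.4 W per kg**0.75 is a commonly quoted rough fit, used here for scale only.

def basal_metabolic_rate_w(mass_kg, a=3.4, b=0.75):
    """Basal metabolic rate (watts) estimated from body mass alone."""
    return a * mass_kg ** b

for name, m in [("mouse", 0.02), ("human", 70.0), ("elephant", 4000.0)]:
    bmr = basal_metabolic_rate_w(m)
    # Absolute rate rises with mass, but mass-specific rate falls sharply.
    print(f"{name:>8}: {bmr:8.1f} W total, {bmr / m:5.2f} W/kg")
```

The point of the critique is that two animals of equal mass but different trunk radius or posture sit at different distances from this single line, which a mass-only fit cannot capture.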

However, our new animal models and the microclimate model that links them to macroclimate data have changed the outlook for understanding the quantitative relationships of these variables. Fortunately, there have been some careful experiments on endotherm heat loss in wind tunnels with solar radiation. They make it possible to test these models in much more realistic settings than metabolic chambers (Bakken, 1991; Bakken and Lee, 1992; Bakken et al., 1991; Hayes and Gessaman, 1980, 1982; Rogowitz and Gessaman, 1990; Walsberg, 1988a, b, c; Walsberg and Wolf, 1995).

Climate/body size effects on biodiversity

Body size affects discretionary mass and energy intake. Growth and reproduction potential affects fitness. As Figures 11 through 15 demonstrate, body size has important impacts, through geometric form and radial dimensions, on energy expenditure and intake. The surrogate for these primary variables is body weight (mass). We have pointed out here how air and radiant temperature and posture can make important modifications in energy cost in different environments. These energy costs are not linear with body size. Heat transfer mechanisms are not all linear with body size, and neither are temperature regulation responses. Scaling of the gut is not linear with body size, either (Calder, 1984). The combination of these nonlinear functions results in calculations that suggest discontinuous optimal body size with temperature. This is consistent with empirical data (Brown et al., 1993; Brown and Maurer, 1987; Brown and Nicoletto, 1991; Holling, 1992; Maurer et al., 1992; Peterson et al., 1998). However, there is an important reanalysis questioning these empirical results (Siemann and Brown, 1999). Our climate/body size/gut modeling results suggest that whether or not animal sizes are clumped in nature may depend on the digestive efficiencies of foods consumed and the locations of those foods. High-quality foods suggest greater clumping; low-quality foods suggest very little in the way of body-size clumping (Fig. 13a–d).

Body size effects on cost of foraging: temperature dependent foraging/activity time

Body size has multiple effects on cost of foraging. It affects heat and mass balance (Figs. 12, 13, 15, and 16). Body size affects cost of locomotion, which is constrained by the respiratory and mitochondrial systems of animals, as Taylor and his colleagues have so eloquently demonstrated (Mathieu et al., 1981; Taylor et al., 1982; Weibel et al., 1991). Their studies interface very nicely with recent work on animal scaling (Enquist et al., 1998; West et al., 1997, 1999).

The work presented here explains that changes in boundary conditions, such as environmental constraints on heat and mass exchange, alter fluxes and therefore alter internal scaling requirements that must adapt to changing needs. Thus, we suggest that temperature dependent behavior may be an important response to environmental change that tends to keep the organism as close as possible to optimal function as dictated by its internal and external anatomy, thereby maximizing fitness.

Body size determines whether a species can be fossorial or not, which affects diurnal microclimates and heat and mass balances. Body size affects likelihood of predation, which can be cast as a cost of foraging (Brown et al., 1994). Body size affects competition, which alters temperature-dependent activity time, which also affects cost of foraging.

Body size effects on total annual activity time

Body size effects on total annual activity time are mediated through heat and mass exchange with the environment. The onset of heat or cold stress appears to be an important constraint in limiting activity. That is, temperatures that force skin temperatures below 3°C or conditions where evaporative water loss must be elevated to protect organism integrity are bounds on activity time that impact animal fitness.

The boundary layer thickness in the air next to the animal surface constrains mass and heat transfer from an animal. Boundary layer thickness is a function of the friction between the animal surface and the air. The amount of friction depends on the dimension of the animal, fluid and animal speed relative to each other, and fluid properties of density, viscosity and thermal conductivity. On the one hand small animals have thin boundary layers and are more responsive to convective environments than to radiant heat exchange (Porter and Gates, 1969). On the other hand, large animals have thicker boundary layers and are more sensitive to the diurnal changes in infrared radiation and solar radiation fluxes in the environment. For large animals, absorption of radiant energy is a much greater challenge, since cooling by convective heat transfer is diminished because of the thicker insulating boundary layer around the larger animal.
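The direction of these boundary layer effects can be sketched with the textbook laminar flat-plate estimate, δ ≈ 5x/√Re_x with Re_x = ux/ν. A real animal is not a flat plate, and at high Reynolds numbers (roughly above 5×10^5) the layer becomes turbulent, so this is only an order-of-magnitude illustration:

```python
import math

def boundary_layer_thickness_m(length_m, wind_m_s, nu=1.5e-5):
    """Laminar flat-plate estimate: delta = 5 x / sqrt(Re_x), where
    Re_x = u * x / nu and nu is the kinematic viscosity of air (m^2/s).
    Valid only for laminar flow (Re_x below roughly 5e5)."""
    re_x = wind_m_s * length_m / nu
    return 5.0 * length_m / math.sqrt(re_x)

mouse = boundary_layer_thickness_m(0.05, 1.0)    # ~5 cm characteristic length
elephant = boundary_layer_thickness_m(3.0, 1.0)  # ~3 m body, same light breeze
windy = boundary_layer_thickness_m(3.0, 2.0)     # same elephant, faster wind

# Thickness grows with body dimension and shrinks with wind speed.
print(f"mouse: {mouse * 1000:.1f} mm, elephant: {elephant * 1000:.1f} mm, "
      f"elephant in wind: {windy * 1000:.1f} mm")
```

Millimeters of still air around the mouse versus centimeters around the elephant is consistent with the text: small animals are dominated by convection, while large animals, wrapped in a thicker insulating air layer, are more sensitive to radiant fluxes.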

Body size affects competitive success, hence temperature-dependent behavior including habitat utilization, which impacts on total annual activity time.

Vegetation/body size effects on biodiversity

Vegetation modifies microclimate conditions available to animals in predictable ways. Animal body size determines where animals spend their time in the wind patterns near the ground. Figure 16 is based on empirical climate data. Those empirical data reflect how vegetation may modify local microclimates. Vegetation also affects animal energetics either by direct shading of the animals or by providing cool surfaces that radiate back to animals. Thus, by directly and indirectly affecting the animal heat fluxes, vegetation impacts optimal body size and constrains functional types that might coexist in a community.

The distribution and quality of food in space and time change in an annual cycle. Animal food encounter probabilities and food handling times are consequences of vegetation structure and type. The calculations used in Figure 16 do not yet incorporate various possible distributions of food of various types in the environment; diverse food distributions have not yet been explored using our models. Food encounter probabilities and handling times, which are a key part of food intake, are only beginning to be explored. The different food types, sizes, and spacings also place important constraints on the range of body sizes of animals that can efficiently utilize them.

Body size, cost of locomotion, and home range size are also interconnected. Home range size must be a function of body size, cost of locomotion, and the foraging thermal and vegetative environment. The minimum time and cost to forage for a particular type, distribution and size of food should be calculable for a broad range of body sizes and environments.

Feathers and plumage

When we watch the development of feathers through the ontogeny of a bird, it is apparent that the down structure is very much like the extremely dense fur of some mammals. Both types of fibers emerge from single openings in the skin as multiple fibers and then “fan out” in three dimensions as they grow. In so doing they extend the layer of still air above the skin (and in the insulation) substantially. The second stage of plumage development, the eruption of feathers that tend to seal off air flow even further from the skin, is unique in its efficiency of cross-linking elements that hold complex units together and seal out air flow. The only fur that seems even closely comparable is that of the snowshoe hare, which has fur tips that are flattened like tiny shovels (Porter, unpublished data). These structures probably assist in minimizing air and snow penetration into the coat.

The restriction of feather tracts to portions of a bird’s skin, which provides the flexibility to open up skin areas to much more rapid heat transfer, is also unique to birds. Some mammals, like polar bears, have inguinal regions that are highly vascularized and lightly furred. Polar bears sometimes apply them to the snow to dissipate heat, but mammals, unlike birds, have not evolved the ability to open large areas of nearly bare skin to dissipate or absorb heat.


1. Temporal and spatial variation in physical environments impose important constraints on functional types of animals that can coexist in biological communities. These constraints are further refined locally by food diversity representing different digestive qualities.

2. Morphology, physiology, and temperature-dependent activity in animals link individual energetics to population dynamics and community structure by specifying total annual activity time and mass/energy available for growth and reproduction.

3. Porous insulation in birds at rest can be modeled with current state-of-the-art fur models. Resting birds have feather positions that tend to seal off convective transport. This creates a conduction–radiation heat transfer environment. This is simpler to calculate than an environment where three heat transfer mechanisms are all important.

4. Posture plays an important role in metabolic heat loss. This is true mainly because posture affects the radial dimension of the animal, which is a key variable in the equation governing an animal’s total heat generation requirements. Posture is typically ignored in metabolic chamber metabolism studies. The model presented here allows the calculation of the upper and lower limits of metabolic expenditure for a wide variety of climatic conditions.

5. Animal geometry and posture, insulation properties, and environmental conditions influence “thermal conductance.” Thermal conductance is a term implying a passive transport of heat through a non-heat-generating medium. Thus, it is inappropriate for describing fluxes through flesh, where heat generation is occurring. It is also inappropriate in porous media that “act alive” by absorbing solar radiation in the insulation. Thermal conductance is affected by properties and boundary conditions that can have nonlinear effects on heat transport through the medium in question. It can be useful as a descriptive concept for heat source-free systems if all of the relevant boundary conditions and properties are specified.

6. The novel thermoregulatory model in conjunction with user specifications for diurnal/nocturnal/crepuscular activity allows for estimates of activity time that are in good agreement with published data.

7. Climate/body size/gut model calculations for different food types suggest that optimal body size (maximizing discretionary mass/energy) changes with different food types and their associated digestive efficiencies, and with temperature. This suggests that vegetation diversity in a locality allows specific multiple body sizes to coexist at the same point in time. As food quality declines from the high digestive efficiencies of flesh/seeds to the lower digestive efficiencies of grasses/leaves, optimal body size increases, lowest survival temperature rises, and the degree of clumping predicted for species in nature declines. Land use changes that tend toward monocultures would appear to dictate that fewer species would survive as vegetation diversity declines. Global warming trends would lead to smaller optimal body sizes with no change in vegetation. However, vegetation changes associated with climate warming would specify larger or smaller body sizes depending on whether vegetation digestive qualities decrease or increase, respectively.

8. Application of the microclimate and endotherm models to rare or endangered species requires relatively few, easily measured data to estimate food and water requirements, potential for activity time, growth, and reproduction for a wide variety of habits. This information will be useful as an aid for identification of potential reserves/transplantation sites and modification/management of existing habitats.


From the Symposium Evolutionary Origin of Feathers presented at the Annual Meeting of the Society for Integrative and Comparative Biology, 6–10 January 1999, at Denver, Colorado.


Widespread Bias in Large Genetic Studies / Implications for ASD / Asperger’s

Pleiotropy: This certainly has implications for the endlessly repeated assertion that heritable genetic pathologies account for symptoms that include everything from “being antisocial” to “being interested in subjects that bore neurotypicals” to female ASDs “preferring to wear clothing with lots of pockets”. It is acknowledged that ASD / Asperger’s individuals are a highly ‘heterogeneous’ bunch; no two are alike. Claims of “discovery” of scads of “autism-linked genes” are highly suspicious to begin with, and now comes this unsurprising report, in which “causal” links are over- and underestimated, or MISSED COMPLETELY.

Source of Potential Bias Widespread in Large Genetic Studies

A new statistical method finds that many genetic variants used to determine trait-disease relationships may have additional effects that GWAS analyses don’t pick up.

By Diana Kwon | May 15, 2018

Genome-wide association studies, which scan thousands of genetic variants to identify links to a specific trait, have recently provided epidemiologists with a rich source of data. By applying Mendelian randomization, a technique that leverages an individual’s unique genetic variation to recreate randomized experiments, researchers have been able to infer the causal effect of specific risk factors on health outcomes, such as the link between elevated blood pressure and heart disease. (And all those supposed “links” between ASD / Autism “genes” and a bizarre selection / collection of “manifestations” in ASD / Asperger behavior, brain function and even in apparel choices)

The Mendelian randomization technique has long operated on the key assumption that horizontal pleiotropy, a phenomenon in which a single gene contributes to a disease through more than one pathway, is not happening. However, a new study published last month (April 23) in Nature Genetics finds that when it comes to potentially causal trait-disease relationships identified from genome-wide association studies (GWAS), pleiotropy is widespread—and may bias findings.

The “no pleiotropy” assumption was reasonable when scientists were examining only a few genes and much more was known about their specific biological functions, says Jack Bowden, a biostatistician at the University of Bristol’s MRC Integrative Epidemiology Unit in the U.K., who was not involved in the study. Nowadays, GWAS, which include many more genetic variants, are often conducted with little understanding about the precise mechanisms through which each gene could act on physiological traits, he adds.

Although researchers have suspected that pleiotropy exists in a large number of Mendelian randomization studies using GWAS datasets, “no one has actually tested how much of a problem this was,” says study coauthor Ron Do, a geneticist at the Icahn School of Medicine at Mount Sinai.

To address this question, Do and his colleagues developed the so-called MR-PRESSO technique, an algorithm that identifies pleiotropy in Mendelian randomization analyses by searching for outliers in the relationship between the genetic variants’ effects on the trait of interest, say, blood pressure, and the same polymorphisms’ effects on the health outcome, such as heart disease. Outliers suggest that some genetic variants may not only be acting on the outcome through that particular trait—in other words, that pleiotropy exists. 
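The logic of this outlier search can be sketched in a few lines: regress the variants’ outcome effects on their exposure effects, then flag variants whose residuals stand far from the rest. This is a deliberately simplified stand-in (the published MR-PRESSO algorithm uses a simulation-based residual sum-of-squares test), and the data below are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated summary statistics for 20 variants: each variant's effect on the
# exposure (e.g., blood pressure) and on the outcome (e.g., heart disease),
# linked by a true causal slope of 0.5 plus small noise.
beta_exp = rng.uniform(0.05, 0.3, size=20)
beta_out = 0.5 * beta_exp + rng.normal(0.0, 0.005, size=20)
beta_out[7] += 0.15  # plant one pleiotropic variant acting via a second pathway

# Least-squares slope through the origin (an unweighted stand-in for the
# inverse-variance-weighted estimate used in practice).
slope = np.sum(beta_exp * beta_out) / np.sum(beta_exp ** 2)
residuals = beta_out - slope * beta_exp

# Flag variants whose residual lies beyond 3 robust (MAD-based) standard
# deviations of the residual distribution.
deviation = np.abs(residuals - np.median(residuals))
robust_sd = 1.4826 * np.median(deviation)
outliers = np.flatnonzero(deviation > 3 * robust_sd)
print("flagged variant indices:", outliers.tolist())
```

Removing a flagged variant and refitting the slope is the analogue of the correction described in the BMI and C-reactive protein example later in the article.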

The team used this method to test all possible trait-disease combinations generated from 82 publicly available GWAS datasets and found that pleiotropy was present in approximately 48 percent of the 191 statistically significant causal relationships they identified. (Yes, statistics are only as good as the quality of the “thinking” of the people manipulating the process) 

When the researchers compared the Mendelian randomization results before and after correcting for pleiotropy, they discovered that pleiotropy could lead to drastic over- or underestimations of the magnitude of a trait’s influence on a disease. (And ASD / Autism is NOT A DISEASE; it’s a collection of symptoms – which have multiple sources including WESTERN socio-cultural prejudice) Approximately 10 percent of the causal associations they found were significantly distorted, and by as much as 200 percent.

For example, the team identified an outlier variant in one of the significant causal relationships they found using Mendelian randomization—a link between body mass index (BMI) and levels of C-reactive protein, a marker for inflammation and heart disease. Further examination revealed that this variant, found in a gene encoding apolipoprotein E—a protein involved in metabolism—was associated with several traits and diseases, including BMI, C-reactive protein, cholesterol levels, and Alzheimer’s disease. After removing this outlier, the effect of BMI on C-reactive protein dropped by 12 percent, still statistically significant, but obviously to a lesser degree.

“There is growing awareness that there’s widespread pleiotropy in the human genome in general, and I think these findings suggest that there needs to be rigorous analysis and careful interpretation of causal relationships when performing Mendelian randomization,” (One would have thought that this was the conservative baseline in “science-based” research) Do says. “I think what’s going to have the biggest impact is not just saying whether causal relationships exist, but actually showing that the magnitude of the causal relationship can be distorted due to pleiotropy.”

Bowden notes that the presence of pleiotropy does not mean that Mendelian randomization is necessarily a flawed technique. “Many research groups around the world are currently developing novel statistical approaches that can detect and adjust for pleiotropy, enabling you to reliably test whether a [gene] has a causal effect on an outcome,” he tells The Scientist. For example, he and his colleagues at the University of Bristol recently reported another method to identify and correct for pleiotropy in large-scale Mendelian randomization analyses. (Are these “novel statistical approaches” proven to correct a problem that has much to do with the “reductive mindset” of those who place prime value on “any positive results” for their research agenda, above scientific discipline?)

“I hope that this paper will raise people’s attention to the potential problems in the assumptions behind [these studies],” says Wei Pan, a biostatistician at the University of Minnesota who was not involved in this work. “Large genetic datasets give researchers the opportunity to use a method like this to move the field forward, and as long as they use the method carefully, they can reach meaningful conclusions.” (Is this true, or social blah, blah?)

M. Verbanck et al., “Detection of widespread horizontal pleiotropy in causal relationships inferred from Mendelian randomization between complex traits and diseases,” Nature Genetics, doi:10.1038/s41588-018-0099-7, 2018.


 A chicken with the frizzle gene
© 2004 Richard Blatchford, Dept. of Animal Science UC Davis. All rights reserved.


The term pleiotropy is derived from the Greek words pleio, which means “many,” and tropic, which means “affecting.” Genes that affect multiple, apparently unrelated, phenotypes are thus called pleiotropic genes. Pleiotropy should not be confused with polygenic traits, in which multiple genes converge to result in a single phenotype.

Examples of Pleiotropy

In some instances of pleiotropy, the influence of the single gene may be direct. For example, if a mouse is born blind due to any number of single-gene traits (Chang et al., 2002), it is not surprising that this mouse would also do poorly in visual learning tasks. In other instances, however, a single gene might be involved in multiple pathways. For instance, consider the amino acid tyrosine. This substance is needed for general protein synthesis, and it is also a precursor for several neurotransmitters (e.g., dopamine, norepinephrine), the hormone thyroxine, and the pigment melanin. Thus, mutations in any one of the genes that affect tyrosine synthesis or metabolism may affect multiple body systems. These and other instances in which a single gene affects multiple systems and therefore has widespread phenotypic effects are referred to as indirect or secondary pleiotropy (Grüneberg, 1938; Hodgkin, 1998).

Other examples of both direct and indirect pleiotropy are described in the sections that follow.
Chickens and the Frizzle Trait

In 1936, researchers Walter Landauer and Elizabeth Upham observed that chickens that expressed the dominant frizzle gene produced feathers that curled outward rather than lying flat against their bodies (Figure 2). However, this was not the only phenotypic effect of this gene — along with producing defective feathers, the frizzle gene caused the fowl to have abnormal body temperatures, higher metabolic and blood flow rates, and greater digestive capacity. Furthermore, chickens who had this allele also laid fewer eggs than their wild-type counterparts, further highlighting the pleiotropic nature of the frizzle gene.

See article for Pigmentation and Deafness in Cats, and Antagonistic Pleiotropy and much much more on genetics….

Human Pleiotropy

As touched upon earlier in this article, there are many examples of pleiotropic genes in humans, some of which are associated with disease. For instance, Marfan syndrome is a disorder in humans in which one gene is responsible for a constellation of symptoms, including thinness, joint hypermobility, limb elongation, lens dislocation, and increased susceptibility to heart disease. Similarly, mutations in the gene that codes for transcription factor TBX5 cause the cardiac and limb defects of Holt-Oram syndrome, while mutation of the gene that codes for DNA damage repair protein NBS1 leads to microcephaly, immunodeficiency, and cancer predisposition in Nijmegen breakage syndrome.

One of the most widely cited examples of pleiotropy in humans is phenylketonuria (PKU). This disorder is caused by a deficiency of the enzyme phenylalanine hydroxylase, which is necessary to convert the essential amino acid phenylalanine to tyrosine. A defect in the single gene that codes for this enzyme therefore results in the multiple phenotypes associated with PKU, including mental retardation, eczema, and pigment defects that make affected individuals lighter skinned (Paul, 2000).

The phenotypic effects that single genes may impose in multiple systems often give us insight into the biological function of specific genes. Pleiotropic genes can also provide us valuable information regarding the evolution of different genes and gene families, as genes are “co-opted” for new purposes beyond what is believed to be their original function (Hodgkin, 1998). Quite simply, pleiotropy reflects the fact that most proteins have multiple roles in distinct cell types; thus, any genetic change that alters gene expression or function can potentially have wide-ranging effects in a variety of tissues.

It is somewhat ironic that large genetic studies REMOVE PLEIOTROPY, a “fact” of human genetics that may provide real progress in finding genetic links to physical conditions that are at present lumped together under a phony “autistic pathology” that is based in the “social brain” of neurotypicals – and not in scientific reality.


Sensory Biology Around the Animal Kingdom

“From detecting gravity and the Earth’s magnetic field to feeling heat and the movement of water around them, animals can do more than just see, smell, touch, taste, and hear.” 
In fact, even the most ancient sensory systems are astounding.

By The Scientist Staff | September 1, 2016

Growing up, we learn that there are five senses: sight, smell, touch, taste, and hearing. For the past five years, The Scientist has taken deep dives into each of those senses, explorations that revealed diverse mechanisms of perception and the impressive range of these senses in humans and diverse other animals. But as any biologist knows, there are more than just five senses, and it’s difficult to put a number on how many others there are. Humans’ vestibular sense, for example, detects gravity and balance through special organs in the bony labyrinth of the inner ear. Receptors in our muscles and joints inform our sense of body position. (See “Proprioception: The Sense Within.”) And around the animal kingdom, numerous other sense organs aid the perception of their worlds.

Detecting Gravity and Motion

A BALANCING ACT: Ctenophore statocysts (1) consist of a statolith composed of lithocyte cells and four compound cilia, called balancers, that serve as the statolith’s legs. As the animal tilts in the water, the statolith falls to the side, bending the balancers and triggering a mechanical signal to adjust the frequency of ciliary beating along the ctenophore’s eight comb plates. Other invertebrates have a more complex statocyst, in which a sphere of sensory hair cells detects the movement of a statolith floating within it (2). When the statolith falls against a hair cell, it triggers an electrical impulse that sends the information to the animal’s central nervous system. © LAURIE O’KEEFE

The ability to detect gravity and the body’s motion may be one of the most ancient senses. In vertebrates, the complex vestibular system handles this task via the otolith organs and semicircular canals of the inner ear. Invertebrates rely on a simpler structure known as a statocyst to sense their own movement and body position relative to the Earth’s gravitational pull. Even comb jellies (ctenophores), which may have been the first multicellular animals to evolve, have a rudimentary statocyst—essentially, a weight resting on four springs that bend when the organism tilts in the water.

The comb jelly’s single statocyst sits at the animal’s uppermost tip, under a transparent dome of fused cilia. A mass of cells called lithocytes, each containing a large, membrane-bound concretion of minerals, forms a statolith, which sits atop four columns called balancers, each made up of 150–200 sensory cilia. As the organism tilts, the statolith falls towards the Earth’s core, bending the balancers. Each balancer is linked to two rows of the ctenophore’s eight comb plates, from which extend hundreds of thousands of cilia that beat together as a unit to propel the animal. As the balancers bend, they adjust the frequency of ciliary beating in their associated comb plates. “They’re the pacemakers for the beating of the locomotor cilia,” says Sidney Tamm, a researcher at the Marine Biological Laboratory in Woods Hole, Massachusetts, who has detailed the structure and function of the ctenophore statocyst (Biol Bull, 227:7-18, 2014; Biol Bull, 229:173-84, 2015).

Sensing gravity’s pull and the subsequent ciliary response is entirely mechanical, Tamm notes—no nerves are involved in ctenophore statocyst function. Most other animals with statocyst sensing, on the other hand, do employ a nervous system. Statocysts exist in diverse invertebrate species, from flatworms to bivalves to cephalopods. Although the details of the statocyst’s architecture vary greatly across these different groups, it is generally a balloon-shaped structure with a statolith in the center and sensory hair cells around the perimeter. As the statolith, which can be cell-based as in the ctenophore or a noncellular mineralized mass, falls against one side of the sac, it triggers those hair cells to initiate a nervous impulse that travels to the brain.

The complexity of the statocyst system appears to correlate with the complexity of a species’ movement and behavior, says Heike Neumeister, a researcher at the City University of New York. Squids and octopuses, which move rapidly around in three-dimensional space, for example, have highly adapted equilibrium receptor organs. Likewise, the nautilus, whose relatives were among the first animals to leave the bottom of the ocean and begin swimming and employing buoyancy, has a fairly advanced system. Each of its two statocysts is able to detect not only gravity, like the ctenophore’s, but angular accelerations as well, like those of octopuses, squids, and cuttlefishes (Phil Trans R Soc Lond B, 352:1565-88, 1997). “[Nautilus] statocysts are an intermediate state of evolution between simpler mollusks and modern cephalopods,” says Neumeister.

These sensory systems may be damaged by the man-made noise now resonating throughout the world’s oceans. Michel André, a bioacoustics researcher at the Polytechnic University of Catalonia in Barcelona, Spain, started looking into the effects of noise pollution on cephalopods after the number of giant squid washing ashore along the west coast of Spain shot up in 2001 and then again in 2003. “The postmortem analysis couldn’t reveal the causes of the death,” recalls André. Nearby, however, researchers were conducting ocean seismic surveys, using pulses of high-intensity, low-frequency sound to map the ocean floor. Although these animals don’t have ears, André and others wondered if that noise might be affecting the squids’ sense of balance.

Sure enough, exposing squid, octopuses, and cuttlefish to low-frequency sound, which caused the animals’ whole bodies to vibrate, universally resulted in damage to their statocysts. Hair cells were ruptured or missing; the statocysts themselves sometimes had lesions or holes; even the associated nerve fibers suffered damage. As a result, the animals became disoriented, often floating to the water’s surface (Front Ecol Environ, doi:10.1890/100124, 2011). “They eventually died because they were not eating,” says André. “I don’t think that [anyone thought] that animals who could not hear would be suffering from acoustic trauma. . . . This is something we have to be concerned about.”

—Jef Akst

Feeling the Flow


Light, sound, and odors travel through water very differently than they do in air. Accordingly, aquatic animals have sensory systems tuned to their fluid medium—most notably, the lateral line system. Observable as distinct pores that run along the flanks and dot the heads of more than 30,000 fish species, the lateral line is composed of mechanoreceptors called neuromasts—clusters of hair cells not unlike those found in the mammalian ear and vestibular system—that relay information about the velocity and acceleration of water flow.

“If you live underwater, the water is often moving with respect to your body, and it’s carrying the environment with it,” says University of California, Irvine, biologist Matt McHenry, who studies the lateral line sense in fish. “To have some sense of where it’s going and how fast it’s going seems pretty fundamental. It makes a lot of sense that they would be tuned in to flow.” Despite more than 100 years of research on the lateral line, however, many questions remain about its structure and function, how the sense relays information to the nervous system, and how it affects fish behavior.

NEUROMASTS: Clusters of hair cells project from the surface of a larval zebrafish’s skin to sense the water’s movement. (Cupula has been removed.) JURGEN BERGER/MAX PLANCK INSTITUTE FOR DEVELOPMENTAL BIOLOGY

Transparent zebrafish larvae, whose surface lateral line structures can be observed without the need for dissection, are starting to yield answers. Using a high-power microscope, Jimmy Liao of the University of Florida’s Whitney Laboratory for Marine Science and his colleagues attach tiny glass probes to individual neuromasts and stimulate the mechanoreceptors with controlled vibrations. “We’re able to tickle an individual neuromast and record from the neuron that innervates that specific cluster,” he says. With this system, Liao’s team has found that a neuromast’s response to different water velocities depends on its position in space (J Neurophysiol, 112:1329-39, 2014). “If you bend [a neuromast] halfway and then give it a velocity, that’s very different than just giving it the velocity in its normal configuration,” Liao says.

Liao and his collaborators have also determined that the sensors stimulate sensory neurons in a nonlinear fashion—that is, with increasing velocity, the nervous response only increases up to a certain point, then levels off (J Neurophysiol, 113:657-68, 2015). And the researchers have traced the nervous connections from neuromasts found on the flank of a fish’s body to specific locations within the posterior lateral line ganglion, a group of nerve cells outside the brain. Tail neuromasts are connected to afferent neurons found in the center of the ganglion, Liao says, while neuromasts closer to the head contact neurons on its periphery.

When it comes to the specific role of lateral line sensing in fish behavior, however, the research is still somewhat murky. “We have a very crude understanding for what behaviors depend on this sense,” says McHenry. “At a receptor level, I think we have a pretty good handle for what kind of information they’re extracting, but in real-world applications it’s not clear why that’s useful a lot of the time.”

HAIRLINE: Modified epithelial cells called hair cells—similar to those in the mammalian inner ear—are the workhorses of the lateral line in fishes. Hair cells connect to afferent neurons and are grouped together into structures called neuromasts, whose hairs are covered by a jelly-like secretion called the cupula. When moving water or vibrations trigger neuromasts, which sit inside pores on the head, body, and tail of the fish, hair cells stimulate neurons to relay information about velocity or acceleration to sensory ganglia distributed through the fish’s body. © LAURIE O’KEEFE

One challenge is isolating sensory information detected by the lateral line from information detected by other fish senses, specifically vision, says Sheryl Coombs, an emeritus professor at Bowling Green State University who has spent decades studying the links between the lateral line sense and fish behavior. “Most behaviors rely on animals integrating information across the senses,” she says. “It’s difficult sometimes to pick apart the role of the lateral line because the senses act together in complementary ways, often.”

To get around this problem, Coombs has studied nocturnal fish and species that live in complete darkness, such as the Mexican blind cave fish (Astyanax mexicanus), which often lacks eyes altogether. In this species, Coombs has found that the fish may use their lateral line sense to construct rudimentary maps of their surroundings. “They’re basically ‘listening’—for lack of a better word—to their own flow field that they create by moving through the water,” she says. “They create the flow, and then they’re listening to distortions in that flow created by the presence of the obstacle. It’s sort of analogous to echolocation in the sense that animals are producing a sound and they’re listening to how the sound bounces back.”

—Bob Grant


Mollusks, insects, birds, and some mammals are able to sense Earth’s magnetic field, but how they do so remains a mystery. In the last couple of decades, “most of the research [has focused] on proteins and genetics in the various animals, speculating on possible means of magnetoreception,” says Roswitha Wiltschko, who—along with her husband, Wolfgang Wiltschko—ran a magnetoreception lab at Goethe University Frankfurt, Germany, until she retired in 2012.

Although the details are still unclear, most magnetoreception researchers have converged upon two key mechanisms: one based on magnetite, an iron oxide found in magnetotactic bacteria, mollusk teeth, and bird beaks; and the other on cryptochromes, blue-light photoreceptors first identified in Arabidopsis that are known to mediate a variety of light-related responses in plants and animals.


In 2001, Michael Winklhofer, then at Ludwig Maximilian University of Munich, and colleagues reported their identification of magnetite in the beaks of homing pigeons (Eur J Mineral, 13:659-69). A year earlier, Klaus Schulten of the University of Illinois at Urbana-Champaign and colleagues proposed that cryptochromes in the bird eye might also play a role in avian magnetoreception (Biophys J, 78:707-18, 2000). Specifically, the authors suggested that photoactivated cryptochromes form a pair of charged radicals, which are thought to affect a bird’s sensitivity to light. Schulten and his colleagues speculated that Earth’s magnetic fields could somehow affect these cryptochrome reactions in a way that would alter the bird’s visual system, providing information about its orientation. (See “A Sense of Mystery,” The Scientist, August 2013.)


Over the years, support for this idea has emerged. In 2007, Henrik Mouritsen of the University of Oldenburg, Germany, and colleagues showed that blue light–exposed avian cryptochrome 1a indeed forms long-lived radical pairs (PLOS ONE, 2:e1106). And this April, Peter Hore of the University of Oxford and colleagues published a computer-based modeling study showing that light-dependent chemical reactions in cryptochrome proteins in the eyes of migratory birds could “account for the high precision with which birds are able to detect the direction of the Earth’s magnetic field,” the authors wrote (PNAS, 113:4634-39, 2016).

MAGNETO: There are two prominent ideas for how some animals are able to detect the Earth’s magnetic fields. Diverse species have magnetite-containing cells, which are thought to be innervated and contain surface ion channels that are physically pulled open as magnetic fields tug on magnetite tethered to the cell membrane or the channel itself (top). Birds and possibly other animals also appear to “see” magnetic fields through the visual system. Proteins called cryptochromes that reside in the eye’s cones can form a pair of radicals (unpaired electrons), whose spin is affected by magnetic fields and may affect the animal’s sensitivity to light (above). © LAURIE O’KEEFE

Birds seem to use both the magnetite and the radical pair/cryptochrome–based mechanisms. Cryptochrome-based orientation has also been reported in Drosophila and cockroaches, and researchers have found evidence of magnetite-based navigation in animals from mollusks to honeybees. And there may be other components of magnetoreception still to discover, as scientists continue their search for magnetic sensory structures across the animal kingdom. Late last year, for example, biophysicist Can Xie of Peking University in Beijing and colleagues identified a Drosophila protein, dubbed MagR, that—when bound to photosensitive Cry—has a permanent magnetic moment, the researchers reported, meaning it spontaneously aligns with magnetic fields (Nat Mater, 15:217-26, 2015). The MagR/Cry complex, the researchers noted, exhibits properties of both magnetite-based and photochemical magnetoreception. (See “Biological Compass,” The Scientist, November 2015). The study was met with skepticism, however, and the results have yet to be independently verified.

In addition to mechanism, questions remain about the function of magnetoreceptive capabilities. “Once we have found [magnetoreception structures] reliably, we can start trying to understand how they convert the magnetic field into a neural response, and at the brain level, how are the single responses processed and integrated with other navigational information to tell the animal where it is and where to go,” says Winklhofer.

In the mid-1990s, for example, Wiltschko and her husband Wolfgang demonstrated that migratory birds called silvereyes (Zosterops lateralis) reacted to a strong magnetic pulse by shifting their orientations 90° clockwise, returning to their original headings around a week later (Experientia, 50:697-700, 1994). Magnetic field manipulations can also affect Drosophila navigation, John Phillips, now of Virginia Tech, has shown (J Comp Physiol A, 172:303-08, 1993). And Richard Holland, now of Bangor University, U.K., and colleagues showed in the mid-2000s that experimentally shifting the Earth’s magnetic field altered homing behavior in Eptesicus fuscus bats (Nature, 444:702, 2006).

“Some animals use their magnetic sense for long-distance navigation, some for magnetic alignment or orientation, and some animals may have the capability to sense the magnetic field but do nothing,” says Xie. Or, at least, nothing that has yet been recognized by researchers.

—Tracy Vence


HEAT SENSE: Pit organs consist of a large, hollow, air-filled outer chamber and a smaller inner chamber separated by a membrane embedded with heat-sensitive receptors. The receptors are innervated by the trigeminal ganglia (TG), which transmit the infrared signals to the brain. © LAURIE O’KEEFE

Many animals are able to sense heat in the environment, but vampire bats and several types of snakes are the only vertebrates known to have highly specialized systems for doing so. Humans and other mammals sense external temperature with heat-sensitive nerve fibers, but pit vipers, boa constrictors, and pythons have evolved organs in their faces that the animals use to detect infrared (IR) energy emitted by prey and to select ecological niches. And vampire bats have IR receptors on their noses that let them home in on the most blood-laden veins in their prey.

“Infrared sense is basically a souped-up [version] of thermoreception in humans,” says David Julius, a professor and chair of the physiology department at the University of California, San Francisco (UCSF), who studies this sense in snakes. The difference is, snakes and vampire bats “have a very specialized anatomical apparatus to measure heat,” he says.

These IR-sensing apparatuses, known as pit organs, have evolved at least twice in the snake world—once in the ancient family that includes pythons and boas (family Boidae) and once in the pit vipers (subfamily Crotalinae), which includes rattlesnakes. Pythons and boas have three or more simple pits between scales on their upper and sometimes lower lips; each pit consists of a membrane that is lined with heat-sensitive receptors innervated by the trigeminal nerve. Pit vipers, by contrast, typically have one large, deep pit on either side of their heads, and the structure is more complex, lined with a richly vascularized membrane covering an air-filled chamber that directs heat onto the IR-sensitive tissue. This geometry maximizes heat absorption, Julius notes, and also ensures efficient cooling of the pit, which reduces thermal afterimages.

In 2010, Julius and Elena Gracheva, now at Yale University, identified the heat-sensitive ion channel TRPA1 (transient receptor potential cation channel A1) that triggers the trigeminal nerve signal in both groups of snakes (Nature, 464:1006-11). The same channels in humans are activated by chemical irritants such as mustard oil or by acid, and the resulting signal is similar to those produced by wounds on the skin, Gracheva says. In snakes, these channels have mutated to become sensitive to heat as well.

Vampire bats—which, true to their name, feed on the blood of other creatures—are the only mammals known to have a highly developed infrared sense. Like snakes, the bats have an innervated epithelial pit, which is located in a membrane on the bats’ noses. In 2011, Julius, Gracheva, and their colleagues identified the key heat-sensitive ion channel in vampire bats as TRPV1 (Nature, 476:88-91). In humans, this channel is normally triggered by temperatures above 43 °C, but in the bats, it is activated at 30 °C, the researchers found.

More than 30 years ago, biologists Peter Hartline, now of New England Biolabs in Ipswich, Massachusetts, and Eric Newman, now at the University of Minnesota, found that information from the snake pit organ activates a brain region called the optic tectum (known in mammals as the superior colliculus), which is known to process visual input (Science, 213:789-91, 1981). The pit organ appears to act like a pinhole camera for infrared light, producing an IR image, Newman says. However, it’s impossible to know whether snakes actually “see” in infrared.

“Unfortunately we don’t have a sensory map [of the brain] in snakes or vampire bats,” Gracheva agrees. “I don’t think we have enough data to say [these animals] can superimpose a sensory picture onto the visual picture, though it definitely would make sense.”

—Tanya Lewis


Ampullae of Lorenzini openings in great white shark skin © GARY BELL/OCEANWIDEIMAGES

Sharks and other fish are well known for their ability to detect electric fields, with some species able to sense fields as weak as a few nanovolts per centimeter—several million times more sensitive than humans. But it turns out that they aren’t the only ones. In recent years, evidence for electroreception has been accumulating all over the animal kingdom: in monotremes (such as the platypus), crayfish, dolphins, and, most recently, bees.

“The number of taxa that are now effectively known to detect weak electric fields is increasing,” says Shaun Collin of the University of Western Australia, “although some of these we don’t know very much about yet, and for some we only have evidence of a behavioral response.”

ELECTRIC SLIDE: Sharks and other cartilaginous fish have highly specialized electroreceptive organs called the ampullae of Lorenzini. These bundles of sensory cells, situated at the end of jelly-filled pores in the skin, detect electric fields in the water surrounding the fish and send signals to the brain. © LAURIE O’KEEFE

First formally described in the middle of the last century in weakly electric fish (J Exp Biol, 35:451-86, 1958), electroreception operates most effectively over less than half a meter in water—a more conductive medium than air. The sense is most frequently employed by aquatic or semi-aquatic animals to find prey in environments where other senses are less reliable—in murky or turbid water, for example, or where food can bury itself in sediment. Such “electrolocation” is usually passive, relying on bioelectric fields generated by the nerves and muscles of other animals, but some species, such as knifefish, measure distortions in electric fields that they themselves generate.

Researchers have also documented other functions of electroreception. “Especially in the stingray family, it is used in social communication,” says Collin. “The opposite sex can use it to assess whether there’s a potential for mating, and discriminate that opportunity from something that could turn into predation.” And some baby sharks appear to use electroreception for predator aversion. According to research by Collin’s group, electric fields trigger a “freeze” response in bamboo sharks while they’re still in egg sacs (PLOS ONE, doi:10.1371/journal.pone.0052551, 2013).

Electroreception is thought to be an ancestral trait among vertebrates that has subsequently been lost from several lineages (including the amniotes—the group comprising reptiles, birds, and mammals), and then re-evolved independently at least twice in teleost fish and once in monotremes. In 2011, researchers added cetaceans to that list, after discovering electroreception in the Guiana dolphin, a resident of murky coastal waters around South America that evolved its electroreceptors from what used to be whiskers (Proc R Soc B, doi:10.1098/rspb.2011.1127).

Most electroreceptors consist of modified hair cells with voltage-sensitive protein channels, arranged in bundles that activate nerves leading to the brain. “The classic example is the ampullae of Lorenzini,” says Collin. Described in 1678 by Italian anatomist Stefano Lorenzini, ampullae are extensions of the lateral line system that are present in dense clusters over the heads of cartilaginous fish such as sharks and rays. Each ampulla consists of a bundle of electrosensory cells at the end of a pore filled with a hydrogel that was recently shown to have the highest reported proton conductivity of any known biological material (Sci Advances, 2:e1600112, 2016).

But pinning down how any of these receptors operate at a molecular level remains a challenge, notes Clare Baker, a neuroscientist at the University of Cambridge. “We hardly know anything about the specific genes involved, or the genetic basis for building electroreceptors in the embryo,” she says, adding that the major animal models in fish and amphibians—zebrafish Danio rerio and frog genus Xenopus—both belong to lineages that have lost electroreception altogether.

Baker’s group has adopted the paddlefish, a relative of the sturgeon, as a model organism. Electrosensitivity in these animals, as in other primitive vertebrates such as the axolotl, depends on modified hair cells that develop as part of the ancestral lateral line system and are homologous to the ampullary organs of sharks. Fate-mapping experiments in these species have identified candidate genes for electroreceptor development (Evol Dev, 14:277-85, 2012), and Baker says future work will use gene-editing technologies such as CRISPR-Cas9 to get a better grip on these genes’ functions.

Meanwhile, the field is continuing to uncover surprises. In 2013, research from Daniel Robert’s group at the University of Bristol showed that bumblebees are capable of detecting the weak electric fields generated by flowers, and use this information to discriminate between food sources of differing quality (Science, 340:66-69). And earlier this year, the same researchers identified bees’ electrosensors as tiny hairs that move in the presence of electric fields (PNAS, 113:7261-65, 2016). “Electroreception provides another source of information,” says Robert, who suspects that a flower’s electric field may indicate to bees when nectar and pollen are available. “They’re really good at learning where the resources are.”

For Collin, the Bristol team’s findings are indicative of how much more there is still to discover about electroreception. Even in large clades such as reptiles and birds, “there is circumstantial evidence that they might have electroreception, but there hasn’t been anything concrete,” he says. “There may well still be examples of functions we don’t even know about.”

—Catherine Offord


Video Lecture / Time, the brain and visual processing – wild reality

As an Asperger (?) or visual thinker, my attention to time is highly variable; when I am concentrating on a “visual” object or scene, time does not seem to exist. Time “markers” (these really are social in origin) such as calendars, schedules, appointments, and fixed places and dates in time are irritating interruptions to this highly pleasant lack of “feeling” for time. When these social “markers” are inevitable, as many are, I don’t feel well; anxiety may accompany the commitment to “be there,” “show up,” “put in an appearance.” This “regulation of time” by social entities feels alien.

My experience of the natural environment is fluid, determined by sensory “cues”: light, the motion of the atmosphere, color changes, sounds that merge and pass smoothly. The “human environment” is, by contrast, incoherent: abrupt interruptions of sound, artificial light, space confined by walls and obstacles, jagged stop-and-go movement, no “time” to “enjoy” the senses. No peace.

In short, when in a natural environment I am within the “time sense” of that environment; sensory embeddedness might be a description. In a human environment (except in those few highly aesthetically conscious spaces), the sensory input is simply “all wrong.”

Thermodynamic Function of Life / Darwinian Theory Questioned

K. Michaelian, Instituto de Física, Universidad Nacional Autónoma de México, Cto. de la Investigación Científica, Ciudad Universitaria, Mexico D.F., C.P. 04510

Equations that govern physical reality.


Darwin suggested that life was at the mercy of the forces of Nature and would necessarily adapt through natural selection to the demands of the external environment. However, it has since become apparent that life plays a pivotal role in altering its physical environment (Lovelock, 1988), and what once appeared to be biotic evolution in response to abiotic pressure is now seen as coevolution of the biotic together with the abiotic toward greater levels of complexity, stability, and entropy production (Ulanowicz and Hannon, 1987). Such an understanding, difficult to reconcile with traditional Darwinian theory, fits perfectly well within the framework of non-equilibrium thermodynamics, in which dissipative processes spontaneously arise and coevolve in such a manner as to increase the entropy production of the system plus its environment (Prigogine, 1972; Ulanowicz and Hannon, 1987; Swenson, 1989; Kleidon and Lorenz, 2005; Michaelian, 2005; Michaelian, 2009a).

Life is found everywhere on Earth. On the surface, the components of greatest biomass are the archaea, prokaryote, and eukaryote life based on photosynthesis. In the sea, photosynthetic phytoplankton (archaea, diatoms, cyanobacteria, and dinoflagellates) can be found in great density (up to 10⁹/ml at the surface) in the euphotic zone, which extends to a depth of 50 meters. Almost all photosynthesis ends at the bottom of the epipelagic zone, at about 200 m. Approaching these depths, special pigments are needed to utilize the faint blue light that is all that can penetrate. On land, diatoms, cyanobacteria, and plants, which evolved from ocean cyanobacteria some 470 million years ago (Wellman and Gray, 2000; Raven and Edwards, 2001), cover almost every available area, becoming sparse only where conditions are extremely harsh, particularly where liquid water is scarce. Photosynthesizing cyanobacteria have been found thriving in hot springs at over 70 °C (Whitton and Potts, 2000) and on mountain glaciers and Antarctic ice (Parker et al., 1982), where absorption of solar radiation and its dissipation into heat by organic and lithogenic material produces the vital liquid water, even deep within the ice (Priscu et al., 2005).

The thermodynamic driving force for the process of photosynthesis that sustains surface life derives from the low entropy of sunlight and the second law of thermodynamics. Only twenty-seven years after Darwin’s publication of the theory of evolution through natural selection, Boltzmann (1886) wrote: “The general struggle for existence of animate beings is therefore not a struggle for raw materials – nor for energy, which exists in plenty in any body in the form of heat – but a struggle for entropy, which becomes available through the transition of energy from the hot sun to the cold earth.”


In photosynthesis, high-energy photons in the visible region of the Sun’s spectrum are converted by the chloroplasts into low-energy photons in the infrared region. Part of the free energy made available in the process is utilized to maintain and propagate life. In this manner, photosynthetic life obtains its sustenance through the conversion of the low entropy of sunlight into the higher entropy of heat, and thereby contributes to the positive entropy production of the Earth as a whole.
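The entropy bookkeeping in this passage can be illustrated with a back-of-the-envelope calculation. The numbers below (a 550 nm visible photon, a solar surface temperature of roughly 5760 K, an Earth surface temperature of roughly 290 K) are assumed round values for illustration, not figures from the text:

```python
# Sketch: entropy gain when one visible photon's energy, delivered at the
# Sun's temperature, is re-radiated as heat at Earth's surface temperature.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

wavelength = 550e-9          # m, a typical visible (green) photon -- assumed
E = h * c / wavelength       # photon energy, ~3.6e-19 J

T_sun, T_earth = 5760.0, 290.0   # K, assumed round temperatures
# Entropy delivered with the energy at T_sun vs. entropy released at T_earth
dS = E / T_earth - E / T_sun     # J/K, net entropy production (positive)

print(f"photon energy: {E:.2e} J")
print(f"entropy produced per photon: {dS:.2e} J/K")
```

Because the Sun is roughly twenty times hotter than the Earth's surface, the same quantum of energy carries far more entropy when it leaves as terrestrial heat than when it arrived as sunlight; this is the sense in which sunlight is a source of low entropy.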

However, the proportion of the Sun’s light spectrum utilized in photosynthesis is small, and thus the entropy-producing potential of photosynthesis is small. Gates (1980) has estimated that the percentage of available (free) energy in solar radiation that shows up in the net primary production of the biosphere is less than 0.1%. Respiration consumes a similarly small quantity (Gates, 1980). Of all the irreversible processes performed by living organisms, the process generating by far the greatest amount of entropy (consuming the greatest amount of free energy) is the absorption of sunlight by organic molecules in the presence of water, leading to evapotranspiration. Great quantities of water are absorbed by the root systems of plants, brought upward to the leaves, and then evaporated into the atmosphere. More than 90% of the free energy available in the sunlight captured by the leaves of plants is used in transpiration. In the oceans, phytoplankton within the euphotic zone absorb sunlight and transform it into heat that can be efficiently absorbed by the water. The temperature of the ocean surface is thereby raised by phytoplankton (Kahru et al., 1993), leading to increased evaporation and thereby promoting the water cycle.
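Gates’s budget figures quoted above can be turned into a rough per-square-meter sketch. The full-sun irradiance, the 90% transpiration fraction, and the latent-heat constant below are assumed round numbers for illustration only:

```python
# Rough leaf energy budget under the assumptions stated in the lead-in.
irradiance = 1000.0          # W/m^2, full-sun irradiance (assumed round value)
frac_transpiration = 0.90    # fraction of captured free energy driving evaporation
L_vap = 2.45e6               # J/kg, latent heat of vaporization of water

power_to_evap = irradiance * frac_transpiration   # W/m^2 spent evaporating water
evap_rate = power_to_evap / L_vap                 # kg of water per m^2 per second
print(f"evaporation: {evap_rate * 3600:.2f} kg/m^2 per hour")

# By contrast, less than 0.1% of the free energy shows up as net primary production:
npp_power = irradiance * 0.001
print(f"net primary production: < {npp_power:.0f} W/m^2")
```

On these assumptions, a fully sunlit canopy would evaporate on the order of a liter of water per square meter per hour, dwarfing the roughly 1 W/m² that ends up as net primary production.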
There appears to be no important physiological need for the vast amount of transpiration carried out by land plants. It is known that only 3% of the water transpired by plants is used in photosynthesis and metabolism. In fact, most plants can grow normally under laboratory conditions of 100% humidity, at which the vapor pressure in the stoma of the leaves must be less than or equal to that of the atmosphere, and therefore transpiration is necessarily zero (Hernández Candia, 2009). Transpiration has often been considered an unfortunate by-product of the process of photosynthesis, in which water is unavoidably given off through the stoma of plants, which are open in order to exchange CO2 and O2 with the atmosphere (Gates, 1980). Plants consist of up to 90% water by mass and thus appear to expose themselves to a great risk of drying out by transpiring so much water. Others have argued that transpiration is useful to plants in that it helps to cool their leaves to a temperature optimal for photosynthesis. Such an explanation, however, is not convincing, since Nature has produced examples of efficient photosynthesis at temperatures of up to 70 °C (Whitton and Potts, 2000). In any case, there exist other, simpler and less free-energy-demanding strategies to reduce leaf temperature, such as smaller or less photo-absorbent leaves. On the contrary, the evolutionary record indicates that plants and phytoplankton have evolved new pigments to absorb the Sun’s spectrum ever more completely. Dense pine forests appear black in the midday sun. Most plants appear green not so much for lack of absorption at these wavelengths as for the fact that the spectral response of human eyes peaks precisely at these wavelengths (Chang, 2000).

Transpiration is in fact extremely free-energy intensive and, according to Darwinian theory, such a process, with little direct utility to the plant, should have been eliminated or suppressed through natural selection. Plants that are able to take in CO2 while reducing water loss, either by opening their stoma only at night (CAM photosynthesis) or by reducing photorespiration (C4 photosynthesis, see below), did indeed evolve, roughly 32 and 9 million years ago respectively (Osborne and Freckleton, 2009). However, water-conserving photosynthesis has not displaced the older, heavily transpiring C3 photosynthesis, which still accounts for 95% of the biomass of Earth. Instead, new ecological niches in water-scarce areas have opened up for the CAM and C4 plants, for example, the cacti of deserts.

All irreversible processes, including living systems, arise and persist to produce entropy. This is not incidental, but rather a fundamental principle of Nature. Excessive transpiration has not been eliminated from plants, despite the extraordinary free energy costs, precisely because the basic thermodynamic function of a plant is to increase the global entropy production of the Earth and this is achieved by dissipating high energy photons in the presence of water and thereby augmenting the global water cycle.

The Water Cycle

Absorption of sunlight in the leaves of plants may increase their temperature by as much as 20 °C over that of the ambient air (Gates, 1980). This leads to an increase of the H2O vapor pressure inside the cavities of the leaf with respect to that of the colder surrounding air. H2O vapor diffuses across this gradient of chemical potential from the wet mesophyll cell walls (containing the chloroplasts), through the intercellular cavities, and finally through the stoma and into the external atmosphere. There is also a parallel, but less efficient, circuit for diffusion of H2O vapor in leaves through the cuticle, providing up to 10% more transpiration (Gates, 1980). The H2O chemical potential of the air at the leaf surface itself depends on the ambient relative humidity and temperature, and thus on such factors as the local wind speed and insolation. Diffusion of H2O vapor into the atmosphere causes a drop in the water potential inside the leaf, which provides the force to draw up new water from the root system of the plant.

Evaporation from moist turf (dense cut grass) can reach 80% of that of a natural water surface such as a lake (Gates, 1980), while that of a tropical forest can often surpass that of such a water surface by 200% (Michaelian, 2009b). Single trees in the Amazon rain forest have been measured to evaporate as much as 1180 liters/day (Wullschleger et al., 1998). This is principally due to the much larger surface area for evaporation that a tree offers with all of its leaves. Natural water surfaces, in turn, evaporate at approximately 130% of the rate of distilled water surfaces, due to the increased UV and visible photon absorption at the surface as a result of phytoplankton and other suspended organic materials, including a large component (up to 10⁹/ml at the surface) of viral and dissolved DNA resulting from viral lysing of bacteria (Wommack and Colwell, 2000).

The water vapor transpired by the leaves, or evaporated by the phytoplankton, rises in the atmosphere (water vapor, at 0.804 g/l, is less dense than dry air at 1.27 g/l) to a height corresponding to a temperature of about 259 K (−14 °C) (Newell et al., 1974), at which it condenses around suspended microscopic particles, forming clouds. Over oceans, an important constituent of these microscopic particles acting as seeds of condensation is the sulfate aerosols produced by the oxidation of dimethylsulfide released by the phytoplankton themselves (Charlson et al., 1987). Condensation of the water releases its latent heat of condensation (2.27 × 10⁶ J/kg) into the upper atmosphere, much of which is then radiated into outer space at infrared wavelengths. In this manner, the Earth maintains its energy balance with space: the total energy incident on the biosphere in the form of sunlight is approximately equal to the total energy radiated by the biosphere into space at infrared wavelengths. Energy is conserved while the entropy of the Universe is augmented in the process.
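The scale of this latent-heat transport can be illustrated with a rough calculation combining the standard latent heat of condensation of water with the single-tree transpiration figure cited above (a sketch only; per-tree transpiration varies widely):

```python
# Latent heat carried aloft per day by the water a single large Amazon
# tree transpires (1180 liters/day, as cited above; density ~1 kg/liter).
L_COND = 2.27e6          # latent heat of condensation of water, J/kg
water_per_day_kg = 1180  # ~1180 liters/day ≈ 1180 kg/day

energy_per_day = L_COND * water_per_day_kg   # J released on condensation
mean_power = energy_per_day / 86_400         # W, averaged over 24 h

print(f"Energy moved aloft: {energy_per_day:.2e} J/day")
print(f"Average power:      {mean_power / 1000:.1f} kW")
```

Averaged over a day, one large transpiring tree thus moves heat upward at a rate on the order of tens of kilowatts, heat that is released high in the atmosphere when the vapor condenses.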

The formation of clouds may at first consideration seem to have a detrimental effect on the water cycle since cloud cover on Earth reflects approximately 20% of light in the visible region of the Sun’s spectrum (Pidwirny and Budicova, 2008), thereby reducing the potential for evaporation. However, evapotranspiration is a strong function of the local relative humidity of the air around the leaves of plants or above the surface of the oceans. By producing regions of local cooling during the day on the Earth’s surface, clouds are able to maintain the average wind speed at the Earth’s surface within dense vegetation (see for example, Speck (2003)) at values above the threshold of 0.25 m/s required to make the boundary-layer resistance to water loss almost negligible in a plant leaf, thus procuring maximal transpiration (Gates, 1980).
Sublimation and ablation of ice over the polar regions, promoted in part by photon absorption of cyanobacteria within the ice, is also important to the water cycle, evaporating up to 30 cm of ice per year (Priscu et al., 2005).

Production of Entropy

The driving force of all irreversible processes, including the water cycle, is the production of entropy. The basic entropy producing process occurring on Earth is the absorption and dissipation of high energy photons to low energy photons, facilitated in part by the plants and cyanobacteria in the presence of water.

Much math / physics / chemistry / geology / planetary geology skipped: go to original. 

The Importance of Life to the Water Cycle

The very existence of liquid water on Earth can be attributed to the existence of life. Through mechanisms related to the regulation of atmospheric carbon dioxide first espoused in the Gaia hypothesis (Lovelock, 1988), life is able to maintain the temperature of the Earth within the narrow region required for liquid water, even though the amount of radiation from the Sun has increased by about 25% since the beginnings of life (Newman and Rood, 1977; Gough, 1981). Physical mechanisms exist that dissociate water into its hydrogen and oxygen components, for example through photo-dissociation of water by ultraviolet light (Chang, 2000). Photo-dissociation of methane has been suggested as a more important path to losing the hydrogen necessary for water (Catling et al., 2001). Free hydrogen, being very light, can escape Earth’s gravity and drift into space, dragged along by the solar wind. This loss of hydrogen would have led to a gradual depletion of the Earth’s water (Lovelock, 2005). However, photosynthetic life sequesters oxygen from carbon dioxide, thereby providing the potential for its recombination with the free hydrogen to produce water. For example, hydrogen sulfide is oxidized by aerobic chemoautotrophic bacteria, giving water as a waste product (Lovelock, 1988). Oxygen released by photosynthetic life also forms ozone in the upper atmosphere, which protects water vapor and methane in the lower atmosphere from ultraviolet photo-dissociation. In this manner, the amount of water on Earth has been kept relatively constant since the beginnings of life.

It has been estimated that about 496,000 km3 of water is evaporated yearly, with 425,000 km3 (86%) of this from the ocean surface and the remaining 71,000 km3 (14%) from the land (Hubbart and Pidwirny, 2007). Evaporation rates depend on numerous physical factors such as insolation, absorption properties of air and water, temperature, relative humidity, and local wind speed. Most of these factors are non-linearly coupled. For example, local variations in sea surface temperature, due to differential photon absorption rates caused by clouds or local phytoplankton blooms, lead to local wind currents. Global winds are driven by latitude variation of the solar irradiance and absorption, and the rotation of the Earth. Relative humidity is a function of temperature but also of the quantity of microscopic particles available as seeds of condensation (a significant fraction of which is supplied by biology (Lovelock, 1988)).
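As a quick consistency check on the Hubbart and Pidwirny (2007) figures quoted above:

```python
# Global evaporation partition (Hubbart and Pidwirny, 2007), km^3 per year.
total, ocean, land = 496_000, 425_000, 71_000

assert ocean + land == total                    # the partition is exhaustive
print(f"ocean fraction: {ocean / total:.0%}")   # → 86%
print(f"land fraction:  {land / total:.0%}")    # → 14%
```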

The couplings of the different factors affecting the water cycle imply that quantifying the effect of biology on the cycle is difficult. However, simulations using climate models taking into account the important physical factors have been used to estimate the importance of vegetation on land to evapotranspiration. Kleidon (2008) has shown that without plants, average evaporation rates on land would decrease from their actual average values of 2.4 mm/d to 1.4 mm/d, suggesting that plants may be responsible for as much as 42% of the actual evaporation over land. There appears to be little recognition in the literature of the importance of cyanobacteria and other organic matter floating at the ocean surface to evaporation rates. Irrespective of other factors such as wind speed and humidity, evaporation rates should be at least related to the energy deposited in the sea surface layer. A calculation can therefore be made of the effect of biology on the evaporation rates over oceans and lakes.
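The 42% figure attributed to Kleidon (2008) above follows directly from the simulated evaporation rates:

```python
# Average land evaporation with and without plants (Kleidon, 2008), mm/day.
with_plants, without_plants = 2.4, 1.4

# Share of actual land evaporation attributable to plants.
plant_share = (with_plants - without_plants) / with_plants
print(f"{plant_share:.0%}")   # → 42%
```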
Before attempting such a calculation, it is relevant to review the biological nature of the air/sea surface interface, and energy transfer within this layer, based on knowledge that has emerged over the last decade. This skin surface layer of roughly 1 mm thickness has its own particular ecosystem of high density in organic material (up to 10⁴ times the density in the water slightly below (Grammatika and Zimmerman, 2001)), due to the scavenging action of air bubbles rising from breaking waves, surface tension, and natural buoyancy (Grammatika and Zimmerman, 2001). The organic material consists of cyanobacteria, diatoms, viruses, free-floating RNA/DNA, and other living and non-living organic material such as chlorophyll and other pigments. Most of the heat exchange between the ocean and atmosphere occurs within this upper 1 mm of ocean water; for example, most of the infrared radiation emitted by the sea comes from the upper 100 µm (Schlussel, 1999). About 52% of the heat transfer from this ocean layer to the atmosphere is in the form of latent heat (evaporation), radiated longwave radiation accounts for 33%, and sensible heat through direct conduction accounts for the remaining 15%.

Science, calculations, tables skipped; go to original.

Some theories have the origin of life dissipating other sources of free energy, such as the chemical energy released from hydrothermal vents at deep ocean trenches. Whether life originated to dissipate the free energy in sunlight or the free energy made available through chemical transformations, the quantity of life at hydrothermal vents today corresponds to a very minute portion of all life on Earth, implying that its contribution to the actual entropy production of the Earth can be considered negligible. The rich ecosystems existing at these vents are, in fact, not completely autonomous, but depend on the dissolved oxygen and nutrients of photosynthetic life living closer to the surface. An Earth without photosynthetic life would thus correspond to one in a wholly different class of thermodynamic stationary states, probably one with little involvement of a water cycle.

Evidence for Evolutionary Increases in the Water Cycle

Plants, far from eliminating transpiration as a wasteful use of free energy, have in fact evolved over time ever more efficient water transport and transpiration systems (Sperry, 2003). There is a general trend in evolution, and in ecosystem succession over shorter times, toward ever increasing transpiration rates. For example, conifer forests are more efficient at transpiration than deciduous forests, principally because of the greater surface area offered for evaporation by needles as compared to leaves. Conifers appeared later in the fossil record (late Carboniferous) and appear in the late successional stage of ecosystems. Root systems are also much more extended in late evolutionary and successional species, allowing them to access water at ever greater depths (Raven and Edwards, 2001). New pigments besides chlorophyll have appeared in the evolutionary history of plants and cyanobacteria, covering an ever greater portion of the intense region of the solar spectrum, even though these have little or no effect on photosynthesis: for example, the carotenoids in plants, or the MAAs found in phytoplankton, which absorb across the UVB and UVA regions (310–400 nm) (Whitehead and Hedges, 2002). This is particularly notable in red algae, for example, whose total absorption spectrum has little correspondence with its photosynthetic activation spectrum (Berkaloff et al., 1971).

There exist complex mechanisms in plants to dissipate photons directly into heat, bypassing photosynthesis completely. These mechanisms involve inducing particular electronic de-excitations using dedicated enzymes and proteins, and come in a number of distinct classes. Constitutive mechanisms allow for intersystem crossing of the excited chlorophyll molecule into long-lived triplet states, which are subsequently quenched by energy transfer to the carotenoids. Inducible mechanisms can be regulated by the plant itself; for example, changing lumen pH causes the production of special enzymes that permit the non-photochemical de-excitation of chlorophyll. Sustained mechanisms are similar to inducible mechanisms but have been adapted to long-term environmental stress. For example, over-wintering evergreen leaves produce little photosynthesis due to the extreme cold but continue transpiring by absorbing photons and degrading them to heat through non-photochemical de-excitation of chlorophyll. Hitherto, these mechanisms have been considered “safety valves” for photosynthesis, protecting the photosynthetic apparatus against light-induced damage (Niyogi, 2000). However, their existence and evolution can be better understood in a thermodynamic context as augmenting the entropy-production potential of the plant through increased transpiration.

The recent findings of mycosporine-like amino acids (MAAs) produced by plants and phytoplankton, having strong absorption properties in the UVB and UVA regions, follow their discovery in fungi (Leach, 1965). They are small (< 400 Da), water-soluble compounds composed of aminocyclohexenone or aminocycloheximine rings with nitrogen or imino alcohol substituents (Carreto et al., 1990), which display strong UV absorption maxima between 310 and 360 nm and high molar extinction (Whitehead and Hedges, 2002). These molecules have been assigned a UV photoprotective role in these organisms, but this appears dubious, since more than 20 MAAs have been found in the same organism, each with a different but overlapping absorption spectrum determined by the particular molecular side chain (Whitehead and Hedges, 2002). If their principal function were photoprotective, their existence would be confined to those UV wavelengths that cause damage to the organism, and not to the whole UV broadband spectrum.

Plants also perform a free-energy-intensive process known as photorespiration, in which O2 instead of CO2 is captured by the binding enzyme RuBisCO, the main enzyme of the light-independent part of photosynthesis. This capture of O2 instead of CO2 (occurring about 25% of the time) is detrimental to the plant for a number of reasons, including the production of toxins that must be removed (Govindjee, 2005), and does not lead to ATP production. There is no apparent utility to the plant in performing photorespiration, and in fact it reduces the efficiency of photosynthesis. It has often been considered an “evolutionary relic” (Niyogi, 2000), still existing from the days when O2 was less prevalent in the atmosphere than today and CO2 more so (0.78% CO2 by volume at the rise of land plants during the Ordovician (ca. 470 Ma), compared with only 0.038% today). However, such an explanation is not in accord with the known efficacy of natural selection in eliminating useless or wasteful processes. Another theory has photorespiration as a way to dissipate excess photons and electrons and thus protect the plant’s photosynthesizing system from excess light-induced damage (Niyogi, 2000). Since photorespiration is common to all C3 plants, independent of their insolation environments, it is more plausible that photorespiration, being completely analogous to photosynthesis with respect to the dissipation of light into heat in the presence of water (by quenching of excited chlorophylls) and the subsequent transpiration of water, is retained for its complementary role in evapotranspiration and thus entropy production.

Plants not only evaporate water during sunlight hours, but also at night (Snyder et al., 2003). Common house plants evaporate up to 1/3 of the daily transpired water at night (Hernández Candía, 2009). Not all the stoma in C3 and C4 photosynthetic plants are closed at night and some water vapor also diffuses through the cuticle at night. The physiological reason, in benefit of the plant, for night transpiration, if one exists, remains unclear. It, of course, can have no relevance to cooling leaves for optimal photosynthetic rates. Explanations range from improving nutrient acquisition, recovery of water conductance from stressful daytime xylem cavitation events, and preventing excess leaf turgor when water potentials become large during the day (Snyder et al., 2003). However, night transpiration is less of an enigma if considered as a complement to the thermodynamic function of life to augment the entropy production of Earth through the water cycle. In this context, it is also relevant that chlorophyll has an anomalous absorption peak in the infrared at between about 4,000 and 10,000 nm (Gates, 1980), close to the wavelength at which the blackbody radiation of the Earth’s surface at 14 °C peaks.

Cyanobacteria have been found to be living within Antarctic ice at depths of up to 2 m. These bacteria and other lithogenic material absorb solar radiation which causes the formation of liquid water within the ice even though the outside air temperatures may be well below freezing. This heating from below causes excess ablation and sublimation of the overlying ice at rates as high as 30 cm per year (Priscu et al., 2005).
Finally, by analyzing latent heat fluxes (evaporation) and CO2 fluxes for plants from various published data sets, Wang et al. (2007) have found vanishing derivatives of transpiration rates with respect to leaf temperature and CO2 flux, suggesting a maximum transpiration rate for plants: the partition between latent and sensible heat fluxes is such that it leads to a leaf temperature and leaf water potential giving maximal transpiration rates, and thus maximal production of entropy (Wang et al., 2007).

The Function of Animals

If the primary thermodynamic function of the plants and cyanobacteria is to augment the entropy production of the Earth by absorbing light in the presence of liquid water, it may then be asked: What is the function of higher, mobile animal life? Rooted in place by the intricate systems that allow them to draw up water for evaporation from great depths, plants are not mobile, and depend on insects and other animals for their supply of nutrients, cross-fertilization, and seed dissemination and dispersal into new environments. The mobility and short life spans of insects and animals mean that, through excrement and eventual death, they provide a reliable mechanism for the dispersal of nutrients and seeds.

Crustaceans and marine animal life perform a function in water similar to that of insect and animal life on land. These higher forms of life distribute nutrients throughout the ocean surface through excrement and death. It is noteworthy that dead fish and mammals do not sink rapidly to the bottom of the sea, but remain floating for a considerable time on the surface where, as on land, bacteria break down the organism into its components, allowing photon-dissipating phytoplankton to reuse the nutrients, particularly nitrogen. It is interesting that many algae blooms produce a neurotoxin with apparently no other end than to kill higher marine life. There is also a continual cycling of nutrients from the depths of the ocean to the surface, as deep-diving mammals preying on bottom feeders release nutrients at the surface through excrement and death. Because of this cycling and the mobility of animals, a much larger portion of the ocean surface is rendered suitable for phytoplankton growth, offering a much larger area for efficient surface absorption of sunlight and evaporation of water than would otherwise be the case.

From this thermodynamic viewpoint, animal life provides a specialized gardening service to the plants and cyanobacteria, which in turn catalyze the absorption and dissipation of sunlight in the presence of water, promoting entropy production through the water cycle. There is strong empirical evidence suggesting that ecosystem complexity, in terms of species diversity, is correlated with potential evapotranspiration (Gaston, 2000). The traditional ecological pyramid should thus be turned on its pinnacle. Instead of plants and phytoplankton being considered as the base that sustains animal life, animals are in fact the unwitting but content servants of plant and phytoplankton life, obtaining thermodynamic relevance only in how they increase the plant and phytoplankton potential for evaporation of water.


We have argued that the basic thermodynamic function of life (and organic material in general) is to absorb and dissipate high energy photons such that the heat can be absorbed by liquid water and eventually transferred to space through the water cycle. Photosynthesis, although relevant to cyanobacteria and plant growth, has only minor relevance to the thermodynamic function of life. Augmenting the water cycle through increased photon absorption and radiation-less relaxation, life augments the entropy production of the Earth in its interaction with its solar environment. We have presented empirical evidence indicating that the evolutionary history of Earth’s biosphere is one of increased photon absorption and dissipation over time, whether on shorter successional, or longer evolutionary, time scales.

This thermodynamic perspective views life as a catalyst for entropy production through the water cycle, and through ocean and wind currents. It ties biotic processes to abiotic processes with the universal goal of increasing Earth’s global entropy production, and thus provides a framework within which coevolution of the biotic with the abiotic can be accommodated. In an important distinction from the Gaia hypothesis, which holds that mixed biotic-abiotic mechanisms have evolved to maintain the conditions on Earth suitable to life, it is here suggested instead that these biotic-abiotic mechanisms have evolved to augment the entropy production of Earth, principally, but not exclusively, through the facilitation of the water cycle. Life, as we know it, is an important, perhaps even inevitable, but certainly not indispensable, catalyst for the production of entropy on Earth.

How does life arise from randomness? / Physics to the rescue…

see also:

Why does life exist?

For figures and illustrations go to original. 

Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”


England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab.

“He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”

At the heart of England’s idea is the second law of thermodynamics, also known as the law of increasing entropy or the “arrow of time.” Hot things cool down, gas diffuses through air, eggs scramble but never spontaneously unscramble; in short, energy tends to disperse or spread out as time progresses. Entropy is a measure of this tendency, quantifying how dispersed the energy is among the particles in a system, and how diffuse those particles are throughout space. It increases as a simple matter of probability: There are more ways for energy to be spread out than for it to be concentrated. Thus, as particles in a system move around and interact, they will, through sheer chance, tend to adopt configurations in which the energy is spread out. Eventually, the system arrives at a state of maximum entropy called “thermodynamic equilibrium,” in which energy is uniformly distributed. A cup of coffee and the room it sits in become the same temperature, for example. As long as the cup and the room are left alone, this process is irreversible. The coffee never spontaneously heats up again because the odds are overwhelmingly stacked against so much of the room’s energy randomly concentrating in its atoms.
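The claim that entropy "increases as a simple matter of probability" can be made concrete with a standard toy model (not from the article): two small Einstein solids sharing a fixed number of energy quanta. Counting microstates shows that configurations spreading the energy evenly overwhelmingly outnumber concentrated ones.

```python
from math import comb

# Two "Einstein solids" A and B, each with N oscillators, share q_total
# quanta of energy.  The number of microstates of one solid holding q
# quanta is C(q + N - 1, q); the combined multiplicity is the product.
N, q_total = 50, 100

def omega(n_osc, q):
    """Microstate count of a single solid with q quanta."""
    return comb(q + n_osc - 1, q)

multiplicity = {q_a: omega(N, q_a) * omega(N, q_total - q_a)
                for q_a in range(q_total + 1)}

most_likely = max(multiplicity, key=multiplicity.get)
print(most_likely)                                # → 50 (energy shared evenly)
print(multiplicity[50] / multiplicity[0] > 1e15)  # → True (even split dominates)
```

Even in this tiny system, the evenly shared macrostate outnumbers the fully concentrated one by more than fifteen orders of magnitude; for macroscopic particle numbers the disparity becomes the practical irreversibility described above.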

Although entropy must increase over time in an isolated or “closed” system, an “open” system can keep its entropy low — that is, divide energy unevenly among its atoms — by greatly increasing the entropy of its surroundings. In his influential 1944 monograph “What Is Life?” the eminent quantum physicist Erwin Schrödinger argued that this is what living things must do. A plant, for example, absorbs extremely energetic sunlight, uses it to build sugars, and ejects infrared light, a much less concentrated form of energy. The overall entropy of the universe increases during photosynthesis as the sunlight dissipates, even as the plant prevents itself from decaying by maintaining an orderly internal structure.

Life does not violate the second law of thermodynamics, but until recently, physicists were unable to use thermodynamics to explain why it should arise in the first place. In Schrödinger’s day, they could solve the equations of thermodynamics only for closed systems in equilibrium. In the 1960s, the Belgian physicist Ilya Prigogine made progress on predicting the behavior of open systems weakly driven by external energy sources (for which he won the 1977 Nobel Prize in chemistry). But the behavior of systems that are far from equilibrium, which are connected to the outside environment and strongly driven by external sources of energy, could not be predicted.

This situation changed in the late 1990s, due primarily to the work of Chris Jarzynski, now at the University of Maryland, and Gavin Crooks, now at Lawrence Berkeley National Laboratory. Jarzynski and Crooks showed that the entropy produced by a thermodynamic process, such as the cooling of a cup of coffee, corresponds to a simple ratio: the probability that the atoms will undergo that process divided by their probability of undergoing the reverse process (that is, spontaneously interacting in such a way that the coffee warms up). As entropy production increases, so does this ratio: A system’s behavior becomes more and more “irreversible.” The simple yet rigorous formula could in principle be applied to any thermodynamic process, no matter how fast or far from equilibrium. “Our understanding of far-from-equilibrium statistical mechanics greatly improved,” said Alexander Grosberg, a professor of physics at New York University. England, who is trained in both biochemistry and physics, started his own lab at MIT two years ago and decided to apply the new knowledge of statistical physics to biology.
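The forward-to-reverse probability ratio can be illustrated with a deliberately minimal toy model (an illustration only, not the systems Jarzynski and Crooks studied): a particle driven to hop right with probability p and left with probability 1 − p. The entropy produced along a trajectory is the log of its forward probability divided by that of the time-reversed trajectory.

```python
from math import log

# Toy model: each step is +1 (right, probability p) or -1 (left, 1 - p).
# For a trajectory with n_right rightward and n_left leftward steps,
#   P_forward = p**n_right * (1-p)**n_left
#   P_reverse = p**n_left  * (1-p)**n_right   (the reversed step sequence)
# so ln(P_forward / P_reverse) = (n_right - n_left) * ln(p / (1-p)).
def entropy_production(p, steps):
    n_right = sum(1 for s in steps if s == +1)
    n_left = len(steps) - n_right
    return (n_right - n_left) * log(p / (1 - p))

traj = [+1, +1, -1, +1]                # a short sample trajectory
print(entropy_production(0.9, traj))   # ≈ 4.39: strongly driven, irreversible
print(entropy_production(0.5, traj))   # → 0.0: undriven, fully reversible
```

With no drive (p = 0.5) every trajectory is exactly as likely as its reverse and the entropy production vanishes; the harder the system is driven, the more lopsided the ratio and the more irreversible the behavior, which is the pattern England builds on.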


Using Jarzynski and Crooks’ formulation, he derived a generalization of the second law of thermodynamics that holds for systems of particles with certain characteristics: The systems are strongly driven by an external energy source such as an electromagnetic wave, and they can dump heat into a surrounding bath. This class of systems includes all living things. England then determined how such systems tend to evolve over time as they increase their irreversibility. “We can show very simply from the formula that the more likely evolutionary outcomes are going to be the ones that absorbed and dissipated more energy from the environment’s external drives on the way to getting there,” he said. The finding makes intuitive sense: Particles tend to dissipate more energy when they resonate with a driving force, or move in the direction it is pushing them, and they are more likely to move in that direction than any other at any given moment.

“This means clumps of atoms surrounded by a bath at some temperature, like the atmosphere or the ocean, should tend over time to arrange themselves to resonate better and better with the sources of mechanical, electromagnetic or chemical work in their environments,” England explained.

Self-replication (or reproduction, in biological terms), the process that drives the evolution of life on Earth, is one such mechanism by which a system might dissipate an increasing amount of energy over time. As England put it, “A great way of dissipating more is to make more copies of yourself.” In a September paper in the Journal of Chemical Physics, he reported the theoretical minimum amount of dissipation that can occur during the self-replication of RNA molecules and bacterial cells, and showed that it is very close to the actual amounts these systems dissipate when replicating. He also showed that RNA, the nucleic acid that many scientists believe served as the precursor to DNA-based life, is a particularly cheap building material. Once RNA arose, he argues, its “Darwinian takeover” was perhaps not surprising.

The chemistry of the primordial soup, random mutations, geography, catastrophic events and countless other factors have contributed to the fine details of Earth’s diverse flora and fauna. But according to England’s theory, the underlying principle driving the whole process is dissipation-driven adaptation of matter.

This principle would apply to inanimate matter as well. “It is very tempting to speculate about what phenomena in nature we can now fit under this big tent of dissipation-driven adaptive organization,” England said. “Many examples could just be right under our nose, but because we haven’t been looking for them we haven’t noticed them.”

Scientists have already observed self-replication in nonliving systems. According to new research led by Philip Marcus of the University of California, Berkeley, and reported in Physical Review Letters in August, vortices in turbulent fluids spontaneously replicate themselves by drawing energy from shear in the surrounding fluid. And in a paper appearing online this week in Proceedings of the National Academy of Sciences, Michael Brenner, a professor of applied mathematics and physics at Harvard, and his collaborators present theoretical models and simulations of microstructures that self-replicate. These clusters of specially coated microspheres dissipate energy by roping nearby spheres into forming identical clusters. “This connects very much to what Jeremy is saying,” Brenner said.

Besides self-replication, greater structural organization is another means by which strongly driven systems ramp up their ability to dissipate energy. A plant, for example, is much better at capturing and routing solar energy through itself than an unstructured heap of carbon atoms. Thus, England argues that under certain conditions, matter will spontaneously self-organize. This tendency could account for the internal order of living things and of many inanimate structures as well. “Snowflakes, sand dunes and turbulent vortices all have in common that they are strikingly patterned structures that emerge in many-particle systems driven by some dissipative process,” he said. Condensation, wind and viscous drag are the relevant processes in these particular cases.

“He is making me think that the distinction between living and nonliving matter is not sharp,” said Carl Franck, a biological physicist at Cornell University, in an email. “I’m particularly impressed by this notion when one considers systems as small as chemical circuits involving a few biomolecules.”

Prentiss, who runs an experimental biophysics lab at Harvard, says England’s theory could be tested by comparing cells with different mutations and looking for a correlation between the amount of energy the cells dissipate and their replication rates. “One has to be careful because any mutation might do many things,” she said. “But if one kept doing many of these experiments on different systems and if [dissipation and replication success] are indeed correlated, that would suggest this is the correct organizing principle.”

Brenner said he hopes to connect England’s theory to his own microsphere constructions and determine whether the theory correctly predicts which self-replication and self-assembly processes can occur — “a fundamental question in science,” he said.

Having an overarching principle of life and evolution would give researchers a broader perspective on the emergence of structure and function in living things, many of the researchers said. “Natural selection doesn’t explain certain characteristics,” said Ard Louis, a biophysicist at Oxford University, in an email. These characteristics include a heritable change to gene expression called methylation, increases in complexity in the absence of natural selection, and certain molecular changes Louis has recently studied.

If England’s approach stands up to more testing, it could further liberate biologists from seeking a Darwinian explanation for every adaptation and allow them to think more generally in terms of dissipation-driven organization. They might find, for example, that “the reason that an organism shows characteristic X rather than Y may not be because X is more fit than Y, but because physical constraints make it easier for X to evolve than for Y to evolve,” Louis said.

“People often get stuck in thinking about individual problems,” Prentiss said. Whether or not England’s ideas turn out to be exactly right, she said, “thinking more broadly is where many scientific breakthroughs are made.”

Emily Singer contributed reporting.
