Accidental beliefs / Where were you born?

Most people don’t choose their beliefs; their beliefs are culturally inherited. 

SEE ALSO: “Religious States of America, in 22 maps”

If only modern neotenic Homo sapiens could see itself…

More disgusting behavior:

Down and Dirty Primitive Hunting Technology / Videos

HUNGER: The prime motivator of human behavior and technology. Primitive tools compensate for the “puny human” lack of claws, reduced olfactory sense, and other assets possessed by the competition. Other hungry animals, including many much smaller than humans, had superior strength, speed, teeth that could tear meat or tough vegetation (humans require cooking), protective fur, athletic ability, specialized body parts, and instinctive tactics. Early humans HAD TO develop tools!

Our type of brain most likely developed as a “tool” that compensated for (and competed with) the “equipment” of other animals in particular environments. The brain as technology – think about it! LOL

Body language: The crotch displays of men (primates)

Right: This is one area where I’m relieved that “social conventions” restrict males from walking around naked. If they did, we’d have to put up with this type of behavior. LOL

Bipedalism was not a result of the crotch display, but it gave male bipeds a great opportunity to enhance traditional primate displays, and to threaten and intimidate other males. 


The original “blue balls”

Body language: The crotch displays of men

(nipped for brevity)

One way males display dominance is by displaying their crotch…a behavior we’ve inherited from our ancestors. The most common way men do this is by adopting the thumbs-in-belt gesture.

Thumbs in belt or pockets

This gesture is used by men to display a dominant, sexually aggressive attitude. It’s perhaps the most direct sexual display a man can make towards a woman. (You’ve got to be joking!) Men also use this gesture to stake their territory or to show other men that they’re not afraid. This gesture communicates the non-verbal message, “I am virile, powerful and dominant”. 

The Obama White House criticized Putin’s posture. I guess the seated crotch display, when done properly, does intimidate the Hell out of other males. LOL

In a seated position, it becomes kind of difficult for men to assume this gesture but they don’t shy away from displaying their crotch if they want to communicate the message of dominance. They’ll spread their legs and lean slightly backward so that their crotch comes forward and in full display.

Watch any group of young men who’re engaged in an activity that requires them to display a macho attitude and you’ll notice that they often stand with their legs apart and their hands somehow highlight their crotch.

For instance, when sports teams are ready for ‘action’ you may notice the players continually adjusting and re-adjusting their crotch as they unconsciously try to assert their masculinity. Interestingly, this crotch display gesture is also seen in apes and some other primates. Even though apes wear no belts or trousers, they still highlight their crotch with their hands when they need to stake their territory and show other apes that they’re unafraid.

Some primates such as baboons are a bit more direct. They display dominance by spreading their legs and displaying their penis, giving it continual adjustment or even waving it at their enemies.

What’s even more mind-boggling is that the same penis-waving tactic is still employed today by some New Guinea tribes that are essentially cut off from modern civilization.

This clearly indicates that such behavior is an evolved tendency in Homo sapiens.

Dropping the pants

I must have been around 11 or 12 years old. It was a bright Sunday morning and we had arranged a cricket match with some schoolmates. Everything was normal as the game progressed and as usual, both the teams rejoiced at the high points and wore disappointed expressions at the low points of the game.

A rather strange thing happened when the game was over. It was a narrow contest right to the end but our team lost. Needless to say, the other team was elated. They jumped with joy, yelled and screamed. But one particular boy was over-excited. He felt so powerful and dominant due to the win that he dropped his pants and showed his penis to our team. (Why not to the other team?)

My team-mates laughed it off but I was taken aback.

I never forgot that incident. I wanted to know why he did that. What possible motive or desire could force a person to resort to such an extreme behavior? (Was the writer really so naive?)

It remained an unanswered question, an unresolved problem in my psyche for a long time until years later, when I read about human evolution and body language, the whole picture became clear to me.

Another similar and common incident, one most men experience at least once: when someone jokingly questions the size of a friend’s penis, the friend usually gets defensive and retorts with something like, “If I show it to you guys, you’ll become afraid and run away”. (Really? Guys say this?)

He may not realize it but unconsciously he knows that the penis display is an effective way to display dominance, and so do his friends.

I’m sure you’re intelligent enough to understand, by now, why people display their middle fingers when they want to offend someone and/or to feel dominant.

It’s not an acceptable behavior anymore in a civilized society for adults to drop their pants and show their penises so they use their middle fingers to symbolically convey the same feelings.

Some of you might ask, “Why do women who wear jeans assume the ‘thumbs-in-belt’ gesture?” or “Why do women show their middle fingers, when they have no actual penises to display?”

Well, it’s most probably a behavior that they’ve learned from men. (Ya think?) Penis display, symbolical or not, has come to be strongly associated with offending someone or showing dominance in the human psyche, thanks to its effectiveness.

So, women are just using a tool from men’s psychological repertoire because they know how effective it can be.

Subtle forms of crotch display

No, no, no, never…


Belt and crotch grabbing while dancing is a subtle (?) form of crotch display, and men across different cultures do it, from Michael Jackson to Salman Khan. Other subtle forms include wearing tight-fitting pants or small Speedo swimming trunks, or even dangling a large bunch of keys or chains at the front or side of the crotch.

Baseball players are particularly “crotch grab prone”. The NHL crotch grab: a puck to the nuts. 

Rather ambiguous message, don’t you think?

Wallets with chains dangling at the side of the crotch became popular among men because they helped draw attention to the crotch.

To conclude, consider what George Carlin, the late American comedian, had to say about wars:

“War is nothing but a whole lot of prick-waving. War is just a lot of men standing around in a field waving their pricks at one another. Of course, the bombs, the rockets, and the bullets are all shaped like dicks. It’s a subconscious need to project the penis into other people’s affairs.”

Are Japanese women tired of neotenic males, perhaps? 

Caption: REALISTIC male mannequins. How pitiful…

Hunter-Gatherer Economies in the Old World and New World

Christopher Morgan, Shannon Tushingham, Raven Garvey, Loukas Barton, and Robert Bettinger

March 2017

See original for figures.

At the global scale, conceptions of hunter-gatherer economies have changed considerably over time and these changes were strongly affected by larger trends in Western history, philosophy, science, and culture. Seen as either “savage” or “noble” at the dawn of the Enlightenment, hunter-gatherers have been regarded as everything from holdovers from a basal level of human development, to affluent, ecologically-informed foragers, and ultimately to this: an extremely diverse economic orientation entailing the fullest scope of human behavioral diversity.

The only thing linking studies of hunter-gatherers over time is consequently the definition of the term: people whose economic mode of production centers on wild resources. When hunter-gatherers are considered outside the general realm of their shared subsistence economies, it is clear that their behavioral diversity rivals or exceeds that of other economic orientations. Hunter-gatherer behaviors range along a multivariate continuum: from a focus on mainly large fauna to broad, wild plant-based diets similar to those of agriculturalists; from extremely mobile to sedentary; from relying on simple, generalized technologies to very specialized ones; from egalitarian sharing economies to privatized competitive ones; and from nuclear family or band-level to centralized and hierarchical decision-making. It is clear, however, that hunting and gathering modes of production had to have preceded, and thus given rise to, agricultural ones.

What research into the development of human economies shows is that transitions from one type of hunting and gathering to another, or alternatively to agricultural modes of production, can take many different evolutionary pathways. The important thing to recognize is that behaviors which were essential to the development of agriculture—landscape modification, intensive labor practices, the division of labor and the production, storage, and redistribution of surplus—were present in a range of hunter-gatherer societies beginning at least as early as the Late Pleistocene in Africa, Europe, Asia, and the Americas. Whether these behaviors eventually led to the development of agriculture depended in part on the development of a less variable and CO2-rich climatic regime and atmosphere during the Holocene, but also a change in the social relations of production to allow for hoarding privatized resources. In the 20th and 21st centuries, ethnographic and archaeological research shows that modern and ancient peoples adopt or even revert to hunting and gathering after having engaged in agricultural or industrial pursuits when conditions allow and that macroeconomic perspectives often mask considerable intragroup diversity in economic decision making: the pursuits and goals of women versus men and young versus old within groups are often quite different or even at odds with one another, but often articulate to form cohesive and adaptive economic wholes. The future of hunter-gatherer research will be tested by the continued decline in traditional hunting and gathering but will also benefit from observation of people who revert to or supplement their income with wild resources. It will also draw heavily from archaeology, which holds considerable potential to document and explain the full range of human behavioral diversity, hunter-gatherer or otherwise, over the longest of timeframes and the broadest geographic scope.

In the strictest sense, the term “hunter-gatherer” simply refers to people entirely dependent on and only interacting with wild plants and animals. The definition is therefore an inherently economic one, with subsistence regime determining whether a given group is subsumed within this overarching anthropological type based solely on whether the group in question derives its sustenance from fishing, foraging, or hunting wild plants and animals. In the broadest sense, then, hunter-gatherers are people whose basic patterns of life—where they live, who they live with, and both their daily routines and the seasonal variation in those routines—are best explained by their connection to the pursuit and consumption of wild species. They are consequently identified as much by what they consume as by what they do not: in the case of the latter, domesticated plants and animals.

Hunter-gatherers are thus often juxtaposed with agriculturalists (including modern societies supported by industrial agriculture) based on perceived fundamental differences between not only their respective economies, but also their technologies, population densities, and degrees and types of sociocultural complexity. While it is clear that the vast and unprecedented numbers of people currently living in the complex, interconnected, and global modern world could not be supported without reliance on domesticated plants and animals, what is much less clear is how hunting and gathering gave rise to agricultural economies and the degree to which hunting and gathering differs in either kind or quantity from economies relying mainly on domesticates.

Within this context, what this chapter presents is fourfold. First, how hunter-gatherer economies have been thought about over time has been conditioned largely by historical circumstance and by changes in the social sciences more broadly. Second, while the drivers of change in hunter-gatherer economies are often linked to changes in climate, environment, and demography, the way these changes play out is often determined by culture as well—kinship, social norms, power relationships, and the like. Third, there is remarkable diversity in hunter-gatherer economies and lifeways, past and present, and this diversity is often marked by many of the characteristics more typically associated with economies reliant on domesticates. Lastly, we show how current and future studies of hunter-gatherer economies hinge on fundamental questions and methods that inform and are informed by intersections with the natural, social, behavioral, and cognitive sciences.

Changing Conceptions of Hunter-Gatherers

While views on hunter-gatherers have changed considerably over time, theoretical approaches to hunter-gatherer economies and lifeways tend to be materialist, focused on the physical conditions faced by hunter-gatherer groups (e.g., environment, climate, and technology). Some of the earliest ideas about hunter-gatherers, for example, emerged during the European age of exploration, when explorers, traders, and colonists encountered indigenous peoples, many of them hunter-gatherers, and many of whom had already been shunted to marginal environmental settings by the time the first historical descriptions were made of the way they lived (Figure 1). These early accounts cast hunter-gatherers as primitive, disadvantaged, and culturally backwards people who led meager and pitiable lives, where the fear of starvation and death was constant in a life that was, as Hobbes (1962, p. 100) framed it in 1651, “solitary, poor, nasty, brutish, and short.”

In contrast, in 1672 Dryden (1978, p. 30) coined the term “noble savage” in his play The Conquest of Granada to describe an initial, free, and unencumbered state of human existence, a perspective used by Jean Jacques Rousseau, Michel de Montaigne and other Enlightenment thinkers to draw contrasts between “civilized” Europe and the “savage” peoples of, for instance, the Americas. During the European age of exploration and colonial expansion, the view of hunter-gatherers as primitives who represented a basal state of human socioeconomic and technological development became firmly entrenched in philosophical and scientific works.

Figure 1. Location of hunter-gatherer ethnographic groups (in regular font) and archaeological cultures and sites (in italics) mentioned in the text.

This view colored the progressive social evolutionary theory of the 19th and early 20th centuries as set forth by Spencer (1868), Tylor (1871), and others. From this perspective, cultural evolution progressed one way, from simple (lower and primitive) to more complex (higher and more civilized) forms; for example, from savagery to barbarism to civilization in Lewis Henry Morgan’s (1877) seminal evolutionary scheme. Such unidirectional frameworks explicitly viewed material concerns as alleviated by advances in technology, with technological change marking the shift from one stage to the next.

For example, the emergence of early agriculture (“Middle Barbarism” to Morgan) moved humanity into an evolutionary stage wherein the acquisition of food was much less a concern than it was in the previous, and more primitive, Hobbesian universe. Freed from the perpetual quest for food, people could focus on more advanced social, moral, and religious concerns. In these scenarios, evolution began with hunter-gatherers—the “zero of human society” (Morgan, 1851, pp. 347–348), whose problems centered around food acquisition by individuals with woefully limited intelligence, information, and technology—and moved to a more social world where the problem was not getting food but rather getting along with one’s neighbors.

During this time, however, British and American anthropological perspectives varied in terms of how they viewed and evaluated hunter-gatherers and their economies. Herbert Spencer, an Englishman, saw evolution as having reached its apogee in Western culture where the problems of social progress were essentially solved; questions about how hunter-gatherers made their living were therefore irrelevant. In contrast, American scholars (e.g., Powell, 1888) were interested in how and why technologies, economies, and social systems changed, thus making native peoples—many of whom lived within the boundaries of the United States—worthy of study. Between 1880 and 1920 the Americanist tradition emphasized surveys of hunter-gatherers in places like the Pacific Northwest Coast, the Great Lakes region, and the arid lands between the Rocky and Sierra Nevada Mountains. John Wesley Powell, a leader in these efforts, found considerable diversity in how the hunter-gatherers of North America lived and made their living (e.g., from settled, sedentary groups like the Kwakiutl who relied on smoked and stored salmon to small, highly mobile groups like the Ute of the North American Great Basin who relied in large part on pine nuts and small seeds).

In an explicit rejection of progressive social evolution, eugenics, and the social Darwinist policies that emerged in the early 1900s, American anthropologists and archaeologists by 1920 began emphasizing culture historical frameworks in their research. Culture history in archaeology, influenced by Boasian cultural relativism (e.g., Boas, 1940)—the idea that each culture was unique and developed along its own particular trajectory—emphasizes description over explanation. In culture-historical schemes, changes in hunter-gatherer economies were seen as the result of historical processes like the migration of people who carried with them different technologies and ways of making a living, or the diffusion of ideas, technologies, and subsistence practices from one area to the other, either of which could change the material culture identified in archaeological and ethnographical studies. Left unexplained was how novel behaviors and technologies developed in the first place.

Beginning in the 1960s, cultural historical and unilineal evolutionary frameworks were challenged by a new and more nuanced evolutionary one (Flannery, 1968; Lee, 1968). Anthropologists began to recognize that hunter-gatherers did not conform to simple evolutionary models and began comparing notes—most famously at the Man the Hunter conference held at the University of Chicago in 1966 (Lee & Devore, 1968). Though a more-or-less unitary view of hunter-gatherers persisted for nearly another decade following this meeting, by the late 1970s a great deal of diversity in hunter-gatherer lifeways had been recognized and the notion that these differences developed along multiple evolutionary pathways was in vogue. No longer viewed as “unevolved,” hunter-gatherers were now widely acknowledged to have rich, complex social and religious lives and were regarded as masters of their environments.

From this perspective, hunter-gatherers were universally adept at creating sophisticated adaptive systems specific to their ecological circumstances. They were advantaged peoples who lived in a state of homeostatic equilibrium and only resorted to agriculture if their way of life was disturbed by colonial or other forces. This of course turned the notion of hunter-gatherers as lowly primitives (and farming as a logical outcome of evolutionary progress) on its head. Now cast as the “original affluent society” (Sahlins, 1968), it seemed that hunter-gatherers were healthier and often worked less than farmers. Lee (1984), for example, showed that the considerable leisure time enjoyed by the Dobe !Kung of the Kalahari desert was due in part to the abundance of mongongo nuts (Schinziophyton rautanenii), though he failed to account for the large amount of work the women did to process those nuts.

Hunter-gatherers, however, were and still are often portrayed as “generalized foragers”: mobile people who live in small groups, have few possessions, store almost nothing, exploit only the seasonal availability of wild food, and lead relatively egalitarian lifestyles. However, as Price and Brown (1985, p. xiii) note, “the traditional dichotomy of forager versus farmer has little significance with regard to the organizational development of society—that means of subsistence do not dictate levels of cultural complexity.” Indeed, there are abundant examples of hunter-gatherers who diverge from this generalization, for example the Ainu of Japan, the various groups from the Pacific Northwest Coast of North America, many California Indians, the Calusa of Florida, and people known only through archaeology such as the Jomon of Japan, European Mesolithic groups, and the Natufian of the Levant. Each of these groups was sedentary or semi-sedentary, and their economies entailed some degree of surplus economic production, storage, diverse and specialized technologies, and varying degrees of wealth, power, and prestige-based inequality. Many of these groups were marked by high population densities, in some cases rivaling those of sedentary farming communities (Kelly, 2013).

Crisis and Controversy in Understanding Hunter-Gatherer Economies


Opinions vary widely regarding such deceptively simple ideas about what hunter-gatherers actually are and what drives change in hunter-gatherer economies. Controversial topics include fundamental issues of definition and identification, what causes diversity among hunter-gatherer groups, and what drives more intensive economic and technological change within economies dependent only on wild food as well as those economies based on a mixture of both wild and domesticated resources.

What Hunter-Gatherer Economies Are, and Are Not

Hunter-gatherers are people who rely upon and only interact with wild plants and animals. In practice, however, there are important exceptions. Most widespread is hunter-gatherer cultivation of non-food plants such as tobacco (Kroeber, 1941), pen-raising of wild animals, for example eagles for plumage (Drucker, 1937), and the near-universal keeping of domestic dogs as pets, packers, pullers, hunters, sentries, or as food (Barton et al., 2009; Larson et al., 2012). In addition, many ethnographic and modern hunter-gatherers living near agriculturalists borrowed and cultivated the crops of their neighbors on a very small scale. The Southern Paiute living just north of the agricultural American Southwest (Kelly, 1964), for example, grew maize as a dietary supplement. But they did not do so to the extent that it had much effect on preexisting patterns of settlement and social aggregation which had developed to facilitate the procurement and storage of the many important wild plants and animals on which these groups had formerly depended entirely.

That hunter-gatherers engaged in so many forms of environmental manipulation shows that their resistance to fully adopting plant and animal husbandry was not the result of ignorance, as was commonly assumed in the late 19th and early 20th centuries. In contrast to the view of hunting and gathering as a lifeway of limitation and ignorance, the archaeological and ethnographic records provide ample evidence that hunter-gatherers understand the natural world in which they live at least as well as, and arguably better than, agriculturalists; lacking such knowledge, they would surely have perished. While most hunter-gatherers consistently depend on a fairly restricted suite of plants and animals, they routinely maintain and transmit from generation to generation knowledge of many times more plants that might be pressed into service during periods of hardship: which are poisonous and which are not, and how the poisonous ones might be processed to remove toxins that would otherwise prove fatal.

Perhaps more pervasive in the current literature is the opposite view that has hunter-gatherers living in balanced harmony with nature—conserving resources for the benefit of themselves and nature at large. Whether humans were ultimately responsible for large-scale megafaunal extinctions in the late Pleistocene, between 10.5 and 15 kya, in the New World (Martin, 1973), or in Australia between 40 and 60 kya (Johnson, 2006), for example, is contentious, but even those scholars inclined to absolve hunter-gatherers of responsibility do not cite innate conservationism as a reason, most of the evidence suggesting the contrary. North American hunter-gatherers occasionally killed much more than they could use, for example, as is well documented at the Olsen-Chubbuck Bison Kill Site in Colorado, where roughly 9,500 years ago a band of hunter-gatherers, having driven a herd of perhaps 200 of the now-extinct species Bison occidentalis into a steep arroyo, intensively butchered only a fraction and left a significant portion largely or completely untouched (Wheat, 1967).

Conservation certainly did not guide Native groups who participated in the extermination of the buffalo from the American Plains in the early 19th century. Euro-Americans bear the greatest responsibility, but Native Americans played a part, not thinking it necessary to limit their take of a shrinking resource, reasoning from traditional knowledge that buffalo herd size and reproduction were the result of forces beyond human control (Krech, 1999). Indeed, social customs may work against resource conservation even where hunter-gatherers are aware of the problem. Raven (1990) documents a case in which Torres Strait hunter-gatherers continued to target ever-shrinking populations of turtle and dugong, in large part because this was an essential male rite of passage, a prerequisite to marriage. Neither does a more general, microeconomic view of foraging support the foragers-as-lay-conservationists thesis. A broad diet—one not exclusively focused on the largest-bodied prey, which may be more vulnerable to overhunting due to slower life histories or more conservative reproductive strategies—may simply be a byproduct of rational decision-making motivated by self-preservation; instances of apparent conservation do not make the Plains buffalo or Torres Strait turtles exceptions to a rule.

Acknowledging and Identifying Hunter-Gatherer Diversity

The foregoing highlights the fact that there are marked differences in hunter-gatherer lifeways across both time and space. These differences hinge on issues relating to subsistence, technology, social organization, and environmental change over long timespans.


That hunter-gatherer adaptation revolves around subsistence makes variation in the relative emphasis on hunting, fishing (including the procurement of marine mammals and shellfish), and gathering critical. Wherever edible plants are available in any abundance—generally between 40° N and 40° S latitude—gathering dominated subsistence, a pattern accounting for 42% of hunter-gatherers worldwide in one ethnographic sample (Binford, 2001). As one moves poleward, fishing and hunting become more important, hunting dominating in about 24% of groups worldwide, fishing in the remaining 35%. The plant-dominated pattern, however, developed relatively late, becoming much more pronounced during the Holocene. Plants were always critical, but when human population densities are low relative to available resources, hunting, particularly of large game, typically produces superior rates of return and is favored over plants. As population grows and demand increases, hunter-gatherers will increasingly turn to plants if they are available, and to fish if they are not (Binford, 2001).

The diet breadth model (MacArthur & Pianka, 1966) makes it possible to compare hunter-gatherer standards of living from one group to another despite substantial differences in subsistence economy. This is accomplished by calculating the marginal rate of return below which hunter-gatherers will ignore a resource as being too costly to warrant procurement and processing. The key quantity is handling time per calorie (kcal): the amount of time expended per kcal in pursuing, collecting, and processing a resource once it is encountered. Resources are ranked from highest (least handling time) to lowest (most handling time) and added to the diet in that order, starting with the highest, which is always in the diet.

The second-ranking resource is added to the diet if its handling time per kcal is less than the time it would take per kcal to search for and locate the first-ranked resource (Figure 2). Following this logic, the overall return rate for a hunter-gatherer group cannot be higher than the return rate (the inverse of handling time per kcal) of the lowest-ranked resource in the diet. Analyses from this perspective suggest that despite all outward appearances, hunter-gatherers living in California and dependent on acorns (Quercus, Lithocarpus) (Bettinger, Malhi, & McCarthy, 1997), those living in Australia and dependent on seeds (Acacia) (O’Connell & Hawkes, 1981), and the Dobe !Kung living in Africa and dependent on mongongo nuts (Hawkes & O’Connell, 1981; Lee, 1984) were all operating at about the same marginal rate of return, with the return rates for all three resources hovering around 750 kcal/hr. At this rate it would take about 10 hours of work a day to feed a family of four consisting of a father consuming 2,500 kcal per day, a mother consuming 2,000 kcal per day, and two children each consuming 1,500 kcal per day.

Figure 2. Comparison of a low-cost, narrow spectrum diet (left) with a higher-cost, broader-spectrum diet (right). Switching strategies to the broader-spectrum diet to include lower-ranked items is predicated on the abundance of the higher ranked item, which determines search time. In the narrow spectrum diet on the left, the overall rate of return is that of Resource 1, the only item in the diet. In the broader-spectrum diet on the right, the overall rate of return is that of Resource 2, the lowest-ranked item included in the diet.
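The selection logic of the diet breadth (prey-choice) model can be sketched in code. This is a minimal illustration, not anything from the chapter itself: the resource names and all the numbers are invented for demonstration, and the overall return rate uses the standard textbook formulation E = Σλe / (1 + Σλh), where λ is encounter rate per search hour, e is energy per item, and h is handling time per item.

```python
def overall_return_rate(diet):
    """Expected kcal per foraging hour for a set of resources.

    Each resource is a dict with:
      kcal          - energy gained per item encountered
      handling_hr   - hours to pursue, collect, and process one item
      encounters_hr - items encountered per hour of search
    """
    gain = sum(r["encounters_hr"] * r["kcal"] for r in diet)
    cost = 1.0 + sum(r["encounters_hr"] * r["handling_hr"] for r in diet)
    return gain / cost

def optimal_diet(resources):
    """Add resources in rank order (kcal per handling hour, highest first)
    as long as handling the next item on encounter beats continuing to
    search for higher-ranked items."""
    ranked = sorted(resources, key=lambda r: r["kcal"] / r["handling_hr"],
                    reverse=True)
    diet = []
    for r in ranked:
        # A resource is too costly if its post-encounter return rate falls
        # below the overall return rate of the narrower diet.
        if diet and r["kcal"] / r["handling_hr"] <= overall_return_rate(diet):
            break
        diet.append(r)
    return diet

# Hypothetical resources (illustrative numbers only).
resources = [
    {"name": "large game", "kcal": 2000, "handling_hr": 1.0, "encounters_hr": 0.1},
    {"name": "seeds",      "kcal": 900,  "handling_hr": 1.0, "encounters_hr": 0.2},
    {"name": "shellfish",  "kcal": 250,  "handling_hr": 1.0, "encounters_hr": 2.0},
]
diet = optimal_diet(resources)  # includes large game and seeds; shellfish
                                # is ignored no matter how often encountered

# The family arithmetic from the text: at ~750 kcal/hr, feeding a family of
# four (2,500 + 2,000 + 2 x 1,500 = 7,500 kcal/day) takes 7,500 / 750 = 10
# hours of work per day.
```

Note the model's counterintuitive result, visible in the sketch: whether a low-ranked resource enters the diet depends on the abundance (search time) of the higher-ranked ones, not on its own abundance.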


While hunter-gatherer technology tracks variation in the relative importance of gathering, hunting, and fishing for obvious reasons, there is more to it than that, with the range and severity of seasonal change in precipitation and temperature being particularly important. In tropical environments that are warm year-round, resources are generally available somewhere, making mobility the best response to local resource shortage. As one moves away from the equator toward the poles, temperatures decrease, finally to the point that there are seasons with little or no resources available, a problem that mobility alone will not solve. Here hunter-gatherers must store resources in one season for use in another, which means they must be more efficient at procuring resources in quantity when available, which requires more costly and sophisticated technology. At the same time, storing resources tethers hunter-gatherers to the locations of their stores, reducing mobility (Testart, 1982; but see Morgan, 2012). This produces a generally inverse relationship between hunter-gatherer technological complexity and mobility: mobile hunter-gatherers have fewer and more generalized tools than more sedentary groups that store resources for seasons of shortfall. One observes, on the one hand, very simple and generalized (though highly effective) technology with relatively few tool types among the highly mobile hunter-gatherers of desert Australia like the Alyawara, who stored very little, and on the other hand, the intricately complex and specialized technology of the essentially sedentary Northwest Coast groups like the Tlingit, who relied extensively on stored resources.

Sociopolitical Organization

The vast bulk of ethnographic hunter-gatherers lived in simple bands made up of 20–50 individuals (Kelly, 2013). In some cases these consisted of families headed by males related by patrilineal descent (patrilineal bands), in others of families headed by males unrelated to each other and allied merely by convenience or friendship (bilateral bands), and in some cases of much smaller groups consisting of a single nuclear family (family bands) with an assortment of the husband’s or wife’s close relatives who were at the time incapable of living on their own (e.g., an elderly widowed mother or father, an unmarried brother or sister, etc.). But much more complex arrangements were possible. In the North American Pacific Northwest, for example, large social groups interacted on the basis of intricate systems of ranking, in theory making it possible to calculate the status of any two individuals relative to each other. While it is tempting to connect Pacific Northwest Coast social complexity with environmental richness (Ames, 1994) and the simpler forms of band organization with lesser resource productivity, it is worth noting that population densities in California rivaled those of hunter-gatherers anywhere, including those on the Pacific Northwest Coast, yet were accompanied by very simple band-like organizations (Bettinger, 2015).

Hunter-Gatherer Adaptation over Long Timespans

There is also substantial variation in hunter-gatherer economies over time. The temporal contrast between Pleistocene (largely hunting-focused, with subsidiary emphases on plants) and Holocene hunter-gatherers (largely plant-focused, predominantly in low and mid-latitude environments) is particularly sharp, for two main reasons. First, the hunting-gathering lifeway is older than our species (Homo sapiens). Accordingly, many behaviors basic to hunter-gatherer adaptation depend on physical capabilities (e.g., capacity for language) that certain of our more ancient predecessors lacked. This alone prevents drawing simple analogies between modern and pre-Homo sapiens hunter-gatherers. Second, Pleistocene and Holocene hunter-gatherers confronted dramatically different environments, the latter much more favorable to plant exploitation. Glacial periods during the Pleistocene, for instance, resulted in substantial decreases in sea level and shifted temperate biomes towards the equator, either of which could have resulted in resource distributions for which there may be no Holocene ecological analog. Pleistocene climate was also wildly variable, marked by repeated cycles of rapid warming during interstadials followed by several centuries of gradual cooling to glacial temperatures. Erratic climate change limited the development and perfection of complex cultural adaptations, behaviors, and innovations uniquely suited to cold conditions, as these would have limited application when climate again turned warm. In addition, the Pleistocene atmosphere was carbon dioxide (CO2) poor, thus inhospitable to plants, which, in combination with rapid climate change, prevented the development of the sophisticated behaviors and technologies needed for the kind of intensive plant and animal procurement and environmental manipulation that might first enable and later support agriculture (Richerson, Boyd, & Bettinger, 2001) (Figure 3).
This not only affected local resource availability but likely also created strong selective pressure for cultural adaptation, which is faster than genetic adaptation and therefore better able to keep pace with high frequency, high amplitude variations in climate (Richerson, Boyd, & Bettinger, 2009).

Figure 3. Top: Filtered δ18O Greenland ice core data showing markedly less variable and warmer Holocene versus Pleistocene paleotemperatures (data from Ditlevsen, Svensmark, & Johnsen, 1996). Middle: Reconstruction of Pleistocene and Holocene atmospheric CO2 derived from Antarctic ice core data. Bottom: Reconstruction of Holocene atmospheric CO2 derived from the same Antarctic ice core data (data from Barnola, Raynaud, Korotkevich, & Lorius, 1987; Genthon et al., 1987).

In addition to this, at any given time there has been substantial spatial variation in hunter-gatherer behavior and adaptation. These differences accumulated more rapidly in the more favorable and quiescent Holocene, as is particularly evident in the diversity of hunter-gatherer lifeways documented firsthand over the last four centuries: these include groups with population densities as low as one person per 400 square kilometers in the harsh deserts of Australia, and others with densities as high as one person per 0.3 square kilometers among the Chumash along the highly productive Santa Barbara coast of California. The ethnographic range in subsistence, technology, and sociopolitical organization is equally impressive.
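The span of those two density figures is worth making concrete; a quick back-of-the-envelope calculation, using only the numbers quoted above:

```python
# The two ethnographic density extremes quoted above,
# in persons per square kilometer.
australian_desert = 1 / 400   # one person per 400 km2 -> 0.0025
chumash_coast = 1 / 0.3       # one person per 0.3 km2 -> ~3.33

# Roughly three orders of magnitude separate the two extremes:
print(round(chumash_coast / australian_desert))  # → 1333
```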

Given these fundamental differences of biology and environment, most scholars agree that ethnographic accounts cannot be used to interpret Pleistocene records reliably. Wobst (1978) famously described the dangers of uncritical reliance on the ethnographic record: because ethnographic accounts are time-limited, they likely underestimate behavioral diversity, failing to capture important variation in gender- and age-specific activities, and daily, seasonal, and supra-annual foraging objectives, decisions, and outcomes. Moreover, there must surely be behavioral diversity that would be unaccounted for by even a complete and perfectly accurate ethnographic record. That is, all ethnographic data were gathered in an age when hunter-gatherers, no matter how remote, had been in some kind of contact with non-foraging groups. There were also likely prehistoric environments, social configurations, norms, beliefs, etc. for which there are no modern analogues. Identifying diversity unique to the prehistoric past is no simple task and requires modeling potential sources of diversity, a third contentious area of hunter-gatherer economics.

Accounting for Diversity and Change: Intensification, Innovation, and Surplus

The subject of what drove the development of more productive hunter-gatherer economic systems, surplus production, and innovation is critical not only to understanding hunter-gatherers, but also to understanding the factors driving people to develop and adopt other more intensive economic systems, agricultural or otherwise (Boserup, 1965; Morgan, 2015; Morrison, 1994). Many explanations hinge on climate change scenarios where increases in environmental productivity generate the potential for surplus that can support greater human population densities, as was apparently the case after the Pleistocene-Holocene transition (Richerson et al., 2001). Alternatively, decreases in environmental productivity might generate the impetus for extracting more energy from the environment through more labor, technological changes, or greater regional economic articulation.

Such was arguably the case, for instance, along the Southern California Bight, where it has been argued that megadroughts associated with the Medieval Climatic Anomaly (ca. 1.1–0.6 kya) led to more trade between Chumash islanders and mainlanders, facilitated by the invention or adoption of the tomol, a technologically sophisticated sewn-plank canoe (Arnold, 1992). Others, however, see intrinsic rates of population increase driving economic intensification, the idea being that larger populations have to eat lower down the food chain because more calories are ultimately available in lower trophic levels. Doing so, however, comes at a cost, as extracting this energy usually requires substantial increases in labor, as entailed by the California acorn economies of groups like the Pomo, Miwok, Mono, and Ohlone (Gifford, 1971; McCarthy, 1993). All these groups were marked by population densities rivaling or exceeding those of prehistoric agricultural groups (Baumhoff, 1963) living in what is now the southwestern and southeastern United States, but such densities were paid for in large part by an acorn-based subsistence economy that was remarkably costly in terms of the labor needed to process acorn meal and remove the tannins from the acorns collected from California’s ubiquitous oak groves (Basgall, 1987; Tushingham & Bettinger, 2013).

Larger populations, however, are modeled not only as meeting the population-density threshold necessary to maintain complex technologies like tomols, because knowledge is more likely to be transmitted successfully from one generation to the next (Henrich, 2004; but see Collard, Buchanan, & O’Brien, 2013), but also as reaping the rewards of the considerable investments made in such technologies, because the cost of these investments in labor, materials, and maintenance is scalar: it is borne by many (as opposed to a few) people over a long period of time, all of whom reap the benefit of such investments (Bettinger, Winterhalder, & McElreath, 2006).

Critical to any discussion of hunter-gatherer economic intensification is the notion of surplus. It is clear that surplus production and storage are found mainly in mid-latitude, seasonal environments (mainly in the northern hemisphere) where salmon, acorns, and tubers were collected in bulk in the summer and fall, stored, and then eaten during the winter (Binford, 1980; Morgan, 2012). No doubt storage is a response to seasonality; the question is, how did it develop when most of what is known about ethnographic hunter-gatherers indicates that sharing (sometimes termed “tolerated theft”), rather than hoarding, is the norm (Jones, 1987; Winterhalder, 1996)? Simple models posit that larger and more settled populations are the key, with storage tethering groups of people to fewer locales for extended portions of each year (Testart, 1982). More sophisticated models deal directly with the problem of tolerated theft by identifying the conditions under which the notion of private property might develop, in particular among household economies that can be more-or-less self-sufficient and therefore “pay” for their non-communitarian storage behaviors with meat sharing, which frees households to store the fruits of plant-oriented subsistence labor (Bettinger, 2015). Others see a more top-down causal mechanism, where aggrandizing “big men” garner prestige by throwing lavish feasts. What pays for the feasts is of course surplus production, appropriated by charismatic leaders. This is an inherently unstable socioeconomic situation, but one that could also conceivably lead to more permanent, inherited leadership roles and entrenched modes of surplus production, and perhaps even food production and domestication (Hayden, 1990). The key linkage here is one of political economy, in which the development of intensive, surplus-generating economies is intrinsically tied to changes in social relations, power dynamics, and sociocultural complexity.

Foundational Advances and Discoveries

Game-changing advances in understanding hunter-gatherer economies hinge on theory, modeling, and empirical discoveries. The main theoretical advances are those found in the theory of cultural ecology, economic and evolutionary modeling, and applications of behavioral ecology to questions of human subsistence and subsistence-related behaviors. Bridging the gap between theory and empiricism are discoveries related to the development of modern human economic behavior and the evolution of broad-spectrum diets and low-level food production. In addition to the seminal ethnographic work by people like Richard Lee among the Dobe !Kung, which has already been covered, a sample of empirical observations of groups living in Africa, South America, and Australia in the late 20th century is included here given their import to tracking diversity in hunter-gatherer economies.

Cultural Ecology

Julian Steward was an early proponent of a comparative approach to mapping hunter-gatherer economic diversity; he believed recurrent behavioral patterns found in similar environments were evidence of general ecological adaptations. Steward consequently introduced cultural ecology, a multilinear evolutionary theory designed to explain varied adaptations to different environments (Steward, 1955). Cultural ecology sought to understand how technologies facilitate interaction with local resource structures—the abundance, distribution, and seasonality of targeted foods—to shape other aspects of economic and social life, most notably social structure. Essential in this regard was the culture core, defined as “the constellation of features which are most closely related to subsistence activities and economic arrangements” (Steward, 1955, p. 37). Secondary (or non-core) cultural features distinguish cultures somewhat superficially and include cultural elements that are determined by history and can be transmitted and reproduced through diffusion or innovation. Perhaps most famously, Steward described “family band” socioeconomic organization—typified by the Great Basin Shoshone and characterized by small, nuclear family groups that are annually mobile and thinly spread on the landscape—as a culture core response to sparse, unpredictable resources procured and processed using relatively sophisticated plant gathering and processing technologies like burden baskets, seed-beaters, winnowing trays, and manos and metates (Steward, 1938). Similar technologies used in more productive environments, or more complex technologies used in arid environments like the Great Basin, should produce different socioeconomic arrangements, as Steward observed elsewhere, including in southern Africa, Australia, and the Philippines (Steward, 1955).

Foragers and Collectors

Binford (1980) defined a continuum of “foragers” and “collectors” to explain variability in hunter-gatherer settlement systems and archaeological site formation processes. The model articulates a range of adaptive strategies pursued by mobile groups, with foraging and collecting at either end of the spectrum. Foragers are residentially mobile, a strategy involving moving from place to place frequently and “mapping on” to resources as they become available. Collectors are logistically mobile, a strategy where people are more tethered to residential bases and resource acquisition involves scheduling the exploitation and storage of specific foods obtained by specialized task groups. Collectors (like the Tlingit with their specialized technologies) prepare for an array of activities that will take place at different locations throughout the year, so there is more investment in off-site gear and specialized equipment. Archaeologically, collector strategies are associated with more site types (base camps, temporary camps, locations, caches); home bases tend to have larger assemblages (reflecting longer settlement duration) and contain food refuse brought from distant locations; and artifacts include more curated and specialized tools. Foragers (like the Alyawara with their generalized technologies) tend to be located nearer the equator, where seasonal shortages tend to be rare, while collectors tend to be found in more seasonal climates in middle latitudes.

Binford argued that tendencies toward one or another of these patterns were predicted by effective temperature, a proxy for environmental productivity and seasonality. Where effective temperature is high (in the tropics and subtropics) and resources are evenly distributed across space and time, hunter-gatherers tend towards the forager end of the spectrum. Where effective temperature is moderate and resources are unevenly distributed in space and time, collectors harvest resources en masse during productive seasons and store them for seasons when productivity is low. The model is a deterministic one, with seasonality by and large defining the nature of hunter-gatherer economic behavior. Its implications, however, are profound in that they suggest that surplus-producing economies evolved as responses both to the spread of people into increasingly seasonal latitudes and to the development of more pronounced seasonality in these latitudes during the Holocene.
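Effective temperature as Binford used it is a simple index computed from the warmest- and coldest-month mean temperatures. A sketch of the calculation follows; the three example climates and their verbal glosses are illustrative assumptions, not values from Binford's sample:

```python
# Effective temperature (ET), the seasonality index Binford borrowed from
# Bailey: ET = (18W - 10C) / (W - C + 8), where W and C are the mean
# temperatures (degrees C) of the warmest and coldest months. Low
# seasonality yields high ET; strong seasonality drives ET down.
# The example climates below are hypothetical.

def effective_temperature(warmest_c, coldest_c):
    return (18 * warmest_c - 10 * coldest_c) / (warmest_c - coldest_c + 8)

print(round(effective_temperature(27, 25), 1))   # aseasonal equatorial climate → 23.6
print(round(effective_temperature(18, 2), 1))    # mid-latitude seasonal climate → 12.7
print(round(effective_temperature(10, -25), 1))  # strongly seasonal subarctic → 10.0
```

Note how the index falls as the gap between warmest- and coldest-month means widens, which is exactly the gradient along which Binford expected foragers to give way to storage-dependent collectors.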

Travelers and Processors

The traveler-processor model (Bettinger, 1999; Bettinger & Baumhoff, 1982) was developed to explain cultural variation and change in the North American Great Basin. The model follows the logic of optimal foraging theory (patch choice and diet breadth models) derived from behavioral ecology and establishes a typology of adaptive strategies that is superficially similar to the forager-collector model, but differs in that it highlights the competitive fitness of groups by defining specific relationships among population, settlement, and subsistence patterns. In the traveler-processor model, as populations move from low densities (travelers) to high densities (processors), people increasingly rely on more costly-to-process plant foods like seeds, nuts, and tubers.

Processors have a broad-spectrum diet and, because processing tasks typically fall to females, women’s labor becomes more valuable, which may lead to higher populations and lower rates of female infanticide. The model explains the rapid replacement of travelers by Numic-speaking processors in the Great Basin: as the Numa engaged in more intensive plant collection and processing strategies, population densities increased. These groups consequently gained footholds in territories previously occupied by groups practicing traveler strategies, who exploited only a fraction of the biotic productivity that processors did. In this model, Numic processors simply out-ate and out-reproduced the pre-existing travelers they replaced because of the greater number of calories available to them, spreading from southeastern California across the Great Basin over about the last 1,000 years. The model is important because it was one of the first to use human behavioral and evolutionary ecology—ways of tracking the evolution of human behavior through microeconomic models—to make explicit predictions as to how competition between two different economic strategies might play out over relatively long timespans (Broughton & Cannon, 2010; Smith & Winterhalder, 1992), with important implications regarding how farming adaptations might displace foraging ones as well (Kennett & Winterhalder, 2006).

Anatomically Modern Humans and the Upper Paleolithic

In general, the evolution and spread of anatomically modern humans (AMH) during the Upper Paleolithic (UP) in Eurasia and Late Stone Age (ca. 50–14 kya) in Africa appear associated with fundamental changes in hominid subsistence economies. Earlier, Middle Paleolithic economies affiliated with species like Neanderthals tended to focus on hunting large game and, in the Mediterranean, slow-moving, slow-growing tortoises and some mollusks (Stiner & Munro, 2002; but see Speth, 2004). In contrast, AMH economies tended towards more diversity and were characterized by exploiting more costly-to-process, faster-moving, faster-maturing, smaller-bodied prey like rabbits and birds, as well as small seeds. This is well expressed in the Middle (MSA) and Late Stone Age (LSA) deposits at Eland’s Bay and Ysterfontein Rockshelter on the South African coast, where MSA deposits dating before 50 kya are dominated by larger shellfish and tortoise (Klein et al., 2004; Steele & Klein, 2005), while LSA deposits contain seabirds and smaller shellfish and tortoise remains. Similar patterns are evident at Vale Boi in Portugal, where the UP ca. 30 kya is marked by increased reliance on shellfish and rabbits (Manne & Bicho, 2009; Manne, Cascalhiera, Evora, Marreiros, & Bicho, 2011), a pattern also seen in Greece and Spain (Bicho & Haws, 2008; Cortés-Sánchez et al., 2008). A complementary pattern is found in Yuchanyan Cave in south China, where very late UP (ca. 16 kya) archaeofaunal data show AMH diets including turtles, small mammals, and aquatic birds (Prendergast, Yuan, & Bar-Yosef, 2009). Finally, there is evidence of a shift to harvesting and baking wild cereals like barley (Hordeum spontaneum) at Ohalo II, in Israel, ca. 23 kya (Piperno, Weiss, Holst, & Nadel, 2004).

In sum, AMH subsistence economies appear to be marked by more diverse diets and, on the coast, more marine-based resources than those of their Neanderthal and other archaic Homo forebears. These diets focused on smaller-bodied prey like rabbits that yielded fewer calories per unit of time spent pursuing and processing than did earlier diets focused on large fauna. Some see this shift as representing a new environmental niche occupied by AMH, one consisting of smaller, more diverse, and more costly resources (Klein, 2008). Others see this change as brought about at least in part by technological innovation or increasing population densities (Tortosa, Bonilla, Ripoll, Valle, & Calatayud, 2002; Steele, 2012). These hypotheses are not mutually exclusive—the shift to a new niche could indeed have been driven by demographic or technological change. What is interesting is the degree to which extracting more calories by exploiting lower-return resources affected overall AMH evolutionary success in light of competition with archaic Homo, the idea being that more calories allowed for the growth of larger populations who, in essence, simply out-ate and out-reproduced the Neanderthals, Denisovans, and other Archaics living across Africa and Eurasia during the Late Pleistocene (Klein, 2001, 2009). The applicability of the traveler-processor model in this regard is telling in that even at this early stage in human economic development, it appears more productive economic systems outcompeted less productive ones.

The Broad Spectrum Revolution

The Broad Spectrum Revolution (BSR) is the term Kent Flannery (1969) used to describe change in hunter-gatherer subsistence practices from narrow (e.g., only large-bodied ungulates) to broad (e.g., including large and small mammals, birds, fish, amphibians, invertebrates, tree nuts, legumes, and grass seeds). This insight was drawn primarily from observations of the late Pleistocene and early Holocene archaeology of the Near East (Braidwood & Howe, 1960; Flannery, 1965; Garrod & Bate, 1937; Hole & Flannery, 1968; Hole, Flannery, & Neely, 1969; Perrot, 1966), and informed by a then-recent theoretical contribution from Binford (1968). For Flannery, changes in the resource base of human foragers led to changes in social practices like food storage and the gendered division of labor. These set the stage for the domestication of plants and animals, the origins of intensive irrigation, and both the social complexity and environmental deterioration that come with agricultural economies.

The central logic behind these changes rests on an equilibrium model describing the relationship between human demography and resource availability. In this model, the initial shift to a broad-spectrum diet would not happen in places where narrow-spectrum resources were abundant; rather it would happen on the less-favorable margins of such places. Broad-spectrum diets would therefore enable human groups to live in both kinds of environments without exceeding the limits of the resource base as a whole, and ultimately, exploitation of a broad range of resources would take hold in both places. Likewise, cultivation of wild grasses (for example) would not be necessary in those places where wild grasses were naturally abundant, but would maintain the population-resource equilibrium if practiced “around the periphery of the zone of maximum carrying capacity” (Flannery, 1969, p. 80). Here, Flannery alluded, was where such plants would be domesticated, alongside the continued exploitation of a wide range of other wild resources, as he claimed was the case along the “hilly flanks” surrounding terminal Pleistocene Mesopotamia, where some of the earliest domesticates have been identified.

Superficially, the underlying logic of the BSR is the same as that behind resource intensification (sensu Boserup, 1965) in hunter-gatherer economies (Morgan, 2015). It is akin to the more costly but higher-output subsistence behaviors predicted by the diet breadth model under increasing human population density and/or environmental change (i.e., where fewer encounters with high-ranked prey, whether due to overhunting or to changes in resource density or distribution, necessitate a shift to eating more costly, lower-ranked resources like small faunas, seeds, and nuts). For most applications, however, the BSR is a description rather than an explanation of hunter-gatherer economic change.

Nevertheless, the notion of the BSR has been a prominent feature of research on late Pleistocene and early Holocene hunter-gatherer adaptations around the world, as well as the early origins of agriculture. Data-oriented archaeologists increasingly identify cases that match the basic descriptions of the BSR, for example in China (Guan et al., 2014; Liu, Duncan, Chen, Liu, & Zhao, 2015; Liu et al., 2010, 2011; Yang et al., 2012) and in the Near East (Stiner, Munro, & Surovell, 2000; Stutz, Munro, & Bar-Oz, 2009; Weiss, Wetterstrom, Nadel, & Bar-Yosef, 2004). More recently, the BSR has re-entered discussions about the merits of theoretical approaches to the evolution of human subsistence (Bird, Bliege Bird, & Codding, 2016; Gremillion, Barton, & Piperno, 2014; Zeder, 2012), and the discussion is heated.

Low-Level Food Production

Often overlooked in discussions about the diversity of hunter-gatherer lifeways is the fact that hunter-gatherers, not farmers or herders, created the domesticated plants and animals that enabled the agricultural revolution. Archaeologists have looked for evidence of this creative process for many years, often identifying transitional phases to connect evidence for an earlier foraging-based subsistence strategy—i.e., one based on “food procurement”—to a later one based on farming and animal husbandry—i.e., “food production” (Binford, 1968; Braidwood, 1952; Childe, 1951; Flannery, 1968). For example, in terminal Pleistocene southwestern Asia, Natufian hunter-gatherers relied largely on wild gazelle, tree nuts, and intensive collection and perhaps cultivation of arid-adapted wild cereals like rye (Secale spp.). Though more sedentary than their Kebaran predecessors, considerable mobility characterized the Late Natufian during the Younger Dryas (ca. 13–11.5 kya), immediately prior to the earliest evidence for exploitation of domesticated cereals during the Pre-Pottery Neolithic A (PPNA), ca. 11 kya (Makarewicz, 2012). In Mesoamerica, the cereal domesticate was maize (Zea mays), a tropical grass derived from Balsas teosinte some 9,000 years ago or more. Here, however, maize domestication was affiliated with a long period of intensive use of wild plants and what were probably already-domesticated plants like squash (Cucurbita spp.), after the Younger Dryas (Ranere et al., 2009). Both cases were marked by a millennium or more of intensive exploitation of wild plant foods and the development of varying degrees of storage and sedentism, but their evolutionary connections to the complex farming societies that developed thereafter are diverse and remain contentious.

In any event, the very notion of a transitional phase dividing the Paleolithic from the Neolithic (the Epipaleolithic and the Mesolithic, for example) reveals that many scholars see agricultural and non-agricultural economies as fundamentally distinct, and that the shift from one to the other ought therefore to be archaeologically visible. This middle ground between foraging and farming is something that Smith (2001) called “low-level food production.” However, rather than arguing for a transitional phase, he argued that low-level food production was an evolutionarily stable economy in its own right, and like any other archaeologically visible cultural configuration, it might persist for hundreds or even thousands of years.

Typically, archaeologists distinguish food procurement from food production by an arbitrary threshold defining the relative importance of domesticated plants or animals to overall subsistence (e.g., 50% for Zvelebil [1996]; 75% for Winterhalder and Kennett [2006]). Others simply look at the entire sequence as a continuum of interactions among humans and other taxa involving progressive human input and progressive evolutionary change (Ford, 1985; Harris, 1989; Rindos, 1984; Zvelebil, 1996). Smith sidesteps both approaches by defining low-level food production as a long period of successive (and sometimes progressive) change in human subsistence behavior, with domesticated plants and/or animals evolving somewhere in the middle.

Low-level food production therefore describes a subsistence system that incorporates a broad array of different resources requiring a broad range of inputs and tactics of exploitation. This vast “middle ground” of human subsistence behavior may at once include both forager/traveler and collector/processor hunter-gatherers, the BSR, intentional management of wild resources and landscapes, pre-domestication cultivation, incidental domestication, incipient agriculture, various kinds of horticulture, and the many processes of resource intensification. Depending on the scale of research, low-level food production thus provides a conceptual framework for thinking about a broad swath of human behavioral diversity. However, in and of itself it is not a model for understanding or explaining how human groups operate, or how these operations evolve. It is simply an observation that the categories of “forager” and “farmer,” or “food procurement” and “food production,” are neither as monolithic nor as dichotomous as some suggest (Hunn & Williams, 1982), nor are they mutually incompatible economic types.

Evidence for low-level food production varies in quality, scale, and scope. Ethnographic observation, oral history, and archaeology all provide examples of people otherwise classified as hunter-gatherers who also cultivate tobacco, seed, and root crops (Deur, 2002; Steward, 1930, 1938; Tushingham & Eerkens, 2016), create and maintain productive aquatic ecosystems (Deur, Dick, Recalma-Clutesi, & Turner, 2015; Whitaker, 2008), and manipulate entire vegetation communities by both fire and mechanical means to enhance the productivity of both food and organic raw materials (Anderson, 1999, 2005; Bird et al., 2016; Lewis, 1973). Increasingly, archaeology points to very early evidence for the exploitation of small-seeded grasses ancestral to eventual plant domesticates, thousands of years prior to any evidence for the morphological attributes of domestication (Weiss et al., 2004; Willcox, 2012). Importantly, it looks as if the various attributes of domestication took hundreds to thousands of years to accumulate, suggesting that in many places the process of domestication was a protracted affair (Fuller, 2007; Purugganan & Fuller, 2011). Likewise, persistent exploitation of wild resources continued long after the initial domestication of plants and animals, with considerable global variation in their relative social, economic, and dietary importance, from the Neolithic to the present day.

The notion of low-level food production as a stable adaptive strategy has its critics. First, global ethnographic datasets (Hunn & Williams, 1982; Murdock, 1967) simply do not point to low-level food producing systems in the way that Smith describes them. Subsistence systems tuned to the spatial and temporal availability of wild plants and animals may be incompatible with the demands of cultivation, harvest, processing, and storage associated with food production (Bellwood, 2005; Flannery, 1968). Second, occasional or sporadic production of high-investment, low-return resources may actually be a feature of instability, rather than stability (Bettinger et al., 2007). Furthermore, sporadic, low-intensity exploitation of wild plants and animals is unlikely to generate the environment of selection required to drive the process of domestication. Rather, the long middle ground characterized by low-level food production (and the many archaeological “cultures” that mark it) likely reflects a period of resource volatility in which individual decisions about what to exploit were situational, perhaps responding to a global reorganization of plant and animal communities from the end of the Pleistocene and the Younger Dryas to the middle Holocene, along with changes in the demography, technology, and social conditions underwriting human exploitation of those communities (Richerson et al., 2001). In some cases these conditions drove the domestication process, but in most cases they did not (Bettinger, Barton, & Morgan, 2010). Low-level food production is therefore a useful way to describe a period of volatile social and technological change, but in itself has little explanatory power.

Late 20th Century Ethnographies

Some of the most useful contributions to our understanding of hunter-gatherer economies come from ethnographic work performed by human behavioral ecologists among groups living in Africa, South America, and Australia. This work hinged on seeing hunter-gatherer economies not as monolithic wholes with common goals, but as amalgams of different economic behaviors, each with its own incentives, which together are posited to make for better-adapted, multifaceted group economies. Among the Hadza of Tanzania, for example, Hawkes et al. (1989) investigated the evolutionary origins of senescence among humans (it is rare for animals to live so long after their reproductive capacities have ended) and concluded that older individuals, especially grandmothers, play a significant role not only in childcare and rearing, but also in group provisioning, generating more calories for the group as a whole. The implication is that the division of labor and the increased economic output made possible by older members of a group are important parts of the evolution of some of the most basic attributes of human economic and social activity. Similar observations have been made among the Meriam of the Torres Strait and the Martu of northwestern Australia, where several researchers have found that the less risky (but also lower-return) foraging and hunting activities of women and children help underwrite the riskier (but also higher-return) hunting activities usually performed by men.

Here again, differential foraging goals and the division of labor along age and gender lines underwrite the overall economic success of the group (Bird & Bliege Bird, 1997, 2000). Alternative foraging goals are also seen among the Aché of Paraguay, where men’s hunting decisions have been linked more to their garnering of prestige as successful hunters than to their immediate personal or familial caloric gain (Hawkes, 1991). A similar pattern has been found among the sea-turtle-hunting Meriam Islanders (Smith & Bliege Bird, 2000). The key here is that prestige-seeking behavior may confer greater access to mates for successful hunters, help cement long-term group social obligations, and occasionally provide very high-return resources, freeing up other, less risk-seeking individuals like children, older males, and females to generate the bulk of the calories consumed by the group. This type of research, of which the above represents only a small part, is intriguing because it shows how different, seemingly counterintuitive individual economic choices can articulate to produce better-adapted group economies. If true, this points to how non-kin-based economies evolved in human societies, a development critical to the much larger, articulated economies we see in agricultural, urban, and modern human societies across the Holocene and into the present day.
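The risk-pooling logic behind this division of labor can be sketched with a toy simulation. All the numbers below are hypothetical illustrations, not field data: hunting is modeled as a high-mean, high-variance activity and gathering as a modest but reliable one, and we compare how often a group falls short of its caloric needs under each mix.

```python
import random

random.seed(42)

def simulate_day(n_hunters, n_gatherers):
    """One day's caloric return for a group. Hunting is modeled as
    high-mean but high-variance; gathering as modest but reliable.
    All numbers are invented, chosen only to illustrate the logic."""
    calories = 0.0
    for _ in range(n_hunters):
        # a hunter lands a big kill about one day in ten
        calories += 30_000 if random.random() < 0.10 else 0.0
    for _ in range(n_gatherers):
        # a gatherer reliably brings in roughly 2,000 kcal
        calories += random.gauss(2_000, 300)
    return calories

def shortfall_rate(n_hunters, n_gatherers, need=15_000, days=10_000):
    """Fraction of simulated days the group falls below `need` kcal."""
    short = sum(simulate_day(n_hunters, n_gatherers) < need
                for _ in range(days))
    return short / days

# Hunters alone frequently come home empty; a mixed division of
# labor almost never leaves the group below its caloric floor.
print(shortfall_rate(10, 0))   # hunters only
print(shortfall_rate(5, 10))   # mixed economy
```

Under these made-up parameters, the hunters-only group goes hungry on roughly a third of days (every hunter misses), while the mixed group's gathered baseline nearly always clears the floor, with hunting returns arriving as a bonus. This is the sense in which low-variance foraging "underwrites" high-variance hunting.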

The Contemporary State of Hunter-Gatherer Research

Much contemporary work among hunter-gatherers is activist in nature, geared towards recognizing indigenous rights and helping such groups cope with contact and articulation with the global political economy. This work runs the gamut from establishing the first regular contact with disenfranchised groups like the Mashco Piro in Amazonia, who face increasing incursion into their traditional sphere by loggers and miners, to helping the whale-hunting Inupiaq of northwestern Alaska cope with rising sea levels, the costs and benefits of participating in the region’s oil economy, and the constraints of federal and global prohibitions on certain subsistence resources. This type of work is concerned more with cultural preservation in the face of articulation with global, capitalist economies than with description or analysis of these economies. But there are several places where hunter-gatherer economies have persisted or even reversed course from complete articulation with the global scene. What follows provides a few examples of recent work while also drawing attention to some of the more salient contemporary archaeological work on hunter-gatherer economies, where the opportunity for original hunter-gatherer economic research is considerably more varied.

Siberian Hunter-Gatherers in the Post-Soviet Era

One of the more interesting late 20th- and early 21st-century developments in hunter-gatherer economic research consists of the changes wrought on indigenous economies, like those of the Giliak and the Orochen Evenkis in Siberia, by the collapse of the Soviet Union in 1991. Hunting, herding, and fishing groups like these were incorporated into the Russian Empire in the late 19th and early 20th centuries, paying taxes often in the form of skins or pelts, which integrated their hunter-gatherer and herding economies with the larger Russian and world economy. With the rise of the Soviet Union during the 20th century, traditional lifeways and economic activities suffered at the expense of collectivization and subsidization by the Soviet state apparatus. But after the Soviet collapse in 1991, state subsidies disappeared, and small groups of Orochen Evenki living in the Baikal region turned to traditional reindeer herding and wild animal hunting not only to subsidize their income, but also as focal economic pursuits. Anthropologists like Anderson (1991, 2006) track the economic and social changes undergone by these people, noting the resilience of their traditional use of space and social structure in the face of external change, and how the resurgence of their hunting economy has become a viable economic pursuit. Critical to the subject at hand is that hunting wild animals forms a significant part of the Evenki subsistence base, and that the return to this economic activity came after incorporation into state-level and state-sponsored economies largely reliant on agriculture, once again throwing the notion of unilinear evolutionary trajectories out the door at the dawn of the 21st century.

South American Ethnography and Archaeology

Contemporary studies of hunter-gatherer economies in South America fall into three groups. Where the foraging lifeway was fully (or nearly so) supplanted by food production and domestication (e.g., in highland Peru), focus is on the timing, causes, and nature of the transition and how it changed facets of social and political life (Dillehay, 2011). Where hunting and gathering persisted as the dominant economy until contact with Europeans (e.g., Patagonia), attention centers on human contributions to Pleistocene megafaunal extinctions and more general foraging dynamics and their change through time (Borrero, 2013). Critical in this regard is the effect of climate change on hunter-gatherer adaptation, where some identify depopulation and abandonment, but also more intensive use of favorable locales brought about by middle Holocene warming (Garvey, 2012; Yacobaccio & Morales, 2005), substantial settlement pattern shifts during the Medieval Climatic Anomaly (ca. 1.1–0.6 kya) (Morales et al., 2009), and evidence of a shift away from domesticates (and towards consumption of wild species) during the Little Ice Age, 600–150 years ago (Gil et al., 2014). Lastly, where relatively small groups currently survive on wild resources or did until the very recent past (e.g., the Aché), ethnographic studies center on the microeconomics of hunting and gathering (Janssen & Hill, 2014). The contributions of researchers working among the Aché have already been described, but the import of these studies cannot be overstated given the microeconomic data this research records and the utility of such quantitative measures to understanding the economics behind subsistence choices, differential foraging goals, the division of labor along age and gender lines, and the way cooperation develops among kin and non-kin alike.

North American Archaeology

Though hunter-gatherers inhabited the continent for at least seven millennia before the initial domestication of plants in what is now the southeastern United States, and for eleven millennia before the wholesale adoption of the Mesoamerican domestic triumvirate of maize, beans, and squash in the American Southwest and eastern United States, most 21st-century hunter-gatherer-oriented archaeological research focuses on those areas where hunter-gatherers were still on the scene when European explorers arrived in the 16th and 17th centuries: the west coast and the arid regions west of the 100th Meridian.

Most of this research is ecologically focused and situated to explain the development of more intensive, higher-yield economies. In coastal Texas, for example, Hard and Katzenberg (2011) use isotopic data on human skeletal material to argue that a switch to eating low-return plant foods began there around 2.5 kya, which Johnson and Hard (2008) attribute to demographic increase and population packing—to the point that population densities approached those of maize-based agriculturalists. This may explain why agriculture was never adopted in the region: hunter-gatherer economies were as productive as, or more productive than, agricultural ones. Conversely, on the California coast, researchers like Erlandson et al. (2009) document shifts to eating large-bodied pelagic fish in the late Holocene, which they attribute to the invention or adoption of sophisticated technologies like canoes, nets, and fishhooks that facilitated the capture of large-bodied, high-return prey. Here, improvements in technology led to increased economic efficiency and eating higher on the food chain, in contrast to the patterns seen in coastal Texas and across much of arid western North America.

There has also been a resurgence of interest in the ways hunter-gatherers modify their environment by burning to increase the yield of desirable food and other resources, in a vein similar to that seen in Australia (Anderson & Rosenthal, 2015; Cuthrell, Striplen, Hylkema, & Lightfoot, 2012; Lightfoot et al., 2013). Archaeological (Cuthrell, 2013) and paleoecological datasets (Klimaszewski-Patterson & Mensing, 2016) provide convincing evidence that native Californians modified the biota of the state to such an extent that the historical California landscape might be seen as more the result of cultural than natural processes. This is important because it shows the degree to which non-farming societies modify the landscape to construct ecological niches conducive to supporting large, semi-sedentary hunter-gatherer populations (Broughton, Cannon, & Bartelink, 2010). The extent to which hunter-gatherers shaped the evolution of global environmental systems, and indeed their own biocultural evolution, through various forms of landscape modification and management over their entire history is an emerging and important body of research worldwide (Archibald, Staver, & Levin, 2012; Bird et al., 2016; Boivin et al., 2016).

The Future of Hunter-Gatherer Studies

Given the proliferation of the global economy and the spread of logging, mining, and other developments into the few lands still occupied by people largely reliant on wild resources, it seems clear that future study of living human foragers will be limited and will require considerable creativity, relying on the occasional ethnography geared towards understanding how and why people supplement their economic output with wild resources in the face of economic hardship. Archaeology holds more promise, in part because of the geographic and temporal scope of materials available to archaeologists, but also because archaeology retains the ability to identify hunter-gatherer behavioral diversity without necessarily relying on ethnographic analogy.

So what is left to learn about hunter-gatherers through observation of contemporary people? There are of course a few groups of people still identifiable as hunter-gatherers (e.g., the Hadza, Raute, Aché, and Martu), and it does not matter if they have always lived as foragers or if they have returned to a foraging life after exclusion from some other economic strategy, as is the case with the Orochen Evenkis, the Mikea, and arguably, the !Kung San (Anderson, 2006; Tucker, 2002; Wilmsen, 1989). The opportunity to learn how people manage to provision themselves without agricultural products, market exchange, or state support exists, but the range of opportunities is smaller than it was 100, 50, or even 20 years ago. Methods of study, therefore, must be creative.

In addition to those few remaining cases where foraging is “traditional,” there are also opportunities to learn from people living in the interstices of contemporary life. Much in the same way that some people carve out opportunities for low-level food production in urban environments (Balmori & Morton, 1993), others manage to make ends meet through collecting and hunting, particularly during periods of economic stress. In the future, opportunities for understanding how hunting and gathering actually work may come from the demand for food in cases of extreme economic hardship or sociopolitical instability. Though somewhat nontraditional from an anthropological perspective, if the objectives of studying hunter-gatherers are aimed at understanding how individuals and groups function and evolve in the absence of state bureaucratic structure, capitalist economic structure, or domesticated plants and animals, it doesn’t really matter where the insight comes from.

Many of these studies will no doubt focus less on materialistic concerns and more on questions of human rights and the abuse of them. Indigeneity, what it means, and how it shapes the global political landscape, will feature prominently in these discussions. A growing number of contemporary indigenous communities record their own economic data, often referred to as Traditional Ecological Knowledge, with and without anthropologists. Many tribal communities, particularly in North America, have large cultural programs with archaeologists on staff, and there is an ongoing “first foods” movement, with health programs often explicitly promoting traditional foods in the diet of native communities. This shared interest in subsistence and diet can link archaeology and indigenous communities in a way that has not yet been fully realized. Furthermore, the effects of commercial, economic, and industrial development on traditional life will also be of interest. The extent to which these interests will help us understand hunter-gatherer economies is an open question. In all of this there will still be opportunities to understand the social impacts of technological change, and indeed, the nature of technological change itself. Likewise, these conditions engender opportunities to learn more about the intergenerational transfer of information, wealth, and potential. They may also afford opportunities to learn about the nature of the conflict that comes with intergenerational differences in ethics, language, and practice.

Beyond this, archaeology is clearly poised to reveal more about hunter-gatherer economies than any other source of information: the timespan under consideration extends well into the Pleistocene and the geographic scope is worldwide. But are we prepared to let archaeology be the sole source of information about hunter-gatherer life? How will the social sciences deal with misunderstandings about our “Pleistocene predisposition” (Eaton, Cordain, & Lindeberg, 2002; Eaton, Shostak, & Konner, 1988) and the nature of the “environment of evolutionary adaptedness” (Cosmides & Tooby, 1987)? How will archaeologists deal with the limits of their tools to interpret and understand the proxy evidence for past behavior? Likewise, how will archaeologists improve their methods for analyzing broad patterns of archaeological data that attest to the diversity of hunter-gatherer lifeways? And how can they do this while protecting cultural heritage? Finally, what will be the role of descendant communities in global archaeological work?

As previous sections make clear, however, the study of hunter-gatherer economies requires, and will increasingly require, collaboration with a variety of disciplines beyond anthropology. Interpretations of prehistoric foraging economies and their evolution rely on detailed paleoclimatological and paleoenvironmental data to identify the opportunities and challenges associated with past environments. Ethnobotanical and genetic research provides crucial information pertaining to the availability, edibility, and domesticability of plants. To the extent that foraging behaviors are influenced by conspecific competition and/or social learning, demography is an important line of evidence. Studies of modern diet and nutrition can inform model building, allowing human behavioral ecologists to incorporate factors like nutrient bioavailability and complementarity. Beyond understanding the physical aspects of foraging economics, anthropologists increasingly recognize the importance of psychology and cognitive neurobiology for understanding how human motivations, decision-making, teaching, and learning affect foraging behaviors.

In sum, future research on hunter-gatherer economies will likely be largely archaeological but will also be strengthened by creative approaches to studying extant peoples living on the periphery of the global market economy and state apparatuses. The work will clearly be interdisciplinary, falling as it does between the approaches of the physical and social sciences and the more humanities-oriented research of some cultural and activist anthropologies. Beyond data acquisition and access to study populations, however, the real challenge will be to recognize and try to operate outside of the constraints of whatever trope is currently in vogue with regard to how humans interact with and extract economic benefit from wild resources. As the history of hunter-gatherer research shows, how hunter-gatherers and their economies are conceptualized—whether brutish, noble, primitive, adapted, affluent, simple, or complex—often tells us more about ourselves and our current sociopolitical and ideological milieu than about the foraging societies themselves. Doing so requires, first and foremost, that the object under study be human economic behavior in all its forms. Put another way, understanding how groups in the past solved problems of resource acquisition, storage, and distribution generates greater understanding of human economic behavior as a whole; whether it is based on wild or domestic products is to some extent irrelevant.


There are no easy answers regarding exactly what hunter-gatherers are or are not or what their evolutionary relationships are with agriculturalists. While there can be no question that hunting, fishing, and gathering were the earliest modes of human economic production and that hunting and gathering preceded and ultimately gave rise to agriculture, these are superficial and facile generalizations. Difficulty making more nuanced generalizations stems from the fact that until relatively recently (in evolutionary terms), hunting and gathering was the only mode of human economic production, whether in the tropics, on the coast, in the desert, on the steppes, in temperate zones, or nearer the poles. Add to this the fact that hunting and gathering is older than our species and persisted in this multiplicity of environments for some 200,000 years after the evolution of Homo sapiens, and it is clear that the opportunity to develop diverse lifeways centered on wild resources with no ethnographic or historical analog was indeed considerable. Given this, it bears repeating that hunting and gathering is merely an economic orientation geared towards exploiting wild plants and animals that subsumes degrees of diversity in population density, technology, social structure, and ideology that are comparable to or even exceed those associated with other modes of economic production.

When it comes to the relationship of hunting and gathering to agriculture, it is worth considering the types of behaviors agriculture actually requires. Agriculture necessitates bulk resource acquisition, generation of surplus, storage, considerable labor inputs, almost certainly the division of labor, specialized technologies, environmental manipulation, and most likely a reorientation of social norms geared towards recognizing private property, if only to allow for the incentive to store costly, bulk-acquired domesticated crops (Bettinger, 2006). It is abundantly clear that all of these behaviors are found in one form or another among what many researchers term “complex” hunter-gatherer societies, from the ethnographically documented sedentary and storing societies of coastal western North America, to the Mesolithic societies of Europe and the Jomon in Japan in the early Holocene, to the Natufian hunter-gatherers of southwest Asia during the terminal Pleistocene. It may even be that an alternative strategy of broad-spectrum, low-level food production represents an evolutionary pathway outside of the forager-farmer continuum. Given this, it is clear that most if not all of the technological, social, and even ideological behaviors associated with agriculture were present in hunting and gathering societies before the development of economic modes of production centered on domesticates. It is also clear that engaging in these types of behaviors does not necessarily mean that agriculture will eventually develop out of them.

This suggests that, at least in its incipient state, agriculture was not very different from intensive hunting and gathering, the only real difference being the degree to which artificial selection had altered the wild plants and animals that early agriculturalists exploited. It is indeed entirely possible that hunter-gatherers through the Late Pleistocene and early Holocene experimented with deliberate planting, with most of these experiments ultimately failing. During the Pleistocene this appears to have been due mainly to climatic variability and low atmospheric CO2. During the Holocene, failure was likely due in large part to the failure of social relations of production to develop in ways that would offset the freeloader problem found in almost all small-scale hunter-gatherer societies. The future of hunter-gatherer studies, though faced with many challenges, consequently stands to shed additional light on the origins of human behavioral diversity, and how aspects of this diversity eventually resulted in the requisite mix of the technologies, labor practices, and social norms needed to make agriculture not only work, but also to outcompete hunter-gatherer modes of production over the long haul.

Hunter-Gatherer Life / Hugh Brody

Unfortunately, I don’t have any of Hugh Brody’s books. Here are two article excerpts.

Life As A Hunter-Gatherer

Hugh Brody is a writer, anthropologist and filmmaker. From his experiences of hunter-gatherer culture gleaned from years of living and hunting with the Inuit of the Arctic and the salmon-fishing tribes in the Canadian Northwest, Brody reaches through everyday realities to reflect on the human condition.

Speaking to Outlook about his latest book, The Other Side Of Eden, Brody introduces us to the hunter-gatherer way of life and explores the misunderstandings and the historic division between hunter-gatherers and farmers.

Hunter-gatherers have always had a bad press. We think of them as primitive, whilst farmers are perceived as a definite step forward in human progress.

Having spent a great deal of his life living with hunter-gatherers, Brody has not only observed but also attempted to absorb and truly understand people’s relationship with the land, and he is quick to dispel the myth that hunter-gatherers are uncivilised. He comments:

‘The thing about being with the Inuit is that you have a sense of being with the most gracious, most generous, most sophisticated of human beings. So far from being simple, they are very, very rich and complex.’

Having spent time with the Inuit in the early 1970s and ’80s, Brody was privy to great opportunities. He travelled with dog teams, ventured into the snow, and even lived in snow houses. Living and working with the Inuit people, he reversed the colonial relationship whereby the Inuit way of life is considered ignorant, and instead asked the Inuit to teach him about their ways.

Already fluent in French, German and Hebrew, Brody has also learnt two Inuktitut dialects, and considers language to be the key to understanding cultures. Language he claims ‘reveals different ways of knowing the world.’ He elaborates:

‘Colonialism constitutes them as ignoramuses (developmentally defective / retarded) – vessels to be filled with the truth. But if you ask them to teach you their language you give them a chance to reverse this – you are the one who doesn’t know anything. Instead of saying “seal” you say “penis” and they all laugh at you.’

The much-quoted fact that the Inuit language has 347 words for snow would, however, surely be a hindrance to learning the language? Not according to Brody:

‘Hunter-gatherer language doesn’t have categories, they don’t have conceptual terms like snow – they have very specific words such as “snow that has recently fallen” or “snow that is falling through the air”, “snow that has been driven in the wind” and it goes on and on. But they are all very specific pieces of information about the environment; they are all translatable and learnable (sic). There is nothing terribly difficult or mysterious about it.’

It seems that ASD Asperger types are not the only humans to use concrete language that is highly detailed and specific, especially in describing objects in the natural environment. This language is the basis of scientific observation. Would psychologists “categorize” the Inuit as “developmentally disordered”? YES.

Settlers and Nomads
To Brody the hunter-gatherer culture is one which is both respectful of the planet and of its people. He discusses the Inuit’s relaxed attitude towards child discipline and marvels at their subsistence existence.

However, he is aware that these can also be the very qualities that have led others to be dismissive of the hunter-gatherer way of life. Believing the popular conception of farmers as settlers and hunter-gatherers as nomads to be untrue, he comments:

‘We have this idea that farmers are deeply settled in their places, whereas hunter-gatherers are roaming around like the beast of the fields …Integral to the story of farming is people going out on the land and colonising it.’

‘Colonising, frontiers and new settlements are absolutely at the heart of the story of agriculture. Whereas in fact hunter-gatherers are completely committed to one place because their success depends on their knowledge of the one place and their knowledge is not transferable.’

The Demise of the Hunter-Gatherer
The idea that farming is associated with a quest for more land has led Brody to theorise about the demise of the hunter-gatherer. Central to Inuit culture is a conviction that their land is ‘Eden and exile must be avoided’. If this is the case, why do they struggle to control their traditional territories? Brody explains:

‘More farms lead to more people, and more people lead to more farms and so you get a cycle which causes drastic population expansion and these people go to the land of the hunter-gatherers. On the whole history shows that the hunter-gatherer was driven out or completely absorbed into the farming world.’

‘… have been driven out by the aggression and the very success of farming. Farming is a very brilliant device for accumulating surplus food and for having lots of children.’ (Overpopulation and its dire consequences.)

Where farming is not possible, hunter-gatherer communities continue to exist. Those who survive struggle to maintain their identity and, with more and more children attending English-speaking schools, they fear that they will lose their language. Brody explains:

‘Hunter-gatherers around the world talk most intently about loss of language. To know the language is to have the stories about the place and have the detailed knowledge …To lose it is to lose your own claim to the land. To lose your connection and therefore to lose your links to the past and your links to the future.’


Are any other ASD / Asperger readers thinking, “Yeah, that’s me!” regarding the lifestyle preferences of hunter-gatherers and the “dimwitted” social typical misunderstanding of almost everything we do?


Brody excerpt…
Maps of Dreams /  Hugh Brody (bio)

The rivers of northeast British Columbia are at their most splendid in the early fall. The northern tributaries of the Peace achieve an extraordinary beauty; they, and their small feeder creeks and streams, are cold yet warm—perfect reflections of autumn. The banks are multi-colored and finely textured; clear water runs in smooth, shallow channels. The low water of late summer reveals gravel and sand beaches, textures and colors that are at other times of the year concealed. Such low water levels mean that all these streams are easily crossed, and so become the throughways along the valleys that have always been at the heart of the Indians’ use of the land. In October those who know these creeks can find corners, holes, back eddies where rainbow trout and Dolly Varden abound.

The hunter of moose, deer, caribou (and in historic times, buffalo) does not pursue these large animals without regard to more abundant and predictable, if less satisfying, sources of food. The man who tracks and snares game, and whose success depends on his constant movement, cannot afford to fail for much more than two days running. On the third day of hunger he will find it hard to walk far or fast enough: hunger reduces the efficiency of the hunt. Hunger is inimical to effective hunting on foot; yet continuance of the hunt was, not long ago, the only means to avoid hunger. This potential source of insecurity for a hunter is resolved by his ability to combine two kinds of hunting: he pursues large ungulates in areas and with movements that bring him close to locations where he knows rabbits, grouse, or fish are to be found. These are security, but not staples. Hunting for large animals is the most efficient, the most rational activity for anyone who lives in the boreal forest. But such a hunter would be foolhardy indeed to hunt for the larger animals without a careful and strategic eye on the availability of the smaller ones.

In October, only a month after Joseph Patsah [elder of the Beaver clan] and his family first spoke to us about their lives, they suggested that I go hunting with them—and, of course, fishing. By now the rainbow trout would surely be plentiful and fat. Joseph said that he also hoped we could go far enough to see the cross [a medicine cross carved into a tree]. One evening, then, he proposed that we should all set out the next day for Blue-stone Creek.

Between a proposal to go hunting and actual departure there is a large and perplexing divide. In the white man’s world, whether urban or rural, after such a proposal there would be plans and planning; conversation about timing and practical details would also help to build enthusiasm. In Joseph’s household, in all the Indian households of northeast British Columbia, and perhaps among hunters generally, planning is so muted as to seem nonexistent. Maybe it is better understood by a very different name, which is still to suppose that planning of some kind does in fact take place.

Protests against the hunting way of life have often paid hostile attention to its seemingly haphazard, irrational, and improvident nature. Before the mind’s eye of agricultural or industrial man loom the twin spectres of hunger and homelessness, whose fearsome imminence is escaped only in the bright sunlight of planning. Planners consider many possibilities, weigh methods, review timing, and at least seek to deduce what is best. To this end they advocate reason and temperance, and, most important, they are thrifty and save. These ideas and dispositions, elevated to an ideal in the economics of nineteenth-century and secular Puritanism, live on in the reaction of industrial society to hunters—and in the average Canadian’s reaction to Indians. And a reaction of this kind means that a person, even if inclined to be sympathetic to hunters and hunting, has immense difficulty in understanding what planning means for hunters of the North.

Joseph and his family float possibilities. “Maybe we should go to Copper Creek. Bet you lots of moose up there…


Neotenic neurotypical disrespect for cultural meaning: it’s all about desecration by appropriation of authentic human creativity. The reduction of meaningful objects to crass novelty items is pornographic. Typical of the lack of “empathy” or depth of feeling in modern humans.

The U.K.-based fashion label KTZ’s fall 2015 men’s collection includes a number of garments based on traditional Inuit designs and a sweater that appears to be a replica of a shaman’s jacket, which a Nunavut woman says was used without her family’s consent.

“I was in shock, I was furious, I was angry,” said Salome Awa, who works as a morning show producer at CBC Nunavut.

“This is my great-grandfather’s sacred garment copied right down to the tee.”

Variations in Hunter Gatherer Sleep Patterns / Sentinel Behavior

Implications for Asperger types? See posts about Asperger visual thinking, brain organization and ‘socially odd’ behavior as a legacy of pre-domesticated hunter-gatherer humans. 
Proc Biol Sci. 2017 Jul 12;284(1858). pii: 20170967. doi: 10.1098/rspb.2017.0967.

Chronotype variation drives night-time sentinel-like behaviour in hunter-gatherers.


Sleep is essential for survival, yet it also represents a time of extreme vulnerability to predation, hostile conspecifics and environmental dangers. To reduce the risks of sleeping, the sentinel hypothesis proposes that group-living animals share the task of vigilance during sleep, with some individuals sleeping while others are awake. To investigate sentinel-like behaviour in sleeping humans, we investigated activity patterns at night among Hadza hunter-gatherers of Tanzania. Using actigraphy, we discovered that all subjects were simultaneously scored as asleep for only 18 min in total over 20 days of observation, with a median of eight individuals awake throughout the night-time period; thus, one or more individuals was awake (or in light stages of sleep) during 99.8% of sampled epochs between when the first person went to sleep and the last person awoke. We show that this asynchrony in activity levels is produced by chronotype variation, and that chronotype covaries with age. Thus, asynchronous periods of wakefulness provide an opportunity for vigilance when sleeping in groups. We propose that throughout human evolution, sleeping groups composed of mixed age classes provided a form of vigilance.

Chronotype variation and human sleep architecture (including nocturnal awakenings) in modern populations may therefore represent a legacy of natural selection acting in the past to reduce the dangers of sleep.

PMC5524507 [Available on 2018-07-12]
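The asynchrony mechanism the abstract describes can be illustrated with a toy simulation: give each sleeper a randomized chronotype (sleep onset, wake time, brief nocturnal awakenings) and measure what fraction of night-time epochs has at least one person awake. This is a minimal sketch, not the study’s analysis; the group sizes, onset spread, and awakening lengths are invented parameters.

```python
import random

NIGHT_MIN = 10 * 60  # a 10-hour night scored in 1-minute epochs

def awake_epochs(rng):
    """Epochs (minutes) during which one simulated person is awake."""
    onset = rng.randint(0, 180)               # falls asleep 0-3 h after dusk
    wake = NIGHT_MIN - rng.randint(0, 180)    # wakes 0-3 h before the window ends
    awake = set(range(onset)) | set(range(wake, NIGHT_MIN))
    for _ in range(rng.randint(1, 4)):        # brief nocturnal awakenings
        start = rng.randint(onset, wake - 1)
        awake.update(range(start, min(start + rng.randint(2, 15), wake)))
    return awake

def sentinel_coverage(n_people, seed=0):
    """Fraction of night-time epochs with at least one group member awake."""
    rng = random.Random(seed)
    group = [awake_epochs(rng) for _ in range(n_people)]
    covered = set().union(*group)
    return len(covered) / NIGHT_MIN

for n in (1, 5, 20):
    print(f"{n:2d} sleepers -> {sentinel_coverage(n):.1%} of epochs covered")
```

Even with these made-up numbers, coverage climbs quickly with group size: staggered chronotypes mean someone is almost always awake, which is the sentinel effect the Hadza data showed at 99.8% of epochs.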


Looks like some “night shift” sentinels in HG cultures were also doing astronomy…

The discovery of a 10,000-year-old lunar calendar at Warren Field in Scotland has archaeologists scrambling to rethink the beginnings of history. The calendar itself is primitive, yet it is the oldest calendar ever discovered, predating by several millennia the Bronze Age Mesopotamian calendars that had previously held that title. The array is made up of 12 pits, one for each month of the year, arranged in a 160-foot-long arc and topped with a series of stones thought to represent the phases of the moon. The full-moon stone is prominently displayed in the middle, and on the far side is a notch showing where the sun would have risen on the midwinter solstice 10,000 years ago.