Sick Building Syndrome and ASD / Sensory Sensitivity

Regarding the supposedly “developmentally defective” state of ASD / Asperger individuals as “over-sensitive” to the environment: the faulty assumption is made that “typical, normal, typically developing” humans are not affected by, or damaged by, manmade environments. This is preposterous.

The list of chemical pollutants below encompasses only those substances common in buildings; one cannot escape these dangers by retreating to the outdoors, since every environment on the planet has been altered by human activity. The list also does not include overcrowding, industrial accidents, destruction of natural environments, extinction of the plants and animals necessary to healthy ecosystems, lack of clean water and nutritious food, or the effects of processed foods. Nor does it include those basic toxic social activities: war, violence, and abuse of every imaginable type, wherever hyper-social humans dominate the environment.

The Environmental Illness Resource: Mission Statement

http://www.ei-resource.org/illness-information/related-conditions/sick-building-syndrome/

“The Environmental Illness Resource seeks to provide those with environmental illnesses with information of the highest quality in the hope that this will lead to improved quality of life and perhaps even recovery of good health. In addition, to provide a free and open online community in which members may exchange information between themselves and support each other in their healing journeys.”

Chemical Pollutants:

Combustion Pollutants

Various chemical pollutants that can affect the health of a building’s occupants are produced when heating systems or gas fired appliances such as stoves are poorly maintained, and thus don’t burn fuel efficiently, or don’t vent exhaust correctly.

The main pollutants from this source are:

Carbon Monoxide (CO) – a gaseous asphyxiant, CO is known as the ‘silent killer’ because it is colourless and odourless. When breathed in, CO binds to red blood cells, preventing them from carrying oxygen and essentially suffocating the victim. Methylene chloride, a common toxic solvent used in many products such as paints and paint strippers, may also break down to form carbon monoxide.
Sulphur Dioxide (SO2) – is a colourless gas with a strong odour like that of a struck match. Sulphur dioxide is an irritant to the respiratory system, and exposure to high concentrations for short periods of time can constrict the blood vessels in the lungs and increase mucus flow, making breathing difficult. Those most at risk from these effects include children, the elderly, those with chronic lung disease, and asthmatics. Other harmful effects of SO2 include its ability, with chronic exposure to low concentrations, to impair the respiratory system’s defenses against foreign particles and bacteria, and to enhance the harmful effects of ozone.
Nitrogen Dioxide (NO2) – is another toxic gas produced from combustion of fuels. It can be fatal in high concentrations, whilst lower levels, like SO2, act as irritants to lung tissue. Long term low level exposure can destroy lung tissue and lead to emphysema. Long term exposure also makes people more susceptible to respiratory infections such as pneumonia and influenza. The risk of ill-effect is greatest for the same groups most affected by SO2.

Volatile Organic Compounds (VOCs)

Volatile organic compounds are organic (carbon-based) compounds that evaporate at ambient temperatures within a building. VOCs can ‘offgas’ from building materials and from much of the contents of most buildings. Their health effects range from irritation of the eyes, nose, and throat, to breathing difficulties, to an increased risk of developing cancer. An example of a VOC commonly present in indoor air is formaldehyde, which is also one of the most toxic, being both a strong respiratory irritant and a carcinogen.
Building Construction – High levels of formaldehyde offgas from particle board. Modern buildings or buildings renovated with modern materials suffer the most from offgassing of VOCs due to the extensive use of particle board rather than solid wood or stone/brick for interior walls etc. Particle board is also often used in place of solid wood in modern furniture such as computer desks and shelving. Although a cheap alternative to other materials, particle board is a major source of VOCs due to the high content of powerful adhesives used in its manufacture. Formaldehyde and other VOCs offgas from particle board used in building construction and furniture for years, with the highest concentrations being generated in the first 6 months.

Carpeting is another major source of VOCs in many buildings since a large number of chemicals are used in its manufacture in the form of glues, backing materials, flame retardants, and dyes. The specific VOCs that offgas from new carpet include acetone, toluene, xylene, formaldehyde, and benzene derivatives. These chemicals are all known to cause irritation, affect breathing, and produce various neurological symptoms. Many of them are also potent carcinogens.

Finishes such as paints and varnishes can also increase the VOC content of a building or room. That fresh paint smell is the result of paint’s high content of VOCs in the form of solvents and binders. In the case of oil-based paints, whose use is thankfully being reduced in indoor paints, the entire base of the paint is made up of VOCs. The US EPA estimates that off-gassing from architectural coatings accounts for about 9% of the VOC emissions from all consumer and commercial products. Many of the VOCs used in paints have been banned or are being phased out as they are now recognized to be highly toxic and/or carcinogenic.
Chemicals Used Within A Building – The various chemical based products routinely used inside a building can be an equally large source of VOCs. Products that contain VOCs range from chemical products used to clean a building to marker pens and printer ink, common in an office or school environment.

Cleaning products contain a range of toxic VOCs including diethyl phthalate, found in a range of products, toluene, found in stain removers, and hexane/xylene, found in aerosol sprays. Diethyl Phthalate is a known endocrine disrupter (interferes with hormone activity), toluene is a known carcinogen (cancer causing agent) and can cause neurological problems, and finally both hexane and xylene can also damage the nervous system.

Marker pens are a particularly concentrated source of VOCs as their very strong smell indicates. Their chemical constituents include methyl ethyl ketone (MEK), toluene, and formaldehyde. The VOCs present in marker pens have various consequences for human health including neurological effects. Ink cartridges and toners used in printers also contain VOCs, albeit at less concentrated levels than marker pens.

Electronic equipment also offgases a large amount of VOCs. In an office full of computers, these essential pieces of equipment can be a substantial source of VOCs which offgas from materials such as flame retardants and various other chemicals used in their manufacture.

Besides the above there are many other sources of VOCs within the average office building or other communal building. These include air fresheners, personal care products such as deodorants and perfumes, and laundry detergent and fabric softener residues on the occupants’ clothing.

For a more detailed look at some of these VOC sources see our multiple chemical sensitivity (MCS) page.

Heavy metals

Although much has been done to reduce or eliminate the use of heavy metals in buildings over the past few decades, older buildings may still contain significant amounts of these highly toxic substances. Buildings built or extensively renovated after the early 1990s in most developed countries are unlikely to have a problem, but many buildings constructed before this time could pose a risk of heavy metal poisoning. The two most common heavy metals present in buildings are lead and mercury.

Indoor paint manufactured before 1990 and outdoor latex paint manufactured before 1991 may contain mercury, which was added to paint mainly to prevent the build-up of mold on walls, as mercury is an effective antifungal agent. Mercury can damage health in a number of ways, from impairing detoxification to causing serious neurological damage and birth defects. In fact, the mercury-containing compound thimerosal was routinely added to vaccines to prevent contamination by fungi and bacteria until concern about its possible role in autism led to its removal. (This does not translate to: vaccines themselves cause autism.) Mercury may also be present in small amounts in computer and electronic equipment.

Lead is another common problem in older buildings because it was also added to paints until a couple of decades ago. Lead-based paint is still a major problem in older buildings particularly when the residues are disturbed and become airborne such as during renovation or construction projects. Like mercury, lead can cause severe neurological damage and a host of other problems.

Unless disturbed by renovation, it’s unlikely that heavy metals would be a major contributor to cases of sick building syndrome. For older buildings, however, the risk exists and must always be considered. (Note that poor people are more likely to be chronically exposed to “sick” buildings.)

Biological Pollutants

As well as the chemical pollutants described above, various biological contaminants often contribute to cases of sick building syndrome. In fact biological factors are reported to be behind the majority of cases. These biological pollutants can cause illness through three different mechanisms:

  • Infection
  • Allergy/Hypersensitivity
  • Toxicosis – symptoms caused by toxins produced by micro-organisms e.g. mycotoxins produced by mold/fungi

There are many sources of biological pollution that can affect a building and many reasons why a building might become contaminated and cause illness in its occupants.

The following are the main sources of this form of pollution:

Toxic Black Mold – is reported to be the leading cause of sick building syndrome and building related illness. Mold grows rapidly in warm and damp environments. If the indoor environment is too humid or if water damage occurs through leaks or rising damp, mold growth is very likely to occur.
Viruses & Bacteria – are common in every building, especially high occupancy buildings such as offices and schools. These micro-organisms can make a significant contribution to causing SBS. They become increasingly problematic if humidity levels are either too low or too high, as a result of how their growth is affected and the fact that our defenses against them are also affected by humidity levels.
Dust Mites – are highly allergenic and thrive on the constant supply of shed human skin cells that accumulate in carpeting, soft furnishings, and other areas. Like mold and bacteria, dust mites like the warm and relatively humid environment that we usually provide in our buildings.
Pollen – is another allergy-causing substance that can accumulate in a building if proper ventilation and filtering is not maintained. Pollens from various trees and plants can be troublesome for a great number of people. Aside from being carried on breezes through open doors or windows, pollens can also be brought indoors on the occupants’ shoes and clothing.
Insect Body Parts – although not well known, are especially potent allergens for some people. Cockroach allergens are particularly troublesome and are commonly implicated as contributors to sick building syndrome. They usually become a problem only when sanitation is poor.

The above are collectively known as bioaerosols. The common definition of a bioaerosol is any extremely small living organism or fragment of living things suspended in the air. They cannot be seen without a magnifying glass or microscope. Of course when a large growth of mold occurs, it does then become visible to the naked eye.

Reasons For a Building Becoming Contaminated by Bioaerosols

Moisture – The primary reason why bioaerosols become a major problem in buildings is the presence of damp in the building’s structure and/or a high level of humidity in the air. There are numerous reasons why such a situation could arise, some of the most common being:

  • Water damage to homes from flooding or storm damage.
  • Leaks in plumbing, roofs, or from air conditioners or HVAC systems.
  • Condensation on central air pipes, HVAC components, or other cool surfaces where insulation may not be present, is insufficient, or has become damaged. Uninsulated air conditioning coils or pipes will “sweat” the most when hot humid air contacts them such as during warm months.
  • Ice damming on building roofs which allows water to seep under shingles and through roof sheathing.
  • Dehumidifiers and humidifiers.
  • Pets.
  • Moisture from unvented or poorly vented kitchens and bathrooms.
  • Poor insulation causing drafts or the “chimney effect”.
  • Defective heating and air systems such as clogged condensation drain lines and full drip pans.

Hygiene and Cleaning – Poor sanitary and cleaning practices also contribute to a building becoming contaminated with bioaerosols. In a high occupancy building for example, germs from bathrooms can easily be spread to the rest of the building if they are not cleaned and disinfected both effectively and regularly. People not washing their hands after using the bathroom can also be a big problem.

Another problem is often inadequate or poorly maintained cleaning equipment. A poorly functioning vacuum cleaner, for example, can do more harm than good by spreading dust around rather than picking it up. As we have heard, dust is a breeding ground for micro-organisms like dust mites that cause allergies in many people. It may also contain other allergens such as pollens that have either blown into the building or been carried in by the occupants. Dust may also harbour disease-causing bacteria and other unpleasant organisms. Efficient vacuum cleaners are thus essential pieces of equipment for avoiding a sick building. Models equipped with HEPA filters, which remove even the tiniest particles, are infinitely preferable.

Going back to chemical pollutants, growing research shows that chemicals such as flame retardants, which are commonly used in electrical equipment and on furniture, accumulate in dust. If a building is not kept free from dust by regular and effective cleaning, the amounts of chemicals present will only increase and pose an ever greater risk to the occupants’ health.

Other Factors That May Contribute to Sick Building Syndrome

Besides the more obvious chemical and biological pollutants that are commonly present in buildings and can lead to SBS, there are a number of more subtle factors that can also contribute, sometimes significantly. The most common of these are:

Fluorescent Lighting and Electrical Equipment – People commonly report feeling unwell after spending time in buildings lit entirely with fluorescent strip lighting. The flickering light is very harsh and tends to give even otherwise healthy people headaches and make them feel drained. Many people also complain of feeling unwell when they spend time close to computer screens and other electrical equipment. It has been suggested that high frequency electromagnetic fields (EMFs) which are generated by electrical equipment and a building’s wiring can cause a host of unpleasant symptoms such as fatigue, headaches, and inability to concentrate. Electrical Hypersensitivity (EHS) is the term used to describe the condition in which people are made ill by electromagnetic radiation.

Temperature – Although many would dismiss the ambient temperature within a building as a minor consideration, an environment that is either too hot or too cold can have a major effect on how people feel. With extremes of temperature the body has to work hard to maintain its own internal temperature at the right level. With resources focused on this task people can quickly become tired and drained and experience a wide range of symptoms. If the temperature is too hot for prolonged periods for example, people can become dehydrated with potentially serious consequences for their health.

Humidity – again can put a strain on the body as it tries to maintain equilibrium. Like high temperature, a very humid environment can lead to dehydration and associated problems.

Noise – is an equally important factor. Too much noise can be draining and produce headaches and other symptoms. It also makes it hard to concentrate, impacting the productivity of workers in an office, for example.

Bad Office Design/Ergonomics – A badly designed workplace can cause numerous health problems. A cramped office with uncomfortable furniture can result in back injuries as well as repetitive strain injury (RSI) from repetitive tasks such as typing.

Psychological Stress – is another important consideration, in an office building in particular. Stress can be caused by work pressures such as deadlines, but also by all of the other factors we’ve discussed here that often relate to a building’s design. Stress is a leading cause of absenteeism as it can result not only in psychological distress but also many physical ailments. In many cases, SBS is a major issue and requires a complete redesign of the space in order to rectify the problem. If not remedied, the problem will eventually worsen, creating an uncomfortable and potentially hazardous workplace.

What Can be Done About Sick Building Syndrome?

If you and other people living or working in the same building experience health problems that seem to be present only when you are in that building, or at least get much worse there, then it is reasonable to suspect sick building syndrome. You should report the situation to the landlord, office manager, or whoever is responsible for the building and ask them to have an inspection carried out. If they are unwilling to cooperate then you may have to get local authorities such as an environmental health agency involved.

After a thorough environmental health inspection is carried out on a building to determine possible causes for the occupants’ health complaints, there are many measures that can be taken to rectify the situation. A combination of some of the factors we’ve discussed above will usually be involved and all will have to be tackled. Measures taken may include an overhaul or replacement of the ventilation system, structural repairs to prevent leaks and damp, a review of chemicals used in the building, a review of cleaning practices, and professional mold removal.

The important thing is to take action to have a suspected sick building investigated as soon as possible as it is likely that the problem will only get worse if not addressed.

Hunter-Gatherer Economies in the Old World and New World

Christopher Morgan, Shannon Tushingham, Raven Garvey, Loukas Barton, and Robert Bettinger

March 2017

http://environmentalscience.oxfordre.com/view/10.1093/acrefore/9780199389414.001.0001/acrefore-9780199389414-e-164

See original for figures.

At the global scale, conceptions of hunter-gatherer economies have changed considerably over time and these changes were strongly affected by larger trends in Western history, philosophy, science, and culture. Seen as either “savage” or “noble” at the dawn of the Enlightenment, hunter-gatherers have been regarded as everything from holdovers from a basal level of human development, to affluent, ecologically-informed foragers, and ultimately to this: an extremely diverse economic orientation entailing the fullest scope of human behavioral diversity.

The only thing linking studies of hunter-gatherers over time is consequently simply the definition of the term: people whose economic mode of production centers on wild resources. When hunter-gatherers are considered outside the general realm of their shared subsistence economies, it is clear that their behavioral diversity rivals or exceeds that of other economic orientations. Hunter-gatherer behaviors range in a multivariate continuum from: a focus on mainly large fauna to broad, wild plant-based diets similar to those of agriculturalists; from extremely mobile to sedentary; from relying on simple, generalized technologies to very specialized ones; from egalitarian sharing economies to privatized competitive ones; and from nuclear family or band-level to centralized and hierarchical decision-making. It is clear, however, that hunting and gathering modes of production had to have preceded and thus given rise to agricultural ones.

What research into the development of human economies shows is that transitions from one type of hunting and gathering to another, or alternatively to agricultural modes of production, can take many different evolutionary pathways. The important thing to recognize is that behaviors which were essential to the development of agriculture—landscape modification, intensive labor practices, the division of labor, and the production, storage, and redistribution of surplus—were present in a range of hunter-gatherer societies beginning at least as early as the Late Pleistocene in Africa, Europe, Asia, and the Americas. Whether these behaviors eventually led to the development of agriculture depended in part on the development of a less variable and CO2-rich climatic regime and atmosphere during the Holocene, but also on a change in the social relations of production to allow for hoarding privatized resources.

In the 20th and 21st centuries, ethnographic and archaeological research shows that modern and ancient peoples adopt or even revert to hunting and gathering after having engaged in agricultural or industrial pursuits when conditions allow, and that macroeconomic perspectives often mask considerable intragroup diversity in economic decision making: the pursuits and goals of women versus men and young versus old within groups are often quite different or even at odds with one another, but often articulate to form cohesive and adaptive economic wholes.

The future of hunter-gatherer research will be tested by the continued decline in traditional hunting and gathering but will also benefit from observation of people who revert to or supplement their income with wild resources. It will also draw heavily from archaeology, which holds considerable potential to document and explain the full range of human behavioral diversity, hunter-gatherer or otherwise, over the longest of timeframes and the broadest geographic scope.

In the strictest sense, the term “hunter-gatherer” simply refers to people entirely dependent on and only interacting with wild plants and animals. The definition is therefore an inherently economic one, with subsistence regime determining whether a given group is subsumed within this overarching anthropological type based solely on whether the group in question derives its sustenance from fishing, foraging, or hunting wild plants and animals. In the broadest sense, then, hunter-gatherers are people whose basic patterns of life—where they live, who they live with, and both their daily routines and the seasonal variation in those routines—are best explained by their connection to the pursuit and consumption of wild species. They are consequently identified as much by what they consume as by what they do not: in the case of the latter, domesticated plants and animals.

Hunter-gatherers are thus often juxtaposed with agriculturalists (including modern societies supported by industrial agriculture) based on perceived fundamental differences between not only their respective economies, but also their technologies, population densities, and degrees and types of sociocultural complexity. While it is clear that the vast and unprecedented numbers of people currently living in the complex, interconnected, and global modern world could not be supported without reliance on domesticated plants and animals, what is much less clear is how hunting and gathering gave rise to agricultural economies and the degree to which hunting and gathering differs in either kind or quantity from economies relying mainly on domesticates.

Within this context, what this chapter presents is fourfold. First, how hunter-gatherer economies have been thought about over time has been conditioned largely by historical circumstance and by changes in the social sciences more broadly. Second, while the drivers of change in hunter-gatherer economies are often linked to changes in climate, environment, and demography, the way these changes play out is often determined by culture as well—kinship, social norms, power relationships, and the like. Third, there is remarkable diversity in hunter-gatherer economies and lifeways, past and present, and this diversity is often marked by many of the characteristics more typically associated with economies reliant on domesticates. Lastly, we show how current and future studies of hunter-gatherer economies hinge on fundamental questions and methods that inform and are informed by intersections with the natural, social, behavioral, and cognitive sciences.

Changing Conceptions of Hunter-Gatherers

While views on hunter-gatherers have changed considerably over time, theoretical approaches to hunter-gatherer economies and lifeways tend to be materialist, focused on the physical conditions faced by hunter-gatherer groups (e.g., environment, climate, and technology). Some of the earliest ideas about hunter-gatherers, for example, emerged during the European age of exploration, when explorers, traders, and colonists encountered indigenous peoples, many of them hunter-gatherers who had already been shunted to marginal environmental settings by the time the first historical descriptions of their way of life were made (Figure 1). These early accounts cast hunter-gatherers as primitive, disadvantaged, and culturally backwards people who led meager and pitiable lives, where the fear of starvation and death was constant in a life that was, as Hobbes (1962, p. 100) framed it in 1651, “solitary, poor, nasty, brutish, and short.”

In contrast, in 1672 Dryden (1978, p. 30) coined the term “noble savage” in his play The Conquest of Granada to describe an initial, free, and unencumbered state of human existence, a perspective used by Jean Jacques Rousseau, Michel de Montaigne and other Enlightenment thinkers to draw contrasts between “civilized” Europe and the “savage” peoples of, for instance, the Americas. During the European age of exploration and colonial expansion, the view of hunter-gatherers as primitives who represented a basal state of human socioeconomic and technological development became firmly entrenched in philosophical and scientific works.

Figure 1. Location of hunter-gatherer ethnographic groups (in regular font) and archaeological cultures and sites (in italics) mentioned in the text.

This view colored the progressive social evolutionary theory of the 19th and early 20th centuries as set forth by Spencer (1868), Tylor (1871), and others. From this perspective, cultural evolution progressed one way, from simple (lower and primitive) to more complex (higher and more civilized) forms; for example, from savagery to barbarism to civilization in Lewis Henry Morgan’s (1877) seminal evolutionary scheme. Such unidirectional frameworks explicitly viewed material concerns as alleviated by advances in technology, with technological change marking the shift from one stage to the next.

For example, the emergence of early agriculture (“Middle Barbarism” to Morgan) moved humanity into an evolutionary stage wherein the acquisition of food was much less a concern than it was in the previous, and more primitive, Hobbesian universe. Freed from the perpetual quest for food, people could focus on more advanced social, moral, and religious concerns. In these scenarios, evolution began with hunter-gatherers—the “zero of human society” (Morgan, 1851, pp. 347–348), whose problems centered around food acquisition by individuals with woefully limited intelligence, information, and technology—to a more social world where the problem was not centered on getting food but rather about getting along with one’s neighbors.

During this time, however, British and American anthropological perspectives varied in terms of how they viewed and evaluated hunter-gatherers and their economies. Herbert Spencer, an Englishman, saw evolution as having reached its apogee in Western culture where the problems of social progress were essentially solved; questions about how hunter-gatherers made their living were therefore irrelevant. In contrast, American scholars (e.g., Powell, 1888) were interested in how and why technologies, economies, and social systems changed, thus making native peoples—many of whom lived within the boundaries of the United States—worthy of study. Between 1880 and 1920 the Americanist tradition emphasized surveys of hunter-gatherers in places like the Pacific Northwest Coast, the Great Lakes region, and the arid lands between the Rocky and Sierra Nevada Mountains. John Wesley Powell, a leader in these efforts, found considerable diversity in how the hunter-gatherers of North America lived and made their living (e.g., from settled, sedentary groups like the Kwakiutl who relied on smoked and stored salmon to small, highly mobile groups like the Ute of the North American Great Basin who relied in large part on pine nuts and small seeds).

In an explicit rejection of progressive social evolution, eugenics, and the social Darwinist policies that emerged in the early 1900s, American anthropologists and archaeologists by 1920 began emphasizing culture historical frameworks in their research. Culture history in archaeology, influenced by Boasian cultural relativism (e.g., Boas, 1940)—the idea that each culture was unique and developed along its own particular trajectory—emphasizes description over explanation. In culture-historical schemes, changes in hunter-gatherer economies were seen as the result of historical processes like the migration of people who carried with them different technologies and ways of making a living, or the diffusion of ideas, technologies, and subsistence practices from one area to another, either of which could change the material culture identified in archaeological and ethnographic studies. Left unexplained was how novel behaviors and technologies developed in the first place.

Beginning in the 1960s, cultural historical and unilineal evolutionary frameworks were challenged by a new and more nuanced evolutionary one (Flannery, 1968; Lee, 1968). Anthropologists began to recognize that hunter-gatherers did not conform to simple evolutionary models and began comparing notes—most famously at the Man the Hunter conference held at the University of Chicago in 1966 (Lee & Devore, 1968). Though a more-or-less unitary view of hunter-gatherers persisted for nearly another decade following this meeting, by the late 1970s a great deal of diversity in hunter-gatherer lifeways had been recognized and the notion that these differences developed along multiple evolutionary pathways was in vogue. No longer viewed as “unevolved,” hunter-gatherers were now widely acknowledged to have rich, complex social and religious lives and were regarded as masters of their environments.

From this perspective, hunter-gatherers were universally adept at creating sophisticated adaptive systems specific to their ecological circumstances. They were advantaged peoples who lived in a state of homeostatic equilibrium and only resorted to agriculture if their way of life was disturbed by colonial or other forces. This of course turned the notion of hunter-gatherers as lowly primitives (and farming as a logical outcome of evolutionary progress) on its head. Now cast as the “original affluent society” (Sahlins, 1968), it seemed that hunter-gatherers were healthier and often worked less than farmers. Lee (1984), for example, showed that the considerable leisure time enjoyed by the Dobe !Kung of the Kalahari desert was due in part to the abundance of mongongo nuts (Schinziophyton rautanenii), though he failed to account for the large amount of work the women did to process those nuts.

Hunter-gatherers, however, were and still are often portrayed as “generalized foragers”: mobile people who live in small groups, have few possessions, store almost nothing, exploit only the seasonal availability of wild food, and lead relatively egalitarian lifestyles. Yet as Price and Brown (1985, p. xiii) note, “the traditional dichotomy of forager versus farmer has little significance with regard to the organizational development of society”; that is, means of subsistence do not dictate levels of cultural complexity. Indeed, there are abundant examples of hunter-gatherers who diverge from this generalization: the Ainu of Japan, the various groups of the Pacific Northwest Coast of North America, many California Indians, the Calusa of Florida, and people known only through archaeology such as the Jomon of Japan, European Mesolithic groups, and the Natufian of the Levant. Each of these groups was sedentary or semi-sedentary, and their economies entailed some degree of surplus economic production, storage, diverse and specialized technologies, and varying degrees of wealth-, power-, and prestige-based inequality. Many were marked by high population densities, in some cases rivaling those of sedentary farming communities (Kelly, 2013).

Crisis and Controversy in Understanding Hunter-Gatherer Economies

Opinions vary widely regarding such deceptively simple ideas about what hunter-gatherers actually are and what drives change in hunter-gatherer economies. Controversial topics include fundamental issues of definition and identification, what causes diversity among hunter-gatherer groups, and what drives more intensive economic and technological change within economies dependent only on wild food as well as those economies based on a mixture of both wild and domesticated resources.

What Hunter-Gatherer Economies Are, and Are Not

Hunter-gatherers are, by definition, people who rely solely on wild plants and animals. In practice, however, there are important exceptions. Most widespread are hunter-gatherer cultivation of non-food plants such as tobacco (Kroeber, 1941), the pen-raising of wild animals, for example eagles for plumage (Drucker, 1937), and the near-universal keeping of domestic dogs as pets, packers, pullers, hunters, sentries, or food (Barton et al., 2009; Larson et al., 2012). In addition, many ethnographic and modern hunter-gatherers living near agriculturalists borrowed and cultivated the crops of their neighbors on a very small scale. The Southern Paiute living just north of the agricultural American Southwest (Kelly, 1964), for example, grew maize as a dietary supplement, but not to the extent that it had much effect on preexisting patterns of settlement and social aggregation, which had developed to facilitate the procurement and storage of the many important wild plants and animals on which these groups had formerly depended entirely.

That hunter-gatherers engaged in so many forms of environmental manipulation shows that their resistance to fully adopting plant and animal husbandry was not the result of ignorance, as was commonly assumed in the late 19th and early 20th centuries. In contrast to the view of hunting and gathering as a lifeway of limitation and ignorance, the archaeological and ethnographic records provide ample evidence that hunter-gatherers understand the natural world in which they live every bit as well as, and arguably better than, agriculturalists; without such knowledge they would surely have perished rapidly. While most hunter-gatherers consistently depend on a fairly restricted suite of plants and animals, they routinely maintain and transmit from generation to generation knowledge of many times more plants that might be pressed into service during periods of hardship: which are poisonous and which are not, and how the poisonous ones might be processed to remove their otherwise fatal toxins.

Perhaps more pervasive in the current literature is the opposite view that has hunter-gatherers living in balanced harmony with nature—conserving resources for the benefit of themselves and nature at large. Whether humans were ultimately responsible for large-scale megafaunal extinctions in the late Pleistocene, between 10.5 and 15 kya, in the New World (Martin, 1973), or in Australia between 40 and 60 kya (Johnson, 2006), for example, is contentious, but even those scholars inclined to absolve hunter-gatherers of responsibility do not cite innate conservationism as a reason, most of the evidence suggesting the contrary. North American hunter-gatherers occasionally killed much more than they could use, for example, as is well documented at the Olsen-Chubbuck Bison Kill Site in Colorado, where roughly 9,500 years ago a band of hunter-gatherers, having driven a herd of perhaps 200 of the now-extinct species Bison occidentalis into a steep arroyo, intensively butchered only a fraction and left a significant portion largely or completely untouched (Wheat, 1967).

Conservation certainly did not guide Native groups who participated in the extermination of the buffalo from the American Plains in the early 19th century. Euro-Americans bear the greatest responsibility, but Native Americans played a part, not thinking it necessary to limit their take of a shrinking resource and reasoning from traditional knowledge that buffalo herd size and reproduction were the result of forces beyond human control (Krech, 1999). Indeed, social customs may work against resource conservation even where hunter-gatherers are aware of the problem. Raven (1990) documents a case in which Torres Strait hunter-gatherers continued to target ever-shrinking populations of turtle and dugong, in large part because such hunting was an essential male rite of passage, a prerequisite to marriage. Neither does a more general, microeconomic view of foraging support the foragers-as-lay-conservationists thesis. A broad diet—one not exclusively focused on the largest-bodied prey, which may be more vulnerable to overhunting due to slower life histories or more conservative reproductive strategies—may simply be a byproduct of rational decision-making motivated by self-preservation, and instances of apparent conservation do not make the Plains buffalo or Torres Strait turtles exceptions to the rule.

Acknowledging and Identifying Hunter-Gatherer Diversity

The foregoing highlights marked differences in hunter-gatherer lifeways across both time and space. These differences hinge on issues relating to subsistence, technology, social organization, and environmental change over long timespans.

Subsistence

That hunter-gatherer adaptation revolves around subsistence makes variation in the relative emphasis on hunting, fishing (including the procurement of marine mammals and shellfish), and gathering critical. Wherever edible plants are available in any abundance—generally between 40° N and 40° S latitude—gathering dominated subsistence, a pattern accounting for 42% of hunter-gatherers worldwide in one ethnographic sample (Binford, 2001). As one moves poleward, fishing and hunting become more important, hunting dominating in about 24% of groups worldwide and fishing in about 35%. The plant-dominated pattern, however, developed relatively late, becoming much more pronounced during the Holocene. Plants were always critical, but when human population densities are low relative to available resources, hunting, particularly of large game, typically produces superior rates of return and is favored over plants. As population grows and demand increases, hunter-gatherers increasingly turn to plants if they are available, and to fish if they are not (Binford, 2001).

The diet breadth model (MacArthur & Pianka, 1966) makes it possible to compare hunter-gatherer standards of living from one group to another despite substantial differences in subsistence economy. This is accomplished by calculating the marginal rate of return below which hunter-gatherers will ignore a resource as too costly to warrant procurement and processing. This entails calculating handling time per calorie (kcal): the time expended per kcal in pursuing, collecting, and processing a resource once it is encountered. Resources are ranked from highest (least handling time per kcal) to lowest (most handling time per kcal) and added to the diet in that order, starting with the highest, which is always in the diet.

The second-ranked resource is added to the diet if its handling time per kcal is less than the time it would take per kcal to search for and locate the first-ranked resource (Figure 2). Following this logic, the overall return rate for a hunter-gatherer group cannot be higher than the return rate of the lowest-ranked resource in the diet. Analyses from this perspective suggest that, despite all outward appearances, hunter-gatherers living in California and dependent on acorns (Quercus, Lithocarpus) (Bettinger, Malhi, & McCarthy, 1997), living in Australia and dependent on seeds (Acacia) (O’Connell & Hawkes, 1981), and the Dobe !Kung living in Africa and dependent on mongongo nuts (Hawkes & O’Connell, 1981; Lee, 1984) were all operating at about the same marginal rate of return, the return rates for all three resources hovering around 750 kcal/hr. At this rate it would take about 10 hours of work a day to feed a family of four consisting of a father consuming 2,500 kcal per day, a mother consuming 2,000 kcal, and two children each consuming 1,500 kcal (7,500 kcal total ÷ 750 kcal/hr = 10 hours).

Figure 2. Comparison of a low-cost, narrow spectrum diet (left) with a higher-cost, broader-spectrum diet (right). Switching strategies to the broader-spectrum diet to include lower-ranked items is predicated on the abundance of the higher ranked item, which determines search time. In the narrow spectrum diet on the left, the overall rate of return is that of Resource 1, the only item in the diet. In the broader-spectrum diet on the right, the overall rate of return is that of Resource 2, the lowest-ranked item included in the diet.
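The inclusion logic of the diet breadth model can be sketched in a few lines of code. This is only an illustration of the model's logic, not an implementation drawn from the sources cited; the resource names and all numbers (encounter rates, caloric yields, handling times) are hypothetical.

```python
# Minimal sketch of the diet breadth model (MacArthur & Pianka, 1966).
# All numbers are hypothetical, chosen only to illustrate the logic.

def optimal_diet(resources):
    """resources: tuples of (name, encounters per search hour, kcal per item,
    handling hours per item), pre-sorted by on-encounter return rate
    (kcal / handling hr), highest first. Returns (diet, overall kcal/hr)."""
    diet, sum_e, sum_h, overall = [], 0.0, 0.0, 0.0
    for name, lam, kcal, handle in resources:
        # Take the next-ranked resource only if handling it on encounter
        # beats the overall rate earned from higher-ranked items alone.
        if kcal / handle <= overall:
            break
        diet.append(name)
        sum_e += lam * kcal      # expected kcal gained per search hour
        sum_h += lam * handle    # expected handling hours per search hour
        overall = sum_e / (1.0 + sum_h)
    return diet, overall

# When large game is abundant, seeds stay out of the diet regardless of
# how abundant the seeds themselves are:
abundant_game = [("large game", 0.02, 50_000, 5.0), ("grass seed", 5.0, 200, 0.25)]
scarce_game = [("large game", 0.005, 50_000, 5.0), ("grass seed", 5.0, 200, 0.25)]
```

Note that whether seeds enter the diet here depends only on the abundance of large game, which sets search time, mirroring the point made in the Figure 2 caption: switching to a broader-spectrum diet is predicated on the abundance of the higher-ranked item.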

Technology

While hunter-gatherer technology tracks variation in the relative importance of gathering, hunting, and fishing for obvious reasons, there is more to it than that, with the range and severity of seasonal change in precipitation and temperature being particularly important. In tropical environments that are warm year-round, resources are generally available somewhere, making mobility the best response to local resource shortage. As one moves away from the equator toward the poles, temperatures decrease, finally to the point that there are seasons with little or no resources available, a problem that mobility alone will not solve. Here hunter-gatherers must store resources in one season for use in another, which means they must be more efficient at procuring resources in quantity when available, which requires more costly and sophisticated technology. At the same time, storing resources tethers hunter-gatherers to the locations of their stores, reducing mobility (Testart, 1982; but see Morgan, 2012). This produces a generally inverse relationship between hunter-gatherer technological complexity and mobility: mobile hunter-gatherers have fewer and more generalized tools than more sedentary groups that store resources for seasons of shortfall. One observes, on the one hand, very simple and generalized (though highly effective) technology with relatively few tool types among the highly mobile hunter-gatherers of desert Australia like the Alyawara, who stored very little, and on the other hand, the intricately complex and specialized technology of the essentially sedentary Northwest Coast groups like the Tlingit, who relied extensively on stored resources.

Sociopolitical Organization

The vast bulk of ethnographic hunter-gatherers lived in simple bands made up of 20–50 individuals (Kelly, 2013). In some cases these consisted of families headed by males related by patrilineal descent (patrilineal bands), in others of males unrelated to each other, allied merely by convenience or friendship (bilateral bands), and in some cases of much smaller groups consisting of a single nuclear family (family bands) plus an assortment of the husband’s or wife’s close relatives who were at the time incapable of living on their own (e.g., an elderly widowed mother or father, or an unmarried brother or sister). But much more complex arrangements were possible. In the North American Pacific Northwest, for example, large social groups interacted on the basis of intricate systems of ranking, in theory making it possible to calculate the status of any two individuals relative to each other. While it is tempting to connect Pacific Northwest Coast social complexity with environmental richness (Ames, 1994) and the simpler forms of band organization with lesser resource productivity, it is worth noting that population densities in California rivaled those of hunter-gatherers anywhere, including the Pacific Northwest Coast, yet were accompanied by very simple band-like organizations (Bettinger, 2015).

Hunter-Gatherer Adaptation over Long Timespans

There is also substantial variation in hunter-gatherer economies over time. The temporal contrast between Pleistocene (largely hunting-focused, with subsidiary emphases on plants) and Holocene hunter-gatherers (largely plant-focused, predominantly in low and mid-latitude environments) is particularly sharp, for two main reasons. First, the hunting-gathering lifeway is older than our species (Homo sapiens). Accordingly, many behaviors basic to hunter-gatherer adaptation depend on physical capabilities (e.g., capacity for language) that certain of our more ancient predecessors lacked. This alone prevents drawing simple analogies between modern and pre-Homo sapiens hunter-gatherers. Second, Pleistocene and Holocene hunter-gatherers confronted dramatically different environments, the latter much more favorable to plant exploitation. Glacial periods during the Pleistocene, for instance, resulted in substantial decreases in sea level and shifted temperate biomes towards the equator, either of which could have resulted in resource distributions for which there may be no Holocene ecological analog. Pleistocene climate was also wildly variable, marked by repeated cycles of rapid warming during interglacials followed by several centuries of gradual cooling to glacial temperatures. Erratic climate change limited the development and perfection of complex cultural adaptations, behaviors, and innovations uniquely suited to cold conditions as these would have limited application when climate again turned warm. In addition, the Pleistocene atmosphere was carbon dioxide (CO2) poor, thus inhospitable to plants, which, in combination with rapid climate change, prevented the development of the sophisticated behaviors and technologies needed for the kind of intensive plant and animal procurement and environmental manipulation that might first enable and later support agriculture (Richerson, Boyd, & Bettinger, 2001) (Figure 3). 
Such erratic variation not only affected local resource availability but likely also created strong selective pressure for cultural adaptation, which is faster than genetic adaptation and therefore better able to keep pace with high-frequency, high-amplitude variations in climate (Richerson, Boyd, & Bettinger, 2009).

Figure 3. Top: Filtered δ‎18O Greenland ice core data showing markedly less variable and warmer Holocene versus Pleistocene paleotemperatures (data from Ditlevsen, Svensmark, & Johnsen, 1996). Middle: Reconstruction of Pleistocene and Holocene atmospheric CO2 derived from Antarctic ice core data. Bottom: Reconstruction of Holocene atmospheric CO2 derived from the same Antarctic ice core data (data from Barnola, Raynaud, Korotkevich, & Lorius, 1987; Genthon et al., 1987).

In addition, at any given time there has been substantial spatial variation in hunter-gatherer behavior and adaptation. These differences accumulated more rapidly in the more favorable and quiescent Holocene, which is particularly evident in the diversity of hunter-gatherer lifeways documented firsthand by observers over the last four centuries: groups with population densities as low as one person per 400 square kilometers in the harsh deserts of Australia, and others with densities as high as one person per 0.3 square kilometers among the Chumash along the highly productive Santa Barbara coast of California. The ethnographic range in subsistence, technology, and sociopolitical organization is equally impressive.

Given these fundamental differences of biology and environment, most scholars agree that ethnographic accounts cannot be used to interpret Pleistocene records reliably. Wobst (1978) famously described the dangers of uncritical reliance on the ethnographic record: because ethnographic accounts are time-limited, they likely underestimate behavioral diversity, failing to capture important variation in gender- and age-specific activities and in daily, seasonal, and supra-annual foraging objectives, decisions, and outcomes. Moreover, there must surely be behavioral diversity that would be unaccounted for by even a complete and perfectly accurate ethnographic record. After all, all ethnographic data were gathered in an age when hunter-gatherers, no matter how remote, had been in some kind of contact with non-foraging groups. There were also likely prehistoric environments, social configurations, norms, beliefs, etc. for which there are no modern analogues. Identifying diversity unique to the prehistoric past is no simple task and requires modeling potential sources of diversity, a third contentious area of hunter-gatherer economics.

Accounting for Diversity and Change: Intensification, Innovation, and Surplus

The subject of what drove the development of more productive hunter-gatherer economic systems, surplus production, and innovation is critical not only to understanding hunter-gatherers, but also to understanding the factors driving people to develop and adopt other more intensive economic systems, agricultural or otherwise (Boserup, 1965; Morgan, 2015; Morrison, 1994). Many explanations hinge on climate change scenarios where increases in environmental productivity generate the potential for surplus that can support greater human population densities, as was apparently the case after the Pleistocene-Holocene transition (Richerson et al., 2001). Alternatively, decreases in environmental productivity might generate the impetus for extracting more energy from the environment through more labor, technological changes, or greater regional economic articulation.

Such was arguably the case, for instance, along the Southern California Bight, where it has been argued that megadroughts associated with the Medieval Climatic Anomaly (ca. 1.1–0.6 kya) led to more trade between Chumash islanders and mainlanders, facilitated by the invention or adoption of the tomol, a technologically sophisticated sewn-plank canoe (Arnold, 1992). Others, however, see intrinsic rates of population increase driving economic intensification, the idea being that larger populations have to eat lower down the food chain because more calories are ultimately available in lower trophic levels. Doing so, however, comes at a cost, as extracting this energy usually requires substantial increases in labor, as entailed by the California acorn economies of groups like the Pomo, Miwok, Mono, and Ohlone (Gifford, 1971; McCarthy, 1993). All these groups were marked by population densities rivaling or exceeding those of prehistoric agricultural groups (Baumhoff, 1963) living in what is now the southwestern and southeastern United States, but such densities were paid for largely by an acorn-based subsistence economy that was remarkably costly in terms of the labor needed to process acorn meal and remove the tannins from acorns collected from California’s ubiquitous oak groves (Basgall, 1987; Tushingham & Bettinger, 2013).

Larger populations, however, are modelled not only as meeting the population density threshold needed to maintain complex technologies like tomols, because successful transmission of knowledge from one generation to the next is more likely (Henrich, 2004; but see Collard, Buchanan, & O’Brien, 2013), but also as better positioned to reap the rewards of the considerable investments made in such technologies: the cost of these investments in labor, materials, and maintenance is scalar, borne by many (as opposed to a few) people over a long period of time, all of whom share the benefits (Bettinger, Winterhalder, & McElreath, 2006).

Critical to any discussion of hunter-gatherer economic intensification is the notion of surplus. It is clear surplus production and storage are found mainly in mid-latitude, seasonal environments (mainly in the northern hemisphere) where salmon, acorns, and tubers were collected in bulk in the summer and fall, stored, and then eaten during the winter (Binford, 1980; Morgan, 2012). No doubt storage is a response to seasonality; the question is, how did it develop when most of what is known about ethnographic hunter-gatherers indicates that sharing (sometimes termed “tolerated theft”), rather than hoarding, is the norm (Jones, 1987; Winterhalder, 1996)? Simple models posit larger and more settled populations are the key, with storage tethering groups of people to fewer locales for extended portions of each year (Testart, 1982). More sophisticated models cope directly with the problem of tolerated theft by identifying the conditions under which the notion of private property might develop, in particular among household economies that can be more-or-less self-sufficient and therefore “pay” for their non-communitarian storage behaviors with meat sharing, which frees households to store the fruits of plant-oriented subsistence labor (Bettinger, 2015). Others see a more top-down causal mechanism, where aggrandizing “big men” garner prestige by throwing lavish feasts. What pays for the feasts is of course surplus production, which is appropriated by charismatic leaders. This is an inherently unstable socioeconomic situation, but one which could also conceivably lead to more permanent, inherited leadership roles and entrenched modes of surplus production, and perhaps even food production and domestication (Hayden, 1990). The key linkage here is one of political economy, where the development of intensive, surplus-generating economies is intrinsically tied to changes in social relations, power dynamics, and sociocultural complexity.

Foundational Advances and Discoveries

Game-changing advances in understanding hunter-gatherer economies hinge on theory, modeling, and empirical discoveries. The main theoretical advances are those found in the theory of cultural ecology, economic and evolutionary modeling, and applications of behavioral ecology to questions of human subsistence and subsistence-related behaviors. Bridging the gap between theory and empiricism are discoveries related to the development of modern human economic behavior and the evolution of broad-spectrum diets and low-level food production. In addition to the seminal ethnographic work by people like Richard Lee among the Dobe !Kung, which has already been covered, a sample of empirical observations of groups living in Africa, South America, and Australia in the late 20th century is included here given its import for tracking diversity in hunter-gatherer economies.

Cultural Ecology

Julian Steward was an early proponent of a comparative approach to mapping hunter-gatherer economic diversity; he believed recurrent behavioral patterns found in similar environments were evidence of general ecological adaptations. Steward consequently introduced cultural ecology, a multilinear evolutionary theory designed to explain varied adaptations to different environments (Steward, 1955). Cultural ecology sought to understand how technologies facilitate interaction with local resource structures—the abundance, distribution, and seasonality of targeted foods—to shape other aspects of economic and social life, most notably social structure. Essential in this regard was the culture core, defined as “the constellation of features which are most closely related to subsistence activities and economic arrangements” (Steward, 1955, p. 37). Secondary (or non-core) cultural features distinguish cultures somewhat superficially and include cultural elements that are determined by history and can be transmitted through diffusion or produced through innovation. Perhaps most famously, Steward described “family band” socioeconomic organization—typified by the Great Basin Shoshone and characterized by small, nuclear family groups that are annually mobile and thinly spread on the landscape—as a culture core response to sparse, unpredictable resources procured and processed using relatively sophisticated plant gathering and processing technologies like burden baskets, manos and metates, seed-beaters, and winnowing trays (Steward, 1938). Similar technologies used in more productive environments, or more complex technologies used in arid environments like the Great Basin, should produce different socioeconomic arrangements, as Steward observed elsewhere, including in southern Africa, Australia, and the Philippines (Steward, 1955).

Foragers and Collectors

Binford (1980) defined a continuum of “foragers” and “collectors” to explain variability in hunter-gatherer settlement systems and archaeological site formation processes. The model articulates a range of adaptive strategies pursued by mobile groups, with foraging and collecting on either end of the spectrum. Foragers are residentially mobile, a strategy involving moving from place to place frequently and “mapping on” to resources as they become available. Collectors are logistically mobile, a strategy where people are more tethered to residential bases and resource acquisition involves scheduling the exploitation and storage of specific foods obtained by specialized task groups. Collectors (like the Tlingit with their specialized technologies) prepare for an array of activities that will take place at different locations throughout the year, so there is more investment in offsite gear and specialized equipment. Archaeologically, collector strategies are associated with more site types (base camps, temporary camps, locations, caches), home bases tend to have larger assemblages (reflecting longer settlement duration) and contain food refuse brought from distant locations, and artifacts include more curated and specialized tools. Foragers (like the Alyawara with their generalized technologies) tend to be located nearer the equator where seasonal shortages tend to be rare, while collectors tend to be found in more seasonal climates in middle latitudes.

Binford argued that tendencies toward one or the other of these patterns were predicted by effective temperature, a proxy for environmental productivity and seasonality. Where effective temperature is high (in the tropics and subtropics) and resources are evenly distributed across space and time, hunter-gatherers tend towards the forager end of the spectrum. Where effective temperature is moderate, resources are unevenly distributed in space and time, leading collectors to harvest resources en masse during productive seasons and to store them for when resource productivity is low. The model is a deterministic one, with seasonality by and large defining the nature of hunter-gatherer economic behavior. Its implications, however, are profound in that they suggest that surplus-producing economies evolved as responses both to the spread of people into increasingly seasonal latitudes and to the development of more pronounced seasonality in these latitudes during the Holocene.
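Effective temperature itself can be computed from monthly temperature data. The sketch below uses Bailey's formulation, the measure Binford employed; the two example sites and their temperature values are hypothetical illustrations, not data from the article.

```python
# Bailey's effective temperature (ET), used by Binford as a proxy for
# environmental productivity and seasonality. w and c are the mean
# temperatures (degrees C) of the warmest and coldest months.
# The example values below are hypothetical.

def effective_temperature(w, c):
    return (18 * w - 10 * c) / (w - c + 8)

# An aseasonal tropical site: warm all year, high ET (forager end).
tropics = effective_temperature(27, 25)     # ~23.6
# A sharply seasonal subarctic site: low ET (collector end).
subarctic = effective_temperature(15, -25)  # ~10.8
```

The measure compresses both warmth and seasonality into one number: two sites with the same warmest month but a colder coldest month yield lower ET, capturing the storage-forcing seasonality the forager-collector model turns on.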

Travelers and Processors

The traveler-processor model (Bettinger, 1999; Bettinger & Baumhoff, 1982) was developed to explain cultural variation and change in the North American Great Basin. The model follows the logic of optimal foraging theory (patch choice and diet breadth models) derived from behavioral ecology and establishes a typology of adaptive strategies that is superficially similar to the forager-collector model, but differs in that it highlights the competitive fitness of groups by defining specific relationships among population, settlement, and subsistence patterns. In the traveler-processor model, as populations move from low densities (travelers) to high densities (processors), people increasingly rely on more costly-to-process plant foods like seeds, nuts, and tubers.

Processors have a broad-spectrum diet and, because processing tasks typically fall to females, women’s labor becomes more valuable, which may lead to higher populations and lower rates of female infanticide. The model explains the rapid replacement of travelers by Numic-speaking processors in the Great Basin: as the Numa engaged in more intensive plant collection and processing strategies, population densities increased. These groups consequently gained footholds in territories previously occupied by groups practicing traveler strategies, who exploited only a fraction of the biotic productivity that processors did. In this model, Numic processors simply out-ate and out-reproduced the travelers they replaced owing to the greater number of calories available to them, spreading from southeastern California across the Great Basin over roughly the last 1,000 years. The model is important because it was one of the first to use human behavioral and evolutionary ecology—ways of tracking the evolution of human behavior through microeconomic models—to make explicit predictions as to how competition between two different economic strategies might play out over relatively long timespans (Broughton & Cannon, 2010; Smith & Winterhalder, 1992), with important implications regarding how farming adaptations might displace foraging ones as well (Kennett & Winterhalder, 2006).
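The replacement dynamic sketched above can be illustrated with a toy simulation. This is not a model from the cited literature; it simply assumes two groups drawing on the same territory, with processors able to convert more of the biotic productivity into food (a higher carrying capacity), and tracks logistic growth under shared resource pressure. All parameter values are hypothetical.

```python
# Toy illustration (not from the cited sources) of how a higher-extraction
# strategy displaces a lower-extraction one. Processors exploit more of the
# territory's biotic productivity, so their carrying capacity k_proc exceeds
# the travelers' k_trav; both groups deplete the same resource base.

def simulate(years, r=0.02, k_trav=1000.0, k_proc=4000.0):
    trav, proc = 900.0, 50.0  # travelers established, processors a small intrusive group
    for _ in range(years):
        total = trav + proc   # shared pressure on the same resource base
        trav += r * trav * (1 - total / k_trav)
        proc += r * proc * (1 - total / k_proc)
    return trav, proc
```

Once the combined population exceeds the travelers' carrying capacity, traveler numbers decline steadily while processor numbers keep growing, so replacement follows without any assumption of direct conflict: the higher-extraction strategy simply out-eats and out-reproduces the other.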

Anatomically Modern Humans and the Upper Paleolithic

In general, the evolution and spread of anatomically modern humans (AMH) during the Upper Paleolithic (UP) in Eurasia and Late Stone Age (ca. 50–14 kya) in Africa appears associated with fundamental changes in hominid subsistence economies. Earlier, Middle Paleolithic economies affiliated with species like Neanderthals tended to focus on hunting large game and, in the Mediterranean, slow-moving, slow-growing tortoises and some mollusks (Stiner & Munro, 2002; but see Speth, 2004). In contrast, AMH economies tended towards more diversity and were characterized by exploiting more costly to process, faster-moving, faster-maturing, smaller-bodied prey like rabbits and birds as well as small seeds. This is well expressed in the Middle (MSA) and Late Stone Age (LSA) deposits at Eland’s Bay and Ysterfontein Rockshelter on the South African Coast, where MSA deposits dating before 50 kya are dominated by larger shellfish and tortoise (Klein et al., 2004; Steele & Klein, 2005). LSA deposits contain seabirds and smaller shellfish and tortoise remains. Similar patterns are evident at Vale Boi in Portugal, where the UP ca. 30 kya is marked by increased reliance on shellfish and rabbits (Manne & Bicho, 2009; Manne, Cascalhiera, Evora, Marreiros, & Bicho, 2011), a pattern also seen in Greece and Spain (Bicho & Haws, 2008; Cortés-Sánchez et al., 2008). A complementary pattern is found in Yuchanyan Cave in south China, where very late UP (ca. 16 kya) archaeofaunal data show AMH diet including turtles, small mammals, and aquatic birds (Prendergast, Yuan, & Bar-Yosef, 2009). Finally, there is evidence of a shift to harvesting and baking wild cereals like barley (Hordeum spontaneum) at Ohalo II, in Israel, ca. 23 kya (Piperno, Weiss, Holst, & Nadel, 2004).

In sum, AMH subsistence economies appear to be marked by more diverse diets and, on the coast, more marine-based resources than those of their Neanderthal and other archaic Homo forebears. These diets focused on smaller-bodied prey like rabbits that yielded fewer calories per unit of time spent pursuing and processing than did earlier diets focused on large fauna. Some see this shift as representing a new environmental niche occupied by AMH, one consisting of smaller, more diverse, and more costly resources (Klein, 2008). Others see this change as brought about at least in part by technological innovation or increasing population densities (Tortosa, Bonilla, Ripoll, Valle, & Calatayud, 2002; Steele, 2012). These hypotheses are not mutually exclusive—the shift to a new niche could indeed have been driven by demographic or technological change. What is interesting is the degree to which extracting more calories by exploiting lower-return resources affected overall AMH evolutionary success in light of competition with archaic Homo, the idea being that more calories allowed for the growth of larger populations who, in essence, simply out-ate and out-reproduced the Neanderthals, Denisovans, and other Archaics living across Africa and Eurasia during the Late Pleistocene (Klein, 2001, 2009). The applicability of the traveler-processor model in this regard is telling in that even at this early stage in human economic development, it appears more productive economic systems outcompeted less productive ones.

The Broad Spectrum Revolution

The Broad Spectrum Revolution (BSR) is the term Kent Flannery (1969) used to describe change in hunter-gatherer subsistence practices from narrow (e.g., only large-bodied ungulates) to broad (e.g., including large and small mammals, birds, fish, amphibians, invertebrates, tree nuts, legumes, and grass seeds). This insight was drawn primarily from observations of the late Pleistocene and early Holocene archaeology of the Near East (Braidwood & Howe, 1960; Flannery, 1965; Garrod & Bate, 1937; Hole & Flannery, 1968; Hole, Flannery, & Neely, 1969; Perrot, 1966), and informed by a then-recent theoretical contribution from Binford (1968). For Flannery, changes in the resource base of human foragers led to changes in social practices like food storage and the gendered division of labor. These set the stage for the domestication of plants and animals, the origins of intensive irrigation, and both the social complexity and environmental deterioration that come with agricultural economies.

The central logic behind these changes rests on an equilibrium model describing the relationship between human demography and resource availability. In this model, the initial shift to a broad-spectrum diet would not happen in places where narrow-spectrum resources were abundant; rather it would happen on the less-favorable margins of such places. Broad-spectrum diets would therefore enable human groups to live in both kinds of environments without exceeding the limits of the resource base as a whole, and ultimately, exploitation of a broad range of resources would take hold in both places. Likewise, cultivation of wild grasses (for example) would not be necessary in those places where wild grasses were naturally abundant, but would maintain the population-resource equilibrium if practiced “around the periphery of the zone of maximum carrying capacity” (Flannery, 1969, p. 80). Here, Flannery suggested, was where such plants would be domesticated, alongside the continued exploitation of a wide range of other wild resources, as he claimed was the case along the “hilly flanks” surrounding terminal Pleistocene Mesopotamia, where some of the earliest domesticates have been identified.

Superficially, the underlying logic of the BSR is the same as the logic behind the process of resource intensification (sensu Boserup, 1965) in hunter-gatherer economies (Morgan, 2015) and is akin to the types of more costly but higher-output subsistence behaviors predicted by the diet breadth model in the context of increasing human population density and/or environmental change (i.e., where fewer encounters with high-ranking prey due to overhunting or changes in resource density or distribution necessitate a shift to eating more costly, lower-ranked resources like small fauna, seeds, and nuts). For most applications, however, the BSR is a description rather than an explanation of hunter-gatherer economic change.
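The diet breadth model invoked here has a simple formal core: prey types are ranked by post-encounter return rate and added to the diet in rank order so long as doing so raises the forager’s overall return rate. A minimal sketch, with hypothetical prey names and numbers:

```python
# Minimal sketch of the classic diet breadth (prey choice) model from
# behavioral ecology. Prey names, encounter rates, calorie yields, and
# handling times used with it are hypothetical, for illustration only.

def optimal_diet(prey, search_time=1.0):
    """prey: list of (name, encounters per unit search time,
    calories per item, handling time per item).
    Returns (names of prey worth pursuing, overall return rate)."""
    # Rank prey by post-encounter return rate (calories / handling time).
    ranked = sorted(prey, key=lambda p: p[2] / p[3], reverse=True)
    best_rate, diet = 0.0, []
    for item in ranked:
        candidate = diet + [item]
        energy = sum(lam * e for _, lam, e, _ in candidate)
        handling = sum(lam * h for _, lam, _, h in candidate)
        rate = energy / (search_time + handling)
        if rate > best_rate:   # pursuing this prey raises the overall rate
            best_rate, diet = rate, candidate
        else:                  # lower-ranked prey can only lower it further
            break
    return [name for name, *_ in diet], best_rate
```

On this logic, when encounters with high-ranking prey decline (through overhunting, for instance), the overall return rate falls and lower-ranked items such as small seeds and nuts enter the optimal diet.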

Nevertheless, the notion of the BSR has been a prominent feature of research on late Pleistocene and early Holocene hunter-gatherer adaptations around the world, as well as the early origins of agriculture. Data-oriented archaeologists increasingly identify cases that match the basic descriptions of the BSR, for example in China (Guan et al., 2014; Liu, Duncan, Chen, Liu, & Zhao, 2015; Liu et al., 2010, 2011; Yang et al., 2012) and in the Near East (Stiner, Munro, & Surovell, 2000; Stutz, Munro, & Bar-Oz, 2009; Weiss, Wetterstrom, Nadel, & Bar-Yosef, 2004). More recently, the BSR has re-entered discussions about the merits of theoretical approaches to the evolution of human subsistence (Bird, Bliege Bird, & Codding, 2016; Gremillion, Barton, & Piperno, 2014; Zeder, 2012), and the discussion is heated.

Low-Level Food Production

Often overlooked in discussions about the diversity of hunter-gatherer lifeways is the fact that hunter-gatherers, not farmers or herders, created the domesticated plants and animals that enabled the agricultural revolution. Archaeologists have looked for evidence of this creative process for many years, often identifying transitional phases to connect evidence for an earlier foraging-based subsistence strategy—i.e. one based on “food procurement”—to a later one based on farming and animal husbandry—i.e. “food production” (Binford, 1968; Braidwood, 1952; Childe, 1951; Flannery, 1968). For example, in terminal Pleistocene southwestern Asia, Natufian hunter-gatherers relied largely on wild gazelle, tree nuts, and intensive collection and perhaps cultivation of arid-adapted wild cereals like rye (Secale spp.). Though more sedentary than their Kebaran predecessors, considerable mobility characterized the Late Natufian during the Younger Dryas (ca. 13–11.5 kya), immediately prior to the earliest evidence for exploitation of domesticated cereals during the Pre-Pottery Neolithic A (PPNA), ca. 11 kya (Makarewicz, 2012). In Mesoamerica, the cereal domesticate was maize (Zea mays), a tropical grass derived from teosinte (Balsas teosinte) some 9,000 years ago or more. Here, however, maize domestication was affiliated with a long period of intensive use of wild plants and what were probably already-domesticated plants like squash (Cucurbita spp.), after the Younger Dryas (Ranere et al., 2009). Both cases were marked by a millennium or more of intensive exploitation of wild plant foods and the development of varying degrees of storage and sedentism, but their evolutionary connections to the complex farming societies that developed thereafter are diverse and remain contentious.

In any event, the very notion of a transitional phase that divides the Paleolithic from the Neolithic (the Epipaleolithic and the Mesolithic, for example) reveals that many scholars see agricultural and non-agricultural economies as fundamentally distinct, and that the shift from one to the other ought therefore to be archaeologically visible. This middle ground between foraging and farming is something that Smith (2001) called “low-level food production.” However, rather than arguing for a transitional phase, he argued that low-level food production was an evolutionarily stable economy in its own right, and like any other archaeologically visible cultural configuration, it might persist for hundreds or even thousands of years.

Typically, archaeologists distinguish food procurement from food production by an arbitrary threshold marking the relative importance of domesticated plants or animals to overall subsistence (e.g., 50% for Zvelebil [1996]; 75% for Winterhalder and Kennett [2006]). Others simply treat the entire sequence as a continuum of interactions among humans and other taxa involving progressive human input and progressive evolutionary change (Ford, 1985; Harris, 1989; Rindos, 1984; Zvelebil, 1996). Smith side-steps both approaches by defining low-level food production as a long period of successive (and sometimes progressive) change in human subsistence behavior, with domesticated plants and/or animals evolving somewhere in the middle.

Low-level food production therefore describes a subsistence system that incorporates a broad array of different resources requiring a broad range of inputs and tactics of exploitation. At once, this vast “middle ground” of human subsistence behavior may include both forager/traveler and collector/processor hunter-gatherers, the BSR, intentional management of wild resources and landscapes, pre-domestication cultivation, incidental domestication, incipient agriculture, various kinds of horticulture, and the many processes of resource intensification. Depending on the scale of research, low-level food production thus provides a conceptual framework for thinking about a broad swath of human behavioral diversity. However, in and of itself it is not a model for understanding or explaining how human groups operate, or how these operations evolve. It is simply an observation that the categories of “forager” and “farmer” or “food procurement” and “food production” are neither as monolithic nor as dichotomous as some suggest (Hunn & Williams, 1982), nor are they mutually incompatible economic types.

Evidence for low-level food production varies in quality, scale, and scope. Ethnographic observation, oral history, and archaeology all provide examples of people otherwise classified as hunter-gatherers who also cultivate tobacco, seed, and root crops (Deur, 2002; Steward, 1930, 1938; Tushingham & Eerkens, 2016), create and maintain productive aquatic ecosystems (Deur, Dick, Recalma-Clutesi, & Turner, 2015; Whitaker, 2008), and manipulate entire vegetation communities with both fire and mechanical means to enhance the productivity of both food and organic raw materials (Anderson, 1999, 2005; Bird et al., 2016; Lewis, 1973). Increasingly, archaeology points to very early evidence for the exploitation of small-seeded grasses ancestral to eventual plant domesticates, thousands of years prior to any evidence for the morphological attributes of domestication (Weiss et al., 2004; Willcox, 2012). Importantly, it looks as if the various attributes of domestication took hundreds to thousands of years to accumulate, suggesting that in many places the process of domestication was a protracted affair (Fuller, 2007; Purugganan & Fuller, 2011). Likewise, persistent exploitation of wild resources continued long after the initial domestication of plants and animals, with considerable global variation in the relative social, economic, and dietary importance of them, from the Neolithic to the present day.

The notion of low-level food production as a stable adaptive strategy has its critics. First, global ethnographic datasets (Hunn & Williams, 1982; Murdock, 1967) simply do not point to low-level food producing systems in the way that Smith describes them. Subsistence systems tuned to the spatial and temporal availability of wild plants and animals may be incompatible with the demands of cultivation, harvest, processing, and storage associated with food production (Bellwood, 2005; Flannery, 1968). Second, occasional, or sporadic production of high-investment, low-return resources may actually be a feature of instability, rather than stability (Bettinger et al., 2007). Furthermore, sporadic, low-intensity exploitation of wild plants and animals is unlikely to generate the environment of selection required to drive the process of domestication. Rather, the long middle-ground characterized by low-level food production (and the many archaeological “cultures” that mark it) likely marks a period of resource volatility where individual decisions about what to exploit are situational, perhaps reflecting a global reorganization of plant and animal communities from the end of the Pleistocene and the Younger Dryas to the middle Holocene, along with changes in the demography, technology, and social conditions underwriting the human exploitation of them (Richerson et al., 2001). In some cases, these conditions drove the domestication process but in most cases they did not (Bettinger, Barton, & Morgan, 2010). Low-level food production is therefore a useful way to describe a period of volatile social and technological change, but in itself has little explanatory power.

Late 20th Century Ethnographies

Some of the most useful contributions to our understanding of hunter-gatherer economies come from ethnographic work performed by human behavioral ecologists among groups living in Africa, South America, and Australia. This work hinged on seeing hunter-gatherer economies less as monolithic wholes with common goals and more as amalgams of different economic behaviors, each with its own incentives, which together are posited to make for better-adapted, multifaceted group economies. Among the Hadza of Tanzania, for example, Hawkes et al. (1989) investigated the evolutionary origins of senescence among humans (it is rare for animals to live so long after their reproductive capacities have ended) and concluded that older individuals, especially grandmothers, play a significant role not only in childcare and rearing, but also in group provisioning, which generates more calories for the group as a whole. The implication is that the division of labor and increased economic output made possible by older members of a group are important parts of the evolution of some of the most basic attributes of human economic and social activity. Similar observations have been made among the Meriam of the Torres Strait and the Martu of northwestern Australia, where several researchers have found that the less risky (but also lower-return) foraging and hunting activities of women and children help underwrite the more risky (but also higher-return) hunting activities usually performed by men.

Here again, differential foraging goals and the division of labor along age and gender lines underwrite the overall economic success of the group (Bird & Bliege Bird, 1997, 2000). Alternative foraging goals are also seen among the Aché of Paraguay, where men’s hunting decisions have been linked more to their garnering of prestige as successful hunters than to their immediate personal or familial caloric gain (Hawkes, 1991). A similar pattern has been found among the sea turtle hunting Meriam Islanders (Smith & Bliege Bird, 2000). The key here is that prestige-seeking behavior may confer greater access to mates for successful hunters, help cement long-term group social obligations, and also occasionally provide very high return resources which can free up other, less risk-seeking individuals like children, older males, and females to generate the bulk of the calories consumed by the group. This type of research, of which the above represents only a small part, is intriguing because it shows how different and apparently counterintuitive individual economic choices can articulate to produce better-adapted group economies. If true, this points to how non-kin-based economies evolved in human societies, a development critical to the much larger, articulated economies we see in agricultural, urban, and modern human societies across the Holocene and into the present day.

The Contemporary State of Hunter-Gatherer Research

Much contemporary work among hunter-gatherers is activist in nature: geared towards recognizing indigenous rights and helping such groups cope with contact and articulation with the global political economy. This work runs the gamut from establishing the first regular contact with disenfranchised groups like the Mascho Piro in Amazonia, who face increasing incursion into their traditional sphere by loggers and miners, to helping the whale hunting Inupiaq of northwestern Alaska cope with rising sea levels, the costs and benefits of participating in the region’s oil economy, and the constraints of federal and global prohibitions on certain subsistence resources. This type of work is concerned more with cultural preservation in the face of articulation with global, capitalist economies than with description or analysis of these economies. But there are several places where hunter-gatherer economies have persisted or even reversed course from complete articulation with the global scene. What follows provides a few examples of recent work while also drawing attention to examples of some of the more salient contemporary archaeological work on hunter-gatherer economies, where the opportunity for original hunter-gatherer economic research is considerably more varied.

Siberian Hunter-Gatherers in the Post-Soviet Era

One of the more interesting late 20th and early 21st-century developments in hunter-gatherer economic research consists of changes wrought on indigenous economies like those of the Giliak and the Orochen Evenkis in Siberia by the collapse of the Soviet Union in 1991. Hunting, herding, and fishing groups like these were incorporated into the Russian Empire in the late 19th and early 20th centuries, paying taxes often in the form of skins or pelts, which integrated their hunter-gatherer and herding economies with the larger Russian and world economy. With the rise of the Soviet Union during the 20th century, traditional lifeways and economic activities suffered under collectivization and subsidization by the Soviet state apparatus. But after the Soviet collapse in 1991, state subsidies disappeared and small groups of Orochen Evenki living in the Baikal region turned to traditional reindeer herding and wild animal hunting not only to subsidize their income, but also as focal economic pursuits. Anthropologists like Anderson (1991, 2006) track the economic and social changes undergone by these people, noting the resilience of their traditional use of space and social structure to external change and how their resurgent hunting economy has become a viable economic pursuit. Critical to the subject at hand is that hunting wild animals forms a significant part of the Evenki subsistence base and that the return to this economic activity came after incorporation into state-level and state-sponsored economies largely reliant on agriculture, once again throwing the notion of unilinear evolutionary trajectories out the door at the dawn of the 21st century.

South American Ethnography and Archaeology

Contemporary studies of hunter-gatherer economies in South America fall into three groups. Where the foraging lifeway was fully (or nearly so) supplanted by food production and domestication (e.g., in highland Peru), focus is on the timing, causes, and nature of the transition and how it changed facets of social and political life (Dillehay, 2011). Where hunting and gathering persisted as the dominant economy until contact with Europeans (e.g., Patagonia), attention centers on human contributions to Pleistocene megafaunal extinctions and more general foraging dynamics and their change through time (Borrero, 2013). Critical in this regard is the effect of climate change on hunter-gatherer adaptation, where some identify depopulation and abandonment, but also more intensive use of favorable locales brought about by middle Holocene warming (Garvey, 2012; Yacobaccio & Morales, 2005), substantial settlement pattern shifts during the Medieval Climatic Anomaly (ca. 1.1–0.6 kya) (Morales et al., 2009), and evidence of a shift away from domesticates (and towards consumption of wild species) during the Little Ice Age, 600–150 years ago (Gil et al., 2014). Lastly, where relatively small groups survive currently on wild resources or did until the very recent past (e.g., the Aché), ethnographic studies center on the microeconomics of hunting and gathering (Janssen & Hill, 2014). The contributions of researchers working among the Aché have already been described, but the import of these studies cannot be overstated given the microeconomic data this research records and the utility of such quantitative measures to understanding the economics behind subsistence choices, differential foraging goals, the division of labor along age and gender lines, and the way cooperation develops among kin and non-kin alike.

North American Archaeology

Though hunter-gatherers inhabited the continent for at least seven millennia before the initial domestication of plants in what is now the southeastern United States and for eleven millennia before the wholesale adoption of the Mesoamerican domestic triumvirate of maize, beans, and squash in the American Southwest and eastern United States, most 21st century hunter-gatherer oriented archaeological research focuses on those areas where hunter-gatherers were still on the scene when European explorers arrived in the 16th and 17th centuries: the west coast and the arid regions west of the 100th Meridian.

Most of this research is ecologically-focused and situated to explain the development of more intensive, higher-yield economies. In coastal Texas, for example, Hard and Katzenberg (2011) use isotopic data on human skeletal material to argue that there was a switch to eating low-return plant foods there beginning around 2.5 kya, which Johnson and Hard (2008) attribute to demographic increase and population packing—to the point that population densities approached those of maize-based agriculturalists, which may explain why agriculture was never adopted in the region: hunter-gatherer economies were as productive as or more productive than agricultural ones. Conversely, on the California coast, researchers like Erlandson et al. (2009) document shifts to eating large-bodied pelagic fish in the late Holocene, which they attribute to the invention or adoption of sophisticated technologies like canoes, nets, and fishhooks which facilitated the capture of large-bodied, high-return prey. Here, improvements in technology led to increased economic efficiency and eating higher on the food chain, in contrast to the patterns seen in coastal Texas and across much of arid western North America.

There has also been a resurgence of interest in the ways hunter-gatherers modify their environment to increase the yield of desirable food and other resources by burning, in a vein similar to that seen in Australia (Anderson & Rosenthal, 2015; Cuthrell, Striplen, Hylkema, & Lightfoot, 2012; Lightfoot et al., 2013). Seen in archaeological (Cuthrell, 2013) and paleoecological datasets (Klimaszewski-Patterson & Mensing, 2016) is convincing evidence that native Californians modified the biota of the state to such an extent that the historical California landscape might be seen as more the result of cultural than natural processes. This is important because it shows the degree to which non-farming societies modify the landscape to construct ecological niches conducive to supporting large, semi-sedentary hunter-gatherer populations (Broughton, Cannon, & Bartelink, 2010). The extent to which hunter-gatherers shaped the evolution of global environmental systems, and indeed their own biocultural evolution, through various forms of landscape modification and management over their entire history, is an emerging and important body of research worldwide (Archibald, Staver, & Levin, 2012; Bird et al., 2016; Boivin et al., 2016).

The Future of Hunter-Gatherer Studies

Given the proliferation of the global economy and the spread of logging, mining, and other developments into the few lands still occupied by people who are largely reliant on wild resources, it seems clear that future study of living human foragers will be limited and will require considerable creativity, relying on the occasional ethnography geared, perhaps, more towards understanding how and why people supplement their economic output with wild resources in the face of economic hardship. Archaeology holds more promise, in part because of the geographic and temporal scope of materials available to archaeologists but also because archaeology retains the ability to identify hunter-gatherer behavioral diversity without necessarily relying on ethnographic analogy.

So what is left to learn about hunter-gatherers through observation of contemporary people? There are of course a few groups of people still identifiable as hunter-gatherers (e.g., the Hadza, Raute, Aché, and Martu), and it does not matter whether they have always lived as foragers or have returned to a foraging life after exclusion from some other economic strategy, as is the case with the Orochen Evenkis, the Mikea, and arguably, the !Kung San (Anderson, 2006; Tucker, 2002; Wilmsen, 1989). The opportunity to learn how people manage to provision themselves without agricultural products, market exchange, or state support exists, but the range of opportunities is smaller than it was 100, 50, or even 20 years ago. Methods of study, therefore, must be creative.

In addition to those few remaining cases where foraging is “traditional,” there are also opportunities to learn from people living in the interstices of contemporary life. Much in the same way that some people carve out opportunities for low-level food production in urban environments (Balmori & Morton, 1993), others manage to make ends meet through collecting and hunting, particularly during periods of economic stress. In the future, opportunities for understanding how hunting and gathering actually work may come from the demand for food in cases of extreme economic hardship or sociopolitical instability. Though somewhat nontraditional from an anthropological perspective, if the objectives of studying hunter-gatherers are aimed at understanding how individuals and groups function and evolve in the absence of state bureaucratic structure, capitalist economic structure, or domesticated plants and animals, it doesn’t really matter where the insight comes from.

Many of these studies will no doubt focus less on materialistic concerns and more on questions of human rights and their abuse. Indigeneity, what it means, and how it shapes the global political landscape, will feature prominently in these discussions. A growing number of contemporary indigenous communities record their own economic data, often referred to as Traditional Ecological Knowledge, with and without anthropologists. Many tribal communities, particularly in North America, have large cultural programs with archaeologists on staff, and there is an ongoing “first foods” movement, with health programs often explicitly promoting traditional foods in the diet of native communities. This shared interest in subsistence and diet can link archaeology and indigenous communities in a way that has not yet been fully realized. Furthermore, the effects of commercial, economic, and industrial development on traditional life will also be of interest. The extent to which these interests will help us understand hunter-gatherer economies is an open question. In all of this there will still be opportunities to understand the social impacts of technological change, and indeed, the nature of technological change itself. Likewise, these conditions engender opportunities to learn more about the intergenerational transfer of information, wealth, and potential. Further, they may afford opportunities to learn about the nature of conflict that comes with intergenerational difference in ethics, language, and practice.

Beyond this, archaeology is clearly poised to reveal more about hunter-gatherer economies than any other source of information: the timespan under consideration extends well into the Pleistocene and the geographic scope is worldwide. But are we prepared to let archaeology be the sole source of information about hunter-gatherer life? How will the social sciences deal with misunderstandings about our “Pleistocene predisposition” (Eaton, Cordain, & Lindeberg, 2002; Eaton, Shostak, & Konner, 1988) and the nature of the “environment of evolutionary adaptedness” (Cosmides & Tooby, 1987)? How will archaeologists deal with the limits of their tools to interpret and understand the proxy evidence for past behavior? Likewise, how will archaeologists improve their methods for analyzing broad patterns of archaeological data that attest to the diversity of hunter-gatherer lifeways? And how can they do this while protecting cultural heritage? Finally, what will be the role of descendant communities in global archaeological work?

As previous sections make clear, however, the study of hunter-gatherer economies requires, and will increasingly require, collaboration with a variety of disciplines beyond anthropology. Interpretations of prehistoric foraging economies and their evolution rely on detailed paleoclimatological and paleoenvironmental data to identify the opportunities and challenges associated with past environments. Ethnobotanical and genetic research provides crucial information pertaining to the availability, edibility, and domesticability of plants. To the extent that foraging behaviors are influenced by conspecific competition and/or social learning, demography is an important line of evidence. Studies of modern diet and nutrition can inform model building, allowing human behavioral ecologists to incorporate things like nutrient bioavailability and complementarity, for example. Beyond understanding the physical aspects of foraging economics, anthropologists increasingly recognize the importance of psychology and cognitive neurobiology for understanding how human motivations, decision-making, teaching, and learning affect foraging behaviors.

In sum, future research on hunter-gatherer economies will likely be largely archaeological but will also be strengthened by creative approaches to studying extant peoples living on the periphery of the global market economy and state apparatuses. The work will clearly be interdisciplinary, falling as it does between the approaches of the physical and social sciences and the more humanities-oriented research of some cultural and activist anthropologies. Beyond data acquisition and access to study populations, however, the real challenge will be to recognize and try to operate outside of the constraints of whatever trope is currently in vogue with regard to how humans interact with and extract economic benefit from wild resources. As the history of hunter-gatherer research shows, how hunter-gatherers and their economies are conceptualized—whether brutish, noble, primitive, adapted, affluent, simple, or complex—often tells us more about ourselves and our current sociopolitical and ideological milieu than about the foraging societies themselves. Operating outside these tropes requires, first and foremost, that the object of study be human economic behavior in all its forms. Put another way, understanding how groups in the past solved problems of resource acquisition, storage, and distribution generates greater understanding of human economic behavior as a whole; whether it is based on wild or domestic products is to some extent irrelevant.

Conclusion

There are no easy answers regarding exactly what hunter-gatherers are or are not or what their evolutionary relationships are with agriculturalists. While there can be no question that hunting, fishing, and gathering were the earliest modes of human economic production and that hunting and gathering preceded and ultimately gave rise to agriculture, these are superficial and facile generalizations. Difficulty making more nuanced generalizations stems from the fact that until relatively recently (in evolutionary terms), hunting and gathering was the only mode of human economic production, whether in the tropics, on the coast, in the desert, on the steppes, in temperate zones, or nearer the poles. Add to this the fact that hunting and gathering is older than our species and persisted in this multiplicity of environments for some 200,000 years after the evolution of Homo sapiens, and it is clear that the opportunity to develop diverse lifeways centered on wild resources with no ethnographic or historical analog was indeed considerable. Given this, it bears repeating that hunting and gathering is merely an economic orientation geared towards exploiting wild plants and animals that subsumes degrees of diversity in population density, technology, social structure, and ideology that are comparable to or even exceed those associated with other modes of economic production.

When it comes to the relationship of hunting and gathering to agriculture, it is worth considering the types of behaviors agriculture actually requires. Agriculture necessitates bulk resource acquisition, generation of surplus, storage, considerable labor inputs, almost certainly the division of labor, specialized technologies, environmental manipulation, and most likely a reorientation of social norms geared towards recognizing private property, if only to allow for the incentive to store costly, bulk-acquired domesticated crops (Bettinger, 2006). It is abundantly clear that all of these behaviors are found in one form or another among what many researchers term “complex” hunter-gatherer societies, from the ethnographically-documented sedentary and storing societies of coastal western North America, to the Mesolithic societies of Europe and the Jomon in Japan in the early Holocene, to the Natufian hunter-gatherers of southwest Asia during the terminal Pleistocene. It may even be the case that an alternative strategy of broad spectrum, low-level food production may represent an evolutionary pathway outside of the forager-farmer continuum. Given this, it is clear that most if not all of the technological, social, and even ideological behaviors associated with agriculture were present in hunting and gathering societies before the development of economic modes of production centered on domesticates. It is also clear that engaging in these types of behaviors does not necessarily mean that agriculture will eventually develop out of them.

This suggests that, at least in its incipient state, agriculture was not very different from intensive hunting and gathering, the only real difference being the degree to which artificial selection had altered the wild plants and animals that early agriculturalists exploited. It is indeed entirely possible that hunter-gatherers through the Late Pleistocene and early Holocene experimented with deliberate planting, with most of these experiments ultimately failing. During the Pleistocene this appears due mainly to climatic variability and low atmospheric CO2. During the Holocene this failure was likely due in large part to the failure of social relations of production to develop that would offset the freeloader problem found in almost all small-scale hunter-gatherer societies. The future of hunter-gatherer studies, though it faces many challenges, consequently stands to shed additional light on the origins of human behavioral diversity, and on how aspects of this diversity eventually resulted in the requisite mix of technologies, labor practices, and social norms needed to make agriculture not only work, but also outcompete hunter-gatherer modes of production over the long haul.

What is it about trains?

The question is: do autistics obsess about trains, or is this attachment a stereotype that has been repeated so many times that it has become the “go-to” neurotypical junk cliché about autistics?

Comment from a train fansite.

Though it is a stereotype, there may be a correlation indeed, based on the few autistic people I know compared to the mass of NTs. What I like about trains is that they’re very harmonious – I like the way a train moves: no abrupt changes in pace or trajectory, but soft, regular moves and always on its railway.

Then the way a train network is organized and connects in many points makes for a great logical circuit. With many possible paths, and many trains at the same time on the network, all the technical/electronic equipment to back up the trains is also fascinating. In this regard, a train is really predictable, logical, and I find that relaxing, reassuring, when at the same time all the technology and logic behind it make it most interesting.

Don’t know if that makes sense to any of you guys, because I’m a NT, so what do I know…

__________________________________________________________________________________________

Is it not possible that trains are wonderful things, and that the qualities of the train, in particular, and as a system, are evident to certain people, regardless of NT or autistic perception? 

Dear psychologists, let us love trains, in the way that dolphins love the sea, and in the way that we love watching dolphins love the sea, and in the way that we envy the dolphin its freedom of movement…

__________________________________________________________________________________________

Boring? No: soothing and hypnotic. A great way to “settle” your brain into calm rhythms. It’s a shame that the U.S. has destroyed its long distance passenger system.

NOTE (with video): A train is a form of rail transport consisting of a series of connected vehicles that usually runs along a rail track to transport cargo or passengers. Motive power is provided by a separate locomotive or individual motors in self-propelled multiple units. Although historically steam propulsion dominated, the most common modern forms are diesel and electric locomotives, the latter supplied by overhead wires or additional rails. Other energy sources include horses, engine or water-driven rope or wire winch, gravity, pneumatics, batteries, and gas turbines. Train tracks usually consist of two running rails, sometimes supplemented by additional rails such as electric conducting rails and rack rails, with a limited number of monorails and maglev guideways in the mix.

This is a clip from the NRK TV program “Bergensbanen Minutt for Minutt”, which shows the train ride through beautiful Norwegian landscape. Finse is the highest station on the Norwegian railway system, at 1222 meters above sea level.

The original footage is made and owned by NRK, and is licensed under a Creative Commons Attribution 3.0 license.

Bonus Art Links / With ASD appeal? You tell me.

 

https://www.kuksi.com/

http://secretlifeofsynthesizers.com/gallery/

https://www.exploratorium.edu/video/strandbeest-dream-machines-theo-jansen

 

Metaphor, Analogy, Simile / Tragic Sci-Tech Examples

Let’s begin with an immediately comprehensible comparison. 

_________________________

Below: A standard “analogy” in basic physics courses: Does this really make “electricity-magnetism” accessible to the average student?

The problem is that even basic physics courses assume that the student has “hung out” at the local water plant or was raised by a plumber. If not, the water system analogy means that the student must now understand the water system in addition to struggling with the electrical system.

Water wheels, grindstones? A bit archaic, no? 

Okay – so the water system analogy isn’t terrible, but here is where the use of analogy drives me bonkers: number, quantity, volume, weight, density, forces. Comparisons to strange objects are believed to make extremes of number and scale “comprehensible” to the human brain. Again – the assumption is that an “equivalent” such as the earth covered in marbles or peas to some “impressive” depth:

  • is meaningful;
  • has a possibility of occurring outside of a supernatural “miracle”;
  • will ever be observed by one or more human beings;
  • will reduce the problem of incomprehensible quantity, number, etc. in comparison to “human” scale.

But, “1/18th of the surface area of the sun” makes “Avogadro’s Number” perfectly clear! What was it we were trying to explain? I’ve forgotten, and I have a headache.

Another terrific assumption is that Olympic swimming pools and football fields are perfectly reasonable examples of “intuitive” volumes and areas because everyone has watched the Olympics on TV or has been to a football game.

And a more problematic question: Why are we presenting students with ridiculous analogies for actual, measurable physical phenomena, when the function of teaching science and technology is to impart awareness and knowledge of “how the universe works”? What we’re telling them is that physical properties, relationships and behaviors are baffling; that physical reality measurements are fantastical and incomprehensible. And why must we understand measurements in “relatable ways” at all? Isn’t that a function of mathematics – to make the humanly “ungraspable” available and easier to work with?
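The point practically demonstrates itself: scientific notation already does the job the pea-and-marble analogies fail at. A minimal Python sketch – the Avogadro constant is the exact SI-defined value; the carbon arithmetic is just the standard molar-mass example, chosen here purely for illustration:

```python
# Scientific notation handles "ungraspable" numbers directly --
# no football fields or pea-covered planets required.

AVOGADRO = 6.02214076e23  # particles per mole (exact, by SI definition since 2019)

# Standard textbook arithmetic: one mole of carbon-12 is 12 grams,
# so a kilogram of carbon contains (1000 / 12) moles of atoms.
atoms_in_12g_carbon = AVOGADRO
atoms_in_1kg_carbon = AVOGADRO * 1000 / 12

# Three significant figures is all a human needs to work with the value:
print(f"{atoms_in_1kg_carbon:.3e}")  # 5.018e+25
```

Powers of ten keep the arithmetic trivial and the magnitude explicit – which is exactly what the “relatable” comparisons obscure.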

While science education is making physical reality (which we occupy and depend on) “obscure and incomprehensible,” religions and politicians are doing the opposite:

Is it any mystery as to why millions of Americans believe that climate change, global warming and other major systemic problems are “government conspiracies?”

And in case one might imagine that biology and other areas are any less idiotic:


Do Safe Places Exist? / A Modern American Myth

Safety is on my mind.

Are there any “safe places” in which humans may live? Were there ever?

If someone asked me to describe the theme that characterizes my path in life – meaning the inner road that holds a “self” together: choices, decisions, preferences – I see a prey animal (a rabbit is a good symbol) perpetually seeking safety, or at least refuge. The image is ridiculous, but not because it fails to express a truth: this “environmental circumstance” is true for every child, for every human, for every animal. The revelation is that my actions trend toward the opposite: intuition and rational observation both declare that there is no safety – so keep moving. Danger comes with standing still, both intellectually and psychologically. Survival skills are the result of survival.

Safety, is of course, a social illusion –

a stitching together of pretty nonsense – TV commercials are the perfect example – pretentious syrupy sermonizing; those 30-second pompous narratives, portraying obnoxious “masters of the universe” destinies: Myths. 

The message is:

Today’s technological wizardry means never having to leave your gated and surveilled “bunker,” thanks to a preemptive perimeter that “keeps out” predatory humans. Living spaces have become military outposts disguised as traditional homes. Everything you need or want can be delivered to your castle door. The truth is that the “wizardry” is the access point for modern predators.

This “vision” merely extends the 1950s post-war hope for perfection; a modest house, a new car, and a nearby shopping mall. New school buildings for all those new kids being made in new bedrooms and delivered in new and modern hospitals. The promise made was that,

Safety was never having to leave suburbia. 

That illusion should have died a half century ago, with one of the first acts of terror perpetrated by an overlooked and ignored white male obsessed with being powerless. In my mind, the shift in the idea of safety as a “white privilege” began with the assassination of President Kennedy. A barrier was broken; the shock was that some “nameless, faceless, nobody white guy” could murder Jack Kennedy – a “god” to many people; an American Emperor.

Wealth, power, privilege, status and the Secret Service could not protect the president from one nobody with a rifle. 

Patterns can be so obvious. And yet the American NT Universe is still in denial. The very vulnerability of public schools to mass murder by “one nobody with a gun” fails to generate the recognition that there are no safe places; OUR CHOSEN LEADERS and We the People have guaranteed that there cannot, and will not, be any safe places.

Americans have actively and deliberately spent the previous 50-60 years glorifying violence, developing and manufacturing weapons of every conceivable type and destructive power, inflicting violence on foreign people and on our own citizens, and guaranteeing access to weapons, not only to every American citizen (sane, dangerous, or unstable) but to every malicious crackpot on the face of the earth.

This stupidity is true mental illness.

__________________________________________________________________________________________

Note below the “definition” of domestic terrorism, which does not actually include “young male” mass shooters.

https://www.fbi.gov/stats-services/publications/terrorism-2002-2005

FBI Policy and Guidelines

In accordance with U.S. counterterrorism policy, the FBI considers terrorists to be criminals. FBI efforts in countering terrorist threats are multifaceted. Information obtained through FBI investigations is analyzed and used to prevent terrorist activity and, whenever possible, to effect the arrest and prosecution of potential perpetrators. FBI investigations are initiated in accordance with the following guidelines:

  • Domestic terrorism investigations are conducted in accordance with The Attorney General’s Guidelines on General Crimes, Racketeering Enterprise, and Terrorism Enterprise Investigations. These guidelines set forth the predication threshold and limits for investigations of U.S. persons who reside in the United States, who are not acting on behalf of a foreign power, and who may be conducting criminal activities in support of terrorist objectives. 
  • International terrorism investigations are conducted in accordance with The Attorney General Guidelines for FBI Foreign Intelligence Collection and Foreign Counterintelligence Investigations. These guidelines set forth the predication level and limits for investigating U.S. persons or foreign nationals in the United States who are targeting national security interests on behalf of a foreign power.

Although various Executive Orders, Presidential Decision Directives, and congressional statutes address the issue of terrorism, there is no single federal law specifically making terrorism a crime. Terrorists are arrested and convicted under existing criminal statutes. All suspected terrorists placed under arrest are provided access to legal counsel and normal judicial procedure, including Fifth Amendment guarantees.

Definitions

There is no single, universally accepted definition of terrorism. Terrorism is defined in the Code of Federal Regulations as “the unlawful use of force and violence against persons or property to intimidate or coerce a government, the civilian population, or any segment thereof, in furtherance of political or social objectives” (28 C.F.R. Section 0.85).

This “definition” (political or social objectives as motivation) does not include young male CHILDREN who are seeking revenge, are mentally unstable, or are otherwise determined to become infamous, and who are failing to make the transition to adulthood. Psychological neoteny has pushed the acquisition of adult thought and behavior well into the 20s, the 30s, or never – for males and females. Little support, guidance or instruction on “how to become an adult Homo sapiens” is provided in American culture: THE MODEL presented to male children is that of a testosterone-fueled rage machine whose only “solution” to overwhelming feelings of powerlessness and impotence is to pick up a gun and shoot to kill…

The FBI further describes terrorism as either domestic or international, depending on the origin, base, and objectives of the terrorist organization. For the purpose of this report, the FBI will use the following definitions:

  • Domestic terrorism is the unlawful use, or threatened use, of force or violence by a group or individual based and operating entirely within the United States or Puerto Rico without foreign direction committed against persons or property to intimidate or coerce a government, the civilian population, or any segment thereof in furtherance of political or social objectives.
  • International terrorism involves violent acts or acts dangerous to human life that are a violation of the criminal laws of the United States or any state, or that would be a criminal violation if committed within the jurisdiction of the United States or any state. These acts appear to be intended to intimidate or coerce a civilian population, influence the policy of a government by intimidation or coercion, or affect the conduct of a government by assassination or kidnapping. International terrorist acts occur outside the United States or transcend national boundaries in terms of the means by which they are accomplished, the persons they appear intended to coerce or intimidate, or the locale in which their perpetrators operate or seek asylum. 

Morning Thoughts / “$$$$ research” that proves “the obvious”

Headache: Reading research on brain development in childhood; what is “normal” and what is “not”.

(Hint: “normal” is the state of tolerating brain damage because it adapts one to high stress human social environments and unhealthy “deprived” physical environments. Those individuals who become “sickened” by conditions that harm living things are defective, like smoke alarms that actually respond to smoke!)

__________________________________________________________________________________________

I’m not “picking on” these specific people: this article is merely one of thousands that disclose a severe problem – billions of $$$ being spent to “research the obvious” while so little is spent on real preventive help for children and families. It’s SO FRUSTRATING for an Asperger: the neurotypical limitation of “letting things get screwed up” and only then “coming up with” some kind of “technical fix” that too often merely compounds suffering with new unintended consequences – Dig that hole deeper and deeper is the prime directive for neurotypicals.

Effects of Early Life Stress on Cognitive and Affective Function: An Integrated Review of Human Literature

The brilliant conclusion? Childhood abuse, neglect and trauma f**k with living things. Brilliant?

No, repetition of the OBVIOUS pattern:   

The goal of all this research? To somehow “fix” screwed up brains using high-tech engineering to “repair” what’s broken. Not brilliant! The neurotypical pattern is to perpetuate the social structures and toxic environments that damage human beings; let the damage occur, and then send out “recall notices” to come in and have your brain repaired (or further messed up).

The actual “thinking” behind NT behavior is dumb. The illusion that “technical breakthroughs solve problems” is so short-sighted and disproven by social history. Another obvious failure of NT insight into the incredible gap between narcissistic self-assessment and the lack of competency in practical, preventative action. 

Do we want people to be healthy and happy? Or do we want people to be f**k’d up? It’s a simple question.


Thoughts on Ancient Males / Life in the flesh

In the ancient world a common greeting among travelers was, “Which gods do you worship?” Deities were compared, traded, and adopted in recognition that strangers had something of value to offer. Along with the accretion of ancestor gods into extensive pantheons, an exchange of earthly ideas and useful articles took place. Pantheons were insurance providers who covered women, children, tradesmen, sailors and warriors – no matter how dangerous or risky their occupations; no matter how lowly. Multiple gods meant that everyone had a sympathetic listener, one that might increase a person’s chances for a favorable outcome to life’s ventures, large and small.



A curious female: The goddess Athena is incomprehensible to modern humans; and yet for the ancient Greeks, she was the cornerstone of civilization. Here she models the Trojan horse for the “clever” takedown of Troy.


In The Iliad

…the gods are manifestations of physical states: the rush of adrenalin, sexual arousal, and rage. For the Homeric male, these are the gods that must be obeyed. There is no power by which a man can override the impulse-to-action of these god forces. The gifts of the notorious killer Achilles originate in the divine sphere, but he is human like his comrades; consumed by self-pity and emotionally erratic.

In Ancient Greek culture, consequences accompanied individual gifts. Achilles must choose an average life (adulthood) and obscurity, or death at Troy and an immortal name. Achilles sulks like a boy, but we know that he will submit to his fate, because fate is the body, and no matter how extraordinary that body is, the body must die. Immortality for Homeric Greeks did not mean supernatural avoidance of death. To live forever meant that one’s name and deeds were preserved by the attention and skill of the poet. In Ancient Greek culture it was the artist who had the power to confer immortality.

There was no apology for violence in Homeric times. The work of men was grim adventure. Raids on neighbors and distant places – for slave women, for horses and gold, for anything of value – were a man’s occupation. The Iliad is packed with unrelenting gore, and yet we continue to this day to be mesmerized by men who hack each other to death. Mundane questions arise: were these Bronze Age individuals afflicted with post-traumatic stress disorder? How could women and children, as well as warriors, not be traumatized by a life of episodic brutality? If they were severely damaged mentally and emotionally, how did they create a legacy of poetry, art, science and philosophy? Did these human beings inhabit a mind space that deflected trauma as if it were a rain shower? Was their literal perception of reality a type of protection?


Women will forever be drawn to the essential physicality of Homeric man. He is the original sexual male; the man whose qualities can be witnessed in the flesh. His body was a true product of nature and habit. Disfiguring scars proved his value in battle. Robust genes may have been his only participation in fatherhood.

Time and culture have produced another type of man, a supernatural creature with no marked talent, one who can offer general, but not specific, loyalty. Domestic man, propertied man, unbearably dull man, emotionally-retarded man. In his company a woman shrivels to her aptitude for patience and endurance, for heating dinner in the microwave and folding laundry. Her fate is a life of starvation.


Noble Penelope reduced to a neurotypical nag.

A Winter of Life Message / Who is Eckhart Tolle?

Who is Eckhart Tolle? Eckhart Tolle is a German-born resident of Canada best known as the author of The Power of Now and A New Earth: Awakening to your Life’s Purpose. In 2008, a New York Times writer called Tolle “the most popular spiritual author in the United States”.   Wikipedia

I don’t know of this person: he sounds a bit “New Age-y.” Lots of pithy quotes all over the internet. He’s just about my age, so that may explain why this statement “resonates” at this point in my life, when the body we count on is well on its way to breaking down and lurching toward the inevitable. I think the quote is wasted on young people. An act of surrender and bravery is necessary to embrace it, an act that takes a lifetime to acknowledge.

He could have said this one thing and nothing else. It really sums up what life is about. The stupid defiance of “what is” – a constant uphill trudge, battle, struggle to “become” someone – a viable, admirable sprig of life-force that makes its mark – whatever that is. In nature, all this seems automatic: mathematical, chemical, electrical life becoming, evolving – terrible in its ruthless paring down of species into improbably successful and beautiful forms – temporary, all of them. And then there is “us.”

Hell bent on defying nature: swimming upstream, spewing toxins, garbage, waste from our pretty technically savvy vehicles. Congratulating ourselves on having peanut butter in jars, mechanical eyelash curlers, fake fur garments, a gluttonous desire for pizza, remote controls for refrigerators, garage doors and the ability to spy on our children, our dogs, cats, parakeets and snakes; on our front porch deliveries, on road conditions in Zanzibar or the price of sandals in Morocco. And we’re promised / warned that there’s much more of this to come… It’s lovely and cute in a way… giving the finger to nature.

So, resistance is futile, says Mr. Tolle. But without forces to resist, would humans be human? No. But in old age it’s okay to recognize futility; to embrace the lessening need to resist anything.

This is absolutely true if you live in Wyoming…


Fate / Human maladaptation to a future world

Each of us is born into a world that is not of his or her own making. The trouble is that it’s no longer a world that nature has prepared us for. DNA is like a suitcase full of physiological plans, functions and designs; of physics, chemistry, thermodynamics and electromagnetic energy, arranged by billions of years of testing for operational brilliance in an environment that no longer exists.

Human babies are like time travelers, adapted to a strenuous existence in forest and desert; along rivers, lakes and seashores; ready to learn, survive and excel, and to be a wild animal, among wild animals.

We arrive in a place many futures ahead of where we belong. In a hospital. Among machines, without which more and more babies would die on arrival. Not a living thing in sight. To parents whose bodies have adapted rather badly to an artificial world, not of their own making – trapped in a world not of their own making. The dysfunction of being born into a toxic future, for which our DNA suitcase does not prepare us, accelerates – not by a few years, but by thousands of years in mere generations.

The DNA suitcase is becoming useless. We don’t function; we cannot adapt; we can only maladapt.

So what do we do? A frantic response: Attack our DNA. Cut it apart, rearrange it, combine it, mix it like a salad. Hope that we can keep ahead of the future, a future in which dysfunction is normal. Are we there yet?  

____________________________________________________