Question / Is Common Sense even better than Empathy?

My posting has slowed to almost nothing since last Saturday:

Summer at last; warm winds, blue skies, puffy clouds. The dog and I are both delirious over the ability to “get out of” quasi-imprisonment indoors.

Into the truck; a short drive to the south, up and over the canyon edge into the wide open space of the plateau. Out into “the world again,” striding easily along a two-rut track that goes nowhere; the type established when the driver of a first vehicle turns off the road, through the brush, headed nowhere. Humans cannot resist such a “lure” – who drove off the road, and why? Maybe the track does go somewhere. And so the tracks grow, simply by repetition of the “nowhere” pattern. Years pass; ruts widen, deepen, are bypassed, smoothed out, and grow again, becoming as permanent and indestructible as the Appian Way.

This particular set of ruts is a habitual dog-walking path for me: the view, the wind, the light, the sky whipped into a frenzy of lovely clouds… and then, agony. Gravel underfoot has turned my foot, twisting my ankle and plunging me into a deep rut and onto the rough ground. Pain; not Whoops, I tripped pain, but OMG! I’m screwed pain. I make a habit of glancing a few feet ahead to check where my feet are going, but my head was in the clouds.

This isn’t the first time in 23 years that I’ve taken a fall out in the boonies: a banged-up shin or knee, a quick trip to the gravel, scraped hands, even a bonk on the head. But now… can I walk back to the truck, or even stand up? One, two, three… up.

Wow! Real pain; there’s no choice. Get to the truck, which at this point appears to be very, very far away. Hobble, hobble, hobble; stop. Don’t stop! Keep going. Glance up periodically to check whether the truck is “growing bigger” – reachable. I always tell myself the same (true) mantra in circumstances like this: shut out time, let it pass, and suddenly, there you will be, pulling open the truck door and hauling yourself inside.

There is always some dumb luck in these matters: it’s my left ankle. I don’t need my left foot to drive home. Then the impossible journey from the truck to the house, the steps, the keys, wrangling the dog and her leash, trying not to get tangled and fall again – falling through the doorway, grabbing something and landing on the couch. Now what?

That was five days ago. Five days of rolling around with my knee planted in the seat of a wheeled office chair, pushing with the right foot as far as I can go, then hopping like a one-legged kangaroo the rest of the way. Dwindling food supplies; unable to stand to cook; zapping anything eligible in the microwave. No milk in my coffee. Restless nights. Any bump to my bandaged foot wakes me up. This is ridiculous! My life utterly disrupted by a (badly) sprained ankle. I think I’m descending into depression.

Bipedalism, of course, begins to take over my thoughts. But first, I try to locate hope on the internet, googling “treatment for sprained ankle.” You’re screwed, the pages of entries say. One begins to doubt “evolution” as the master process that produces elegant and sturdy design. Ankles are a nightmare of tiny bones and connecting ligaments, with little blood supply to heal the damage; once damaged, you can expect a long recovery, intermittent swelling, and inevitable reinjury for as long as you live.

It seems that for our “wild ancestors” a simple sprain could trigger the expiration date for any individual unlucky enough to be injured: the hyenas, big cats, bears and other local predators circle in, and then the vultures. Just like any other animal grazing the savannah or born into the forest, vulnerability = death. It’s as true today as it ever was. Unless someone is there with you when you are injured, you can be royally screwed: people die in their own homes due to accidents. People die in solo car wrecks. People go for a day hike in a state park and within an hour or two require rescue, hospitalization, and a difficult recovery, all from one slip in awareness and focus. And being in the company of one or more humans hardly guarantees survival. Success may depend on their common sense.

So: the question arises around this whole business of Homo sapiens, The Social Species. There are many social species, and it is claimed that some “non-human” social species “survive and reproduce successfully” because they “travel together” in the dozens, thousands or millions and “empathize” with others of their kind. Really? How many of these individual organisms even notice that another is in peril, other than to sound the alarm and get the hell out of the danger zone or predator’s path? How one human mind gets from reproduction in massive numbers – that is, playing the “numbers game” (1 in 100, 1 in 1,000, 1 in 100,000 new creatures survive in a generation) and congregating in vast schools and flocks to improve the odds of “not being one of the few that gets caught and eaten” – to “pan-social wonderfulness” is one of the mysteries of the social human mind.

There are occasions when a herd may challenge a predator or a predatory group; parents (usually the female) will defend offspring in varying manner and degree. But what one notices in encounters (fortuitously caught on camera, posted on the internet or included in documentaries) is that solitary instances are declared to represent “universal behavior” and proof of the existence of (the current fad of) empathy in “lesser animals”. What is ignored (inattentional blindness) and not posted is the usual behavior: some type of distraction or defensive behavior is invested in, but the attempt is abandoned at some “common sense point” in the interaction; the parents give up, or the offspring or herd member is abandoned.

What one notices is that the eggs and the young of all species supply an immense amount of food for other species.

Skittles evolved solely as a food source for Homo sapiens children. It has no future as a species. LOL

I’ve been watching a lot of “nature documentaries” to pass the time. That abundance of easy prey is, in its way, an extraordinary “fact of nature”. Our orientation to extreme Darwinian evolution (reductionist survival of the fittest) is stunningly myopic. We create narratives from “wildlife video clips” edited and narrated to confirm our imaginary interpretation of natural processes: the baby “whatever” – bird, seal, monkey, or cute cub – scrambling, helpless, clueless, “magically” escapes death (dramatic soundtrack, breathless narration) due to Mom’s miraculous, just-in-the-nick-of-time return. The scoundrel predator is foiled once again; little penguin hero “Achilles” (they must have names) has triumphantly upheld our notion that “survival is no accident” – which in great measure is exactly what it is.

One thing about how evolution “works” (at least as presented) has always bothered me no end: the insistence that the individual creatures which survive to reproduce are “the fittest”. How can we know that? What if, among the hundreds, thousands, millions of “young” produced but almost immediately destroyed or consumed by chance, by random events, by the natural changes and disasters that occur again and again, the genetic potential “to be most fit” had been eliminated, depriving the species of potentially even “better” adaptations than those we see? We have to ask: which individuals are “fittest” for UNKNOWN challenges that have not yet occurred? Where is the variation that may be acted upon by the changing environment?

This is a problem of human perception; of anthropomorphic projection, of the unfailing insistence of belief in an intentional universe. Whatever “happens” is the fulfilment of a plan; evolution is distorted to “fit” the human conceit, that by one’s own superior DNA, survival and reproduction necessarily become fact. 

Human ankles (and many other details of human physiology) are not “great feats of evolutionary engineering.”

Like those two-rut roads that are ubiquitous where I live, chance predicts that most of evolution’s organisms “go nowhere” but do constitute quick and easy energy sources for a multitude of other organisms.


Genealogy of Religion / Cris Campbell

Cris Campbell holds advanced degrees in anthropology, philosophy, and law. This (WordPress) blog is his research database and idea playspace. (The most recent post seems to be from 2015, but there is plenty to explore.)


Why “Hunter-Gatherers and Religion”?

Anyone who surveys the “religious” beliefs of hunter-gatherers (or foragers) will almost immediately discover that many of them do not have a word that translates as “religion” and do not understand the Western concept of “religion,” as explained to them by ethnographers and others.  Anyone who engages in such a survey will also soon discover that hunter-gatherers have a dazzling and sometimes bewildering array of beliefs related to the cosmos, creation, spirits, gods, and the supernatural.  Within a single group, these beliefs may be different and contradictory from individual to individual; the beliefs are often fluid and change considerably over time.  When comparing groups, the details — at least on the surface — seem to be so different that nothing general can be said about foragers on the one hand and their beliefs on the other hand.  Despite this variety, one can identify certain common themes, motifs and tropes that are characteristic of hunter-gatherer metaphysics.  These include:

  • A generalized belief in higher powers, which may be gods, spirits, or other forces; (I would modify this to account for visual thinkers, who do not make abstract “things”)
  • A spiritualized reverence for nature and everything of nature; (what does ‘spiritualized’ entail? This is one of those Weasel Words that is never defined)
  • A cosmology oriented horizontally rather than vertically; “egalitarian”
  • A cyclic notion of time and perpetual renewal (or non-time, i.e., “living in the present”); and
  • A belief array that includes animism, ritualism, totemism and shamanism. (these are all “western” inventions. The people supposedly practicing these “religions” may not see any difference or separation between these categorizations and behaviors of everyday life. There are atheist hunter-gatherers)

Because humans have been foragers for the vast majority of their time on earth, understanding the supernatural beliefs and practices of hunter-gatherers is essential to any genealogy of religion.  This Category will examine those beliefs as part of a larger effort to trace the history of religion.

How ironic! It is modern social humans who are trapped in a supernatural dimension created by “magic words.”

Overview of personality “theories” + History of “personality concept” / Yikes!

https://www.simplypsychology.org/personality-theories.html

Without any attempt at addressing this enormously complex problem as a whole, it may be worthwhile to recall one recent example of interdisciplinary discussion occurring at the intersection of empirical psychology and normative ethics: a discussion of virtuous character. The latter, a paradigmatic subject matter of virtue ethics at least since Socrates, has recently been reconsidered in the light of experimental results obtained by academic psychology. More specifically, it has been related to the criticism of the concept of personality voiced mostly by social psychologists.

The conceptual and theoretical core of personality psychology, both in its scientific and folk versions (Gilbert & Malone, 1995; Ross, 1977), has usually been constructed around the notion of temporally stable and cross-situationally consistent features: so-called global or robust traits. A recent empirical tradition of situationism, however, seems to provide ample evidence not only for the fact that we are all indeed “prone to dispositionism” of this kind, but also that such “dispositionism is false” (Goldie, 2004, p. 63). The researchers from this tradition deny that there are stable and consistent traits or, alternatively, insist that most actual people don’t exhibit traits of this kind. Rather, the large body of empirical evidence provided (among the research most commonly discussed by virtue ethicists is that by Darley & Batson, 1973; Isen & Levin, 1972; Milgram, 1963; for a more complete review see Doris, 2002) shows that it is the situation in which an agent finds him/herself acting, rather than an allegedly context-independent and stable personality, that accounts for a large amount of human behavior.

The experiments conducted by social psychologists were soon generalized into doubts concerning the usefulness of trait concepts for the purposes of scientific explanation and prediction. Understood in such a context, in turn, they attracted the attention of many philosophers. The empirical results mentioned above could, indeed, have been disquieting, especially if one realized that the very center of traditional philosophical moral psychology, especially within so-called virtue ethics, had been founded on the notion of moral character with virtues and vices aspiring to exactly the same stability and cross-situational consistency that was undermined in the case of personality. Among the philosophers it was especially Gilbert Harman (1999, 2000) and John Doris (1998, 2002) who stimulated a fierce debate by stating that the situationist literature posed a grave threat to “globalist moral psychologies” (Doris & Stich, 2014) and undermined the very basis of both ancient and contemporary virtue ethics.

Such a far-reaching claim, obviously, provoked a strong response (for useful reviews see Alfano, 2013; Appiah, 2008; Goldie, 2004; Miller, 2013a). What seems to have been assumed by at least many disputants from both sides of the controversy, however, was a relatively direct applicability of psychological theses concerning personality to philosophical issues referring to character. In brief, it was the interchangeability of the notions of personality and character that had been presumed. Despite the fact that such an implicit assumption has been often made, these two notions are not identical. True, they are often used interchangeably and the difference is vague, if not obscure. Still, however, the notions in question can be distinguished from each other and the effort to draw the distinction is arguably worthwhile because of the latter’s bearing on many particular issues, including the above discussion of situationism.

One possible way of exploring the difference between these two concepts is to compare the typical, or paradigmatic, ways of their application as revealed in their respective original domains. Common language is obviously not very helpful here, as it exhibits the very same confusion that is intended to be clarified. Rather, the context of classical virtue ethics (for character) as well as that of academic personality psychology (for personality) is promising. Such a general clue will be used in the following sections. At first, the concepts of character and personality will be investigated both historically and systematically. Then, in turn, a parallel will be drawn between the pair in question and the so-called fact–value distinction, and an analysis of the functions played by both concepts will be conducted. Finally, the outcomes achieved will be placed in the context of some differences between the fact–value distinction and the Humean is–ought dichotomy.

Historical vicissitudes of the notions

In antiquity the notion of character was inseparably connected with the normative aspect of human conduct and in most contexts amounted to moral qualities of a respective person: to virtues and vices. Such a connection was emphasized in a term alternative to “character”: the Greek word “êthos” (cf. Gill, 1983, p. 472). An evaluative discourse of character can be found in common language and folk psychology (cf. Irwin, 1996), but it is its professional version proper to virtue ethics that is crucial in the present context. The latter philosophical tradition took on its classical form in Socrates and culminated with Aristotle’s (trans. 2000) Nicomachean Ethics, which to this day is a paradigmatic example of a virtue ethical account.

Ancient conceptions of character were descriptive and normative with both these features closely intertwined. They involved theories of moral and developmental psychology and, at the same time, a prescription and a detailed instruction of character education and character self-cultivation. And it was, importantly, a ‘life-long learning’ account that was provided: it was a rational adult, rather than a child deprived of genuine rationality, who was regarded by Cicero, Seneca, or Plutarch as able to accomplish “character formation through reasoned reflection and decision” (Gill, 1983, p. 470). The standards for the success of such a process were usually considered objective. In the Aristotelian context, for instance, it was the ability to properly perform human natural functions that provided the ultimate criterion.

The ancient Greek and Roman concept of character turned out to be profoundly influential in the following ages at least, as has been mentioned, until the beginnings of the previous century (for part of the story, see MacIntyre, 2013). Some of the variations on this ancient notion can be found in the Kantian ideal of the ethical personality, the German tradition of Bildung, the 19th-century American model of the balanced character and, last but not least, the Victorian vision of the virtuous character very vivid in the novels from this cultural milieu (Woolfolk, 2002). What is remarkable is that the notion of character, as influential as it used to be, is considerably less important today. Nowadays, in fact, it seems to be mostly substituted by the concept of personality. And it is the history of the process that led to this state of affairs, of the shift “from a language of ‘character’ to a language of ‘personality’” (Nicholson, 1998, p. 52), that can be very revealing in the present context. Two particularly helpful accounts have been provided by Danziger (1990, 1997) and Brinkmann (2010).

Danziger begins his account with an important remark that initially the notion of personality carried meanings which were not psychological, but theological, legal, or ethical. It was only as a result of considerable evolution that it “ended up as a psychological category.” The first important dimension of the process of its coming “down to earth” (1997, p. 124) was medicalization. Danziger places the latter in 19th-century France, where medical professionals were as skeptical about the earlier theologically or philosophically laden versions of the notion as they were enthusiastic about the promises of its naturalization. It was as a result of their reconceptualization that “personality” began to refer to “a quasi-medical entity subject to disease, disorder and symptomatology” (1997, p. 131). The term understood as such won its place within medical discourse and soon, in 1885, it became possible for Théodule Ribot to publish The Diseases of the Personality without a risk of conceptual confusion. An evolution began which would later lead to the inclusion of the personality disorders into the DSM (cf. Brinkmann, 2010, p. 73).

Among the descendants of the medicalization it is arguably the mental hygiene movement, “an ideological component” (Danziger, 1990, p. 163) of the rise of contemporary research on personality, that was most important at that time. On the basis of the belief that it is an individual maladjustment rooted in early development that is responsible for all kinds of social and interpersonal problems, “a powerful and well-founded social movement” (p. 164) directed at the therapy of the maladjusted as well as at the preventive efforts addressed to the potentially maladjusted (which could include everybody), was initiated. The notion of personality, as noted by Danziger, “played a central role in the ideology” (p. 164) of this movement. More particularly, it was the “personality” of individuals addressed by the latter which was recognized as “the site where the seeds of future individual and social problems were sown and germinated” (Danziger, 1997, p. 127) and, accordingly, established as an object of intervention.

Personality understood as such needed to be scientifically measured on the dimension of its adaptation/maladaptation, and it was at this point that psychologists from the Galtonian tradition of individual differences and mental testing arrived on the scene. In fact, it could easily seem that no one was better equipped than those researchers to perform the task set by the mental hygiene movement and to provide the latter’s ideology with a technical background. At roughly the same time, i.e., after World War I, mental testing confined to cognitive abilities or intelligence turned out to be insufficient not only as a means of occupational selection but also for its originally intended application, i.e., as a predictor of school success. In effect, there was an increasing recognition of the need for measurement techniques for non-intellectual mental qualities.

And such techniques were indeed soon developed using the very same methodological assumptions that had been previously applied to cognitive abilities. Paper-and-pencil questionnaires measuring non-cognitive individual differences “began to proliferate” (Danziger, 1990, p. 159). Simultaneously, a new field of psychological investigation, “something that could sail under the flag of science” (p. 163), began to emerge. Only one more thing was lacking: a label, a name for the new sub-discipline and its subject matter.

The “shortlisted” candidates included the notions of temperament, character, and personality. The first was rejected due to its then associations with physiological reductionism. Why not “character,” then? Well, that notion in turn was considered inappropriate due to its association with the concept of will, an “anathema to scientifically minded American psychologists” (Danziger, 1997, p. 126), and its generally normative connotations. The third candidate, “personality,” as a result, came to the fore.

Not only was it devoid of an unwelcome moralistic background and already popularized by the mental hygiene movement, it also offered a realistic prospect of quantitative empirical research. Already adopted by scientific medicine and understood along the lines of Ribot as an “associated whole” (un tout de coalition) of a variety of forces, personality, rather than holistic character, was a much more promising object for the post-Galtonian methodology (Danziger, 1997, p. 127; cf. Brinkmann, 2010, p. 74). Soon, the newly emerging field “developed loftier ambitions” (Danziger, 1997, p. 128) and became a well-established part of academic psychology, with its flagship project of discovering basic, independent, and universal personality-related qualities: the traits. And it is actually this tradition that is more or less continued today, with the Big Five model being a default perspective.

Note: I would add that the moralistic social “tradition” did not disappear from “personality theory” – psychology remains a socio-religious, “prescriptive and rigid” conception of human behavior, despite the effort to construct “something that could sail under the flag of science.”

For the establishment of personality rather than character as a subject matter of the new psychological science, Gordon W. Allport’s importance can hardly be overestimated (Allport, 1921, 1927; cf. Nicholson, 1998). Following an earlier proposal by John B. Watson, Allport drew an explicit distinction between normatively neutral personality, “personality devaluated,” and character as “personality evaluated” (Allport, 1927, p. 285). Personality and character, crucially, were regarded by him as conceptually independent. The former, in particular, could be intelligibly grasped without reference to the latter: “There are no ‘moral traits’ until trends in personality are evaluated” (p. 285). Accordingly, evaluation was considered additional and only accidental. As such it was regarded as both relative and connected with inevitable uncertainty (for the cultural background and metaethical position of emotivism lying behind such an approach see MacIntyre, 2013).

The point which is crucial here is that the recognition of the normative element of the character concept led to its virtual banishment. While listing “basic requirements in procedures for investigating personality,” Allport (1927, p. 292) was quite explicit in enumerating “the exclusion of objective evaluation (character judgments) from purely psychological method.” Those psychologists who accept his perspective “have no right, strictly speaking, to include character study in the province of psychology” (Allport, 1921, p. 443).

The transition from the notion of character to that of personality was a very complex process which reflected some substantial changes in the cultural and social milieu. Some insightful remarks about the nature of the latter have been provided by Brinkmann’s (2010) account of the shift between the premodern “culture of character” and the essentially modern “culture of personality.” This shift, importantly, was not only a “linguistic trifle.” Rather, it was strictly associated with “the development of a new kind of American self” (Nicholson, 1998, p. 52).

A culture of character, to begin with, was essentially connected with moral and religious perspectives, which provided the specification of human télos. And it was in relation to the latter that the pursuit of moral character was placed. In the paradigmatic Aristotelian account, for instance, the notion of the virtuous character was essentially functional in the same way in which the concept of a good watch is (MacIntyre, 2013). The criteria of success and failure, accordingly, were defined in terms of one’s ability to perform the natural functions of the human creature. And the latter were not “something for individuals to subjectively decide” (Brinkmann, 2010, p. 70). Rather, they were predetermined by a broader cosmic order of naturalistic or theological bent.

The goal of adjusting one’s character to suit the requirements of human nature was institutionalized in social practices of moral education and character formation. According to Brinkmann, it was especially moral treatment or moral therapy that embodied the default approach “to the formation and correction of human subjects” (2010, p. 71). This endeavor was subsequently carried on in the very same spirit, though in an essentially different cultural milieu, by William Tuke and Philippe Pinel, and it was only with Sigmund Freud that a new form of therapy, properly modern and deprived of an explicit normative background, emerged.

Note: And yet, in American psychology, it is precisely this “imaginary normal” that continues to be the default assumption against which pathology and defect are assigned.

The ancient virtue ethical approach embodied in a culture of character was taken over by the Middle Ages, with an emphasis shifted considerably towards theological accounts of human goals. A thoroughly new perspective proper to a culture of personality appeared much later with the emergence of the scientific revolution, which seriously undermined the belief in objective normative order. The earlier cosmic frameworks started to be supplanted by psychological perspectives with romanticism and modernism being, according to Brinkmann (2010, p. 72), two forces behind them.

One of the main running threads of romanticism is the idea that “each human being has a unique personality that must be expressed as fully as possible” (Brinkmann, 2010, p. 73). Before romanticism, the final purpose had been specified in terms external to a particular individual. It was related to generic norms of humanity as such or to those determined by God. (Today, “generic norms” are determined by a “new” God: the psych industry.) Now the goal to be pursued started to be understood as properly individual and unique.

Note: I don’t think that Americans understand how pervasively “the shift” away from the individual as “a unique personality that must be expressed as fully as possible,” and toward a totalitarian demand for conformity dictated by a “new religious” tide of psycho-social tyranny, was accomplished in a few decades. It is not surprising that Liberalism is every bit as religious as the Christian Right in its goal to “restore” the extreme religious aims (and hatred of humanity) of Colonial America; a continuation of the religious wars that raged in Europe for centuries.

This difference is evident when one compares Augustine’s and Rousseau’s confessional writings. The former “tells the story of a man’s journey towards God,” whereas the latter “is about a man’s journey towards himself, towards an expression of his own personality” (Brinkmann, 2010, p. 73). (Not allowed anymore!)

The demand for the “journey towards himself” can be connected with a disenchantment of the world, which had left an individual in a universe devoid of meaning and value. If not discovered in the world, the latter needed to be invented by humans. One had to “turn inwards” in order to find the purpose of life and this entailed, significantly, the rejection of external and social forces as potentially corrupting the genuine inborn self. The idea of “an individual in relative isolation from larger social and cosmological contexts” began to prosper and it “paved the way for the modern preoccupation with personality” (Brinkmann, 2010, pp. 67, 73) defined in fully atomistic or non-relational terms.

The second major force behind a culture of personality was modernism, which, in alliance with the modern idea of science, entailed an “ambition of knowing, measuring [emphasis added], and possibly improving [emphasis added] the properties of individuals” (Brinkmann, 2010, p. 73), which proved to have a considerable bearing on the newly emerging notion of personality. The latter concept had been deeply influenced by the logic of standardization and quantification characteristic of the whole of modernity; not only of its industry, but also of education, bureaucracy, and the prevailing ways of thinking. This logic found its direct counterpart in trait-based thinking about personality with the idea that the latter can “be measured with reference to fixed parameters” and that individuals “vary within the parameters, but the units of measurement are universal” (Brinkmann, 2010, p. 75). (This assumption that “opinions that arise from a social agenda” can be quantified is disastrous.)

The romantic and modernist branches of a culture of personality, for all their differences of emphasis, were connected by a common atomistic account of the self and a plea for the development of the unique qualities of the individual. And it is this “core element” of their influence which is still in place today, even though some authors, Brinkmann included, have announced the appearance of a new cultural formation, a culture of identity.

The character–personality distinction

The relationship between the two notions in question can be elucidated by, first, indicating their common features (genus proximum) and, then, by specifying the ways in which they differ from each other (differentia specifica). As far as the former is concerned, both “character” and “personality” can be regarded as constructs belonging to the discourse of individual differences. Both notions are analyzable, even if not reductively analyzable, in terms of some lower-level terms such as virtues and vices or, respectively, traits. These lower-level concepts are usually understood as dispositional. A personality trait, for instance, can be defined as a “disposition to form beliefs and/or desires of a certain sort and (in many cases) to act in a certain way, when in conditions relevant to that disposition” (Miller, 2013a, p. 6). The higher-level notions of character and personality, accordingly, are also dispositional.

The formal features indicated above are common to the notions of character and personality. And it is on the basis of this “common denominator” that one can attempt to clarify the difference between them. A good place to begin is a brief remark made by Goldie (2004), who claimed that “character traits are, in some sense, deeper than personality traits, and … are concerned with a person’s moral worth” (p. 27). It is a dimension of depth and morality, then, which can provide one with a useful clue. (Note that both “traits” and moral rules are subjective, culturally defined, and NOT quantifiable objects: that is, this remains a religious discussion.)

As far as the depth of the notion of character is concerned, the concept of personality is often associated with a considerable superficiality and the shallowness of mere appearances (Goldie, 2004, pp. 4–5; Kristjansson, 2010, p. 27). The fact that people care about character, accordingly, is often connected with their attempt to go beyond the “surface,” beyond “the mask or veneer of mere personality” (Goldie, 2004, p. 50; cf. Gaita, 1998, pp. 101–102). Even the very etymology of the term “personality” suggests superficiality by its relation to the Latin concept of persona: “a mask of the kind that used to be worn by actors.” Character as deeper “emerges when the mask is removed” (Goldie, 2004, p. 13; cf. the Jungian meaning of persona).

The reference to the depth of character, as helpful as it may be, is certainly insufficient due to its purely formal nature. What still remains to be determined is the substantive issue of the dimension on which character is deeper than personality. As far as Goldie’s distinction is concerned, such a specification is provided in what follows: “someone’s personality traits are only good [emphasis added] conditionally upon that person also having good character traits … On the other hand, the converse isn’t true: the goodness [emphasis added] of someone’s character trait is not good [emphasis added] conditionally on his having good personality traits” (2004, p. 32). It is depth referring to the ethical dimension, then, which distinguishes character from personality. One’s virtue of honesty, for instance, can still be valued even if the person in question is extremely shy (an introvert, as the psychologist would say). (Both introversion and “honesty” are labeled symptoms of “developmental disorder” in the ASD / Asperger diagnosis)

It does not work the other way around, though. An outgoing and charming personality, when connected with considerably bad character, is in a sense polluted. A criminal who is charming can be even more dangerous, because he/she can use the charm for wicked purposes. Such a difference, importantly, should not be taken as implying that personality cannot be evaluated at all. It can, with the reservation that such an evaluation will be made in terms of non-moral criteria or preferences. An extraverted person, for instance, can still be considered a “better” or more preferable candidate for the position of talk show host (cf. Goldie, 2004, p. 47; McKinnon, 1999, pp. 61–62).

The above-given specification of the distinction can be enriched by some remarks by Gill (1983, p. 470), who notices that “character” and “personality” are not only distinguishable as two concepts but also as “two perspectives on human psychology” for which they are, respectively, central. The character-viewpoint, to begin with, “presents the world as one of … performers of deliberate actions” (Gill, 1986, p. 271). Human individuals, in particular, are considered as more or less rational and internally consistent moral agents possessing stable dispositions (virtues and vices) and performing actions which are susceptible to moral evaluation and responsibility ascription. The evaluation of their acts, importantly, is believed to be objective: to be made along the lines of some definite “human or divine standards” (p. 271). No “special account,” accordingly, is taken “of the particular point of view or perspective of the individuals concerned” (Gill, 1990, p. 4).

The personality-viewpoint, on the other hand, is not associated with any explicitly normative framework. Rather, it is colored by “the sense that we see things ‘as they really are’ … and people, as they really are” (Gill, 1986, p. 271). The purposes are psychological, rather than evaluative: to understand, empathize with, or to explain. Also the default view of the individuals in question is considerably shifted. Their personality is recognized as being “of interest in its own right” (Gill, 1983, p. 472) and their agency as considerably weakened: “The person is not typically regarded as a self-determining agent,” but rather as a “relatively passive” (p. 471) individual often at the mercy of the forces acting beyond conscious choice and intention. The unpredictability and irrationality entailed by such a view is substantial.

To sum up the points made above, it may be said that while both “character” and “personality” belong to the discourse of individual differences, only the former is involved in the normative discourse of person’s moral worth and responsibility. The thesis that the notion of character, but not that of personality, belongs to the discourse of responsibility should be taken here as conceptual. What is claimed, in particular, is that linguistic schemes involving the former notion usually involve the notion of responsibility as well and allow us to meaningfully hold somebody responsible for his/her character. Language games involving both concepts, in other words, make it a permissible, and actually quite a common, “move” to be made. Whether and, if yes, under what circumstances such a “move” is metaphysically and ethically justified is a logically separate issue, which won’t be addressed here.

In those accounts in which the connection between character and responsibility is considered stronger, i.e., as making responsibility claims not only conceptually possible but also justified, a separate account of responsibility is needed (e.g., Miller, 2013a, p. 13). One possible ground on which such an account can be developed is the relationship between character and reasons (as opposed to mere causes). Goldie (2004), for instance, emphasizes the reason-responsiveness of character traits: the fact that they are dispositions “to respond to certain kind of reasons” (p. 43). Actually, he even defines a virtue as “a trait that is reliably responsive to good reasons, to reasons that reveal values” (p. 43, emphasis removed; cf. the definition by Miller, 2013b, p. 24). A vice, accordingly, would be a disposition responsive to bad reasons.

Whether all personality traits are devoid of reason-responsiveness is not altogether clear (cf. Goldie, 2004, p. 13). For the notion of personality proper to academic psychology the answer would probably depend on a particular theoretical model employed. There would be a substantial difference, for instance, between personality understood, along the behavioristic lines, as a disposition to behavior and more full-fledged accounts allowing emotional and, especially, cognitive dispositions. What seems to be clear is the importance of reason-responsiveness for character traits.

The fact–value distinction is usually derived from some remarks in David Hume’s (1738/2014, p. 302) Treatise of Human Nature, in which the idea of the logical distinctiveness of the language of description (is) and the one of evaluation (ought) was expressed. A relatively concise passage by Hume soon became very influential and gave birth not only to a distinction, but actually to a strict dichotomy between facts and values (cf. Putnam, 2002). A methodological prescription “that no valid argument can move from entirely factual premises to any moral or evaluative conclusion” (MacIntyre, 2013, p. 67) was its direct consequence.

In order to refer the above dichotomy to the notions of character and personality, it may be helpful to remember Allport’s (1921) idea of character being “the personality evaluated according to prevailing standards of conduct” (p. 443). A crucial point to be made here is that the act of evaluation is considered as an addition of a new element to an earlier phenomenon of personality, which can be comprehended without any reference to normativeness. The latter notion, in other words, is itself morally neutral: “There are no ‘moral traits’ until trends in personality are evaluated” (Allport, 1927, p. 285).

The thesis that personality can be specified independently of character or, more generally, without any application of normative terms, is of considerable importance because it illustrates the fact that the character–personality distinction logically implies the fact–value one. The validity and the strictness of the former, in consequence, rely on the same features of the latter. Character and personality, in brief, can be separated only as long as it is possible to isolate personality-related facts from character-related values.

Such dependence should necessarily be referred to contemporary criticism of the fact–value distinction (e.g., MacIntyre, 2013; Putnam, 2002; cf. Brinkmann, 2005, 2009; Davydova & Sharrock, 2003). This criticism has been voiced from different perspectives and involves at least several logically distinct claims. For the present purposes, however, it is an argument appealing to so-called thick ethical concepts and the fact–value entanglement that is of most direct significance.

The distinction between thick and thin ethical concepts was first introduced (in writing) by Bernard Williams (1985/2006) and subsequently subjected to intense discussion (for useful introductions see Kirchin, 2013; Roberts, 2013; applications for moral psychology can be found in Fitzgerald & Goldie, 2012). What is common to both kinds of concepts is that they are evaluative: they “indicate some pro or con evaluation” (Kirchin, 2013, p. 5). Thick concepts, furthermore, are supposed to provide some information about the object to which they refer (information which thin concepts do not provide). They have, in other words, “both evaluative conceptual content … and descriptive conceptual content … are both evaluative and descriptive” (Kirchin, 2013, pp. 1–2). If I inform somebody, for instance, that person A is good and person B is courageous, it is obvious that my evaluation of both A and B is positive. At the same time, however, the person informed doesn’t seem to know much about a good (thin concept) person A, whereas he/she knows quite a bit about a courageous (thick concept) person B.

The significance of thick concepts for philosophical discussion is usually connected with some “various distinctive powers” they supposedly possess. More specifically, when they are interpreted along the lines of the so-called non-reductive view they seem to have “the power to undermine the distinction between fact and value” (Roberts, 2013, p. 677). The non-reductive position is usually introduced as a criticism of the reductive idea that thick concepts “can be split into separable and independently intelligible elements” (Kirchin, 2013, p. 8; cf. the idea of dividing character into two parts mentioned above) or, more specifically, explained away as a combination of (supposedly pure) description and thin evaluation. If such a reduction were successful, thick concepts would turn out to be derivative and lacking philosophical importance.

Many authors, however, including notably Williams (1985/2006), McDowell (1981), and Putnam (2002), claim that no such reductive analysis can be conducted due to the fact–value entanglement characteristic of thick concepts. The latter, as is argued, are not only simultaneously descriptive and evaluative, but also “seem to express a union of fact and value” (Williams, 1985/2006, p. 129). The fact–value entanglement proper to thick concepts becomes apparent if one realizes that any attempt to provide a set of purely descriptive rules governing their application seems to be a hopeless endeavor. One cannot, for instance, develop a list of necessary and jointly sufficient factual criteria of cruelty. It is obviously possible “to describe the pure physical movements of a torturer without including the moral qualities” (Brinkmann, 2005, p. 759), but it would yield a specification which comes dangerously close to the description of some, especially unsuccessful, surgical operations. In order to convey the meaning of the word “cruelty” (and to differentiate it from the phrase “pain-inflicting”) one needs to refer to values and reasons (rather than facts and causes only). An evaluative perspective from which particular actions are recognized as cruel, accordingly, must be at least imaginatively taken in order to grasp the rationale for applying the term in some cases, but not in others. Communication using thick concepts, as a result, turns out to be value-laden through and through.

The above-given features assigned to thick concepts by the non-reductionists are crucial due to the fact that they cannot be accounted for within the framework of the fact–value distinction. As such they are often believed to “wreak havoc” (Roberts, 2013, p. 678) with the latter or, more precisely, to undermine “the whole idea of an omnipresent and all-important gulf between value judgments and so-called statements of fact” (Putnam, 2002, p. 8).

The undermining of the sharp and universal dichotomy between facts and values has a very direct bearing on the character–personality distinction being, as emphasized above, dependent on the former. A crucial point has been made by Brinkmann who noticed that almost “all our words used to describe human action are thick ethical concepts” (2005, p. 759; cf. Fitzgerald & Goldie, 2012, p. 220). And the same applies to the language of character which, contrary to Allport’s expectations, cannot be neatly separated into the factual core of personality and the normative addition. The distinction between the notions of character and personality, in consequence, even though often applicable and helpful, cannot be inflated into a sharp dichotomy.

Having analyzed the reliance of the character–personality distinction on the dichotomy between value and fact, we can now carry out the second detailed investigation, devoted to the functions played by the two concepts scrutinized. A good starting point for this exploration may be a remark made by Goldie (2004) who, while discussing the omnipresence of the discourse of personality and character, noticed that it is “everywhere largely because it serves a purpose: or rather, because it serves several purposes [emphasis added]” (p. 3). These functions merit some closer attention because they can help to further specify the difference between the concepts investigated.

The purposes served by the discourse of individual differences have been briefly summarized by the abovementioned author when he said that we use it “to describe people, to judge them, to enable us to predict what they will think, feel and do, and to enable us to explain their thoughts, feelings and actions” (and to control, manipulate and abuse them) (Goldie, 2004, pp. 3–4; cf. an analogous list provided by Miller, 2013b, pp. 12–13). Some of these functions are common to the notions of character and personality. Some others, however, are proper to the concept of character only.

The first of the common functions is description. The language of character and personality can serve as a kind of shorthand for the longer accounts of the actions taken. When asked about the performance of a new employee, for instance, a shift manager can say that he/she is certainly efficient and hard-working (rather than mention all particular tasks that have been handled). Similarly, if we say that A is neurotic, B is extraverted, C is just, and D is cruel, we do convey some non-trivial information about A, B, C, and D, respectively (even though our utterances may include something more than a mere description).

The second of the purposes that can be served by both concepts is prediction. We may anticipate, for example, that neurotic A will experience anxiety in new social situations. Despite the fact that such a prediction will be inevitably imprecise and fallible, it does enable us to narrow down “the range of possible choices and actions” (Goldie, 2004, p. 67) we can expect from a particular agent.

In fact, predictions regarding human behavior are notoriously inaccurate “guesses” – note the inability of the Psych Industry to identify mass shooters before they act.

The notions of character and personality, furthermore, can be employed as a means of judgment. At this point, however, an important qualification needs to be made. If this function is to be assigned to both concepts it can be understood only in a weak sense of judging as providing an instrumental assessment. The ascription of personality traits of neuroticism and extraversion to A and B, respectively, can be used to say that A would not make a good candidate for an assertiveness coach, whereas B may deserve a try in team work. It falls short, however, of any moral judgment, which can be made only by means of character-related notions.

The concepts of personality and character, finally, can both be used to provide explanation. We can, for instance, say that C was chosen as a team leader because he/she is just and expected to deal fairly with potential conflicts. Having assigned an explanatory role to “character” and “personality,” however, one should necessarily remain fully aware of the experimental results reported in the first section. An appeal to “character” and “personality” as explanatory constructs does not have to mean that they provide the whole explanation. Situational factors still count and, as a matter of fact, one may need to acknowledge that in a “great many cases … [they] will be nearly the entire explanation” (Kupperman, 1991, p. 59).

One other reservation concerns the kind of explanation conveyed by the personality- or character-trait ascription. Human behavior, in particular, can be explained in at least two distinct ways (e.g., Gill, 1990, p. 4). Explanation, to begin with, can be made along the lines of the deductive-nomological model and refer to causes and natural laws. In such cases it is not substantially different from explanations of natural facts (like an earthquake) offered by the sciences. And it is this kind of explanation that is provided when non-reason-responsive features of personality are appealed to (cf. Goldie, 2004, p. 66).

Human action, however, can be also made comprehensible by the reference to reasons behind it. If we know what a person in question “values or cares for,” in particular, we can “make sense [emphasis added] of the action, or make the action intelligible, understandable or rational [emphasis added]” (Goldie, 2004, p. 65). Such an “explanation” can be given by the indication of only those traits, which are reason-responsive and, strictly speaking, is much closer to Dilthey’s (1894/2010) understanding (Verstehen) than to naturalistically understood explanation (Erklären).

The functions of description, prediction, instrumental assessment, and explanation (at least as far as the latter is understood in terms of causes) are common to both concepts of “personality” and “character.” The latter notion, however, can serve some additional purposes, which give it a kind of functional autonomy. Among the character-specific functions, to begin with, there is moral judgment. When we say that C is just and D is cruel we don’t make an instrumental and task-relative assessment. Rather, we simply evaluate that C is a morally better person than D (other things being equal). With this, the function of imposing moral responsibility is often connected. The issue of the validity of such an imposition is very complex and controversial. Still, it does remain a discursive fact that the claim that D is cruel is usually associated with holding D, at least to some extent, responsible for his/her cruelty.

Note that “pathological,” “disordered,” “mentally ill,” and “socially defective” are labels every bit as “moral / judgmental” in the “real social environment” as “sinful,” “perverted,” “possessed by demons,” “Godless atheist,” or “agent of Satan.”

The functions of moral judgment and moral responsibility ascription are not typically served by the scientific notion of personality. They may, however, become formally similar to description, explanation, and prediction if they are, as is often the case, applied within mostly third-personal language (as judging others and imposing responsibility on others). Apart from these functions, however, the notion of character can fulfill some essentially first-personal kind of purposes. And it is the latter that seems to be its most specific feature.

Among the first-personal functions of “character,” identification is fundamental, both psychologically and conceptually. When a person identifies with a character trait or, more holistically, with a complete character ideal, she begins to consider such a trait or character as a part of her identity (cf. Goldie, 2004, pp. 69–70): as something she “decides to be or, at least, to see herself as being” (Kupperman, 1991, p. 50). Such an identification, if serious, is very rich in consequences: it establishes “the experienced structure of the world of direct experience as a field of reasons, demands, invitations, threats, promises, opportunities, and so on” (Webber, 2013, p. 240) and helps one to achieve a narrative unity of one’s life (cf. Goldie, 2004; Kupperman, 1991; McKinnon, 1999).

First-personal functions of the character notion, additionally, enable the agent to undertake more specific self-formative acts such as evaluating oneself against the idealized self, structuring moral progress, or providing motivation needed to cope with the difficulties of moral development. The notion of character employed in such a way becomes a kind of an internalized regulative ideal with a considerable emotional, imaginative, and narrative dimension. Its specific purposes are self-evaluative, self-prescriptive, and self-creative (rather than descriptive, predictive, and explanatory). The criteria of its assessment, accordingly, should be at least partially independent from those proper to strictly scientific constructs.

The latter fact, as may be worthwhile to mention, has a direct bearing on the challenge of situationism mentioned at the beginning of these analyses. The arguments in favor of this disquieting position have typically referred to experiments indicating that situational variables possess much bigger explanatory and predictive value than those related to personality, and concluded that the usefulness of the personality concept needs to be seriously questioned. The doubts concerning the notion of character usually followed without further ado. No special attention, in particular, was paid to the assumption that the concepts of character and personality fulfill the same functions of description, explanation, and prediction. Accordingly, it was usually taken for granted that the failure of the latter concept automatically entails the uselessness of the former. As far as it is admitted that such an approach is at least partially erroneous, it may be worthwhile to refocus the debate towards the specific, first-personal, and normative functions of the notion of character. Do we need the latter to perform them and, if so, does this notion really serve us well, even though it is scientifically weak?

Some final remarks

An important clarification that needs to be made here, however, is that any skepticism concerning the fact–value dichotomy suggested by some features of thick concepts should not be conceived by psychologists as a call to develop a prescriptive and moralistic science of character and, thus, to become “like priests” (too late: this is where American Psychology stands today) (Charland, 2008, p. 16). A false impression that this is the case might result from conflating the full-fledged version of the fact–value distinction with the original, and relatively modest, Humean dictum that “no valid argument can move from entirely factual premises to any moral or evaluative conclusion” (MacIntyre, 2013, p. 67).

That it is the latter that most psychologists care about can be clearly seen in two recent papers by Kendler (1999, 2002), who issues a stern warning that any psychological project developed along the lines of what he calls “the enchanted science” and motivated by the belief that psychology itself can discover moral truths can lead not only to Gestalt psychology’s holism or humanistic psychology, but also to the quasi-scientific justification of “Nazi and Communist ideology” (1999, p. 828). And it is in order to prevent these kinds of abuses that Kendler (1999) refers to what he calls “the fact/value dichotomy” or “an unbridgeable chasm between fact and values” (p. 829). By this, however, he does not seem to mean anything more than that “empirical evidence can validate factual truth but not moral truth” (p. 829). An example he provides considers the possibility of obtaining reliable empirical data supporting the thesis that bilingual education is advantageous for ethnic identification, but disadvantageous for academic development. Such data, as he rightly insists, would still leave it to society to decide which value, ethnic identification or academic progress, should be given priority.

All of this, however, need not lead one to accept the fact–value dichotomy in the strong version that has been criticized by Putnam, McDowell, and others. Rather, the is–ought dichotomy seems to be sufficient. The subtle differences between these two distinctions have been clarified by Dodd and Stern-Gillet (1995), who argue that the Humean dictum is best understood as a general logical principle without any substantive metaphysical dimension of the kind usually connected with the fact–value dichotomy. That the is–ought gap is narrower and weaker is also illustrated by the fact that it is confined to "ought" statements, leaving aside a considerable range of other evaluative statements. The examples the authors provide are the aesthetic language of art and, importantly, the virtue-ethical discourse of character. Just as the ascription of beauty to a painting does not automatically entail any particular prescription,21 neither does the ascription of courage or foolishness to a person. Even though this feature of characterological language has often been regarded as a weakness in metaethical contexts, it can arguably be beneficial to all those psychologists who want to study the complexities of character without giving the impression that any particular normative position can be derived from purely scientific study. A substantial amount of normativity, as shown by the example of thick concepts, will obviously remain inevitable, but it is worth emphasizing that it is mostly placed before empirical research, as an evaluative framework taken from elsewhere and, thus, subject to criteria and authorities of a non-empirical nature.

This paper has been written during a visit to Oxford University’s Faculty of Philosophy. I am greatly indebted to Edward Harcourt for all his help and support.

Consciousness / A Damaged Word – plus other important terms

Language has a problem: words, even those meant to have specific definitions and uses, gather extra meanings once “let loose” in different environments, including academia, popular conversation, and ethnic, religious, and social groups. Words can become so degraded that they no longer have a specific (or even consistent) meaning and must be re-evaluated.

Conscious(ness) is one of those words.

Human beings are severe hoarders – any and every idea is saved, whether valid, nonsensical, or incomprehensible. Archaic ideas are held to be as true or accurate as modern knowledge. The result is that human thoughts, from the confused and valueless to the sublime and revolutionary, are a tangle of debris, like that of a tsunami that collects everything in its path. And now that we have the Internet, no one is cleaning up the clogged beaches.

Any discussion of “being conscious” must first define what “being conscious” is, but few writers bother to do this. I think that an individual animal (human) is either conscious or not. Qualifiers such as “partially conscious” or “levels of consciousness” demonstrate that we don’t have a clear definition or understanding of being conscious.

If we want to make progress in the study of human behavior, we must strip away the overburden of “supernatural and archaic” deposits that murkify the idea of a “conscious state.” There needs to be a valid intellectual scaffold on which to arrange concrete evidence. I don’t care how in love with psycho-babble our culture is, consciousness must be rooted in physical reality.

Humans not only hoard objects; we hoard ideas that clutter and devalue our thinking and serve no purpose other than screwing up our lives.

A short list of terms that I use in evaluating information:

Natural: Having a real or physical existence as opposed to one that is supernatural, spiritual, intellectual, or fictitious.

Supernatural: A being, object, location, concept or event that exists outside physical law: a dimension that exists solely in the human mind. 

Religion: The ritual presentation of the culture myth, including the "-isms": Patriotism, Consumerism, Nazism, Militarism, Capitalism, etc. (from Joseph Campbell)

Mind: The sum of an organism’s or group’s reactions to the environment. Instinct is the source of automatic reactions; other reactions may be learned. So-called “emotion” is a physiologic response to the environment and belongs to mind.

Culture: The sum of an organism’s or group’s interactions with the environment. These interactions may be instinctual, learned or invented.

Mind and culture are not exclusive to humans. Bacteria react to, and interact with, the environment.

These criteria for defining mind and culture remove the "supernatural" barrier between our species and what is referred to as "lower animals" or "the rest of life": plants, and all that "alien" stuff such as fungi, which react and interact with the environment in amazing ways and therefore possess mind and culture.

Consciousness is the use of verbal language to process and communicate information. (Not limited to other humans; we talk to anything alive or dead.)

This definition recognizes consciousness as a process; it is not a "thing" – not a bump on the brain nor a nebulous supernatural fog. This definition frees us to talk about the characteristics of human consciousness without having to project our type of verbal consciousness onto other life forms. It also recognizes nonverbal communication and the ALTERNATE states produced by using other languages – music / mathematics / visual-spatial, and other languages of which we are unaware. These other brain processes require new definitions and terms. Individuals whose primary communication is by means of mathematics / music surely experience brain states not available to concrete visual thinkers like me.

Conscious does not = self-aware. Animals such as apes and dolphins are self-aware, as demonstrated by the mirror trick, but as to what subjective state occurs when they use their languages, we are not in a position to know. Their languages surely convey information, but their subjective experience is outside our knowing.

The Psych Industry, Pop-Science and abstract thinking

How is the concept of abstract thinking used in “the helping, caring, fixing” industry, which claims to understand and describe THINKING as a human behavior?

Quotes from psych and other sources: (Very far removed from any “coherent” definition: ridiculous, actually.)

Abstract Thinking

“Abstract thinking describes thoughts that are symbolic and conceptual and not concrete or specific. Concrete thinking focuses on the present or here and now specificity (facts and specific objects exist temporarily, but thankfully for NTs, they vanish in a nanosecond) while abstract thinking is based on concepts, principles, and relationships between ideas and objects.”

“For example, a statement derived from concrete thinking would be “There are 3 dogs.” An abstract perspective could be thinking about numbers, different types of dogs, how some animals are pets, or how wolves and dogs are related. Young children are essentially just concrete thinkers – abstract thinking develops with age.” (Or doesn’t)

How about this gem?

1. Concrete thinking does not have any depth. It just refers to thinking in the periphery. On the other hand, abstract thinking goes under the surface.
2. Concrete thinking is just regarding the facts. On the other hand abstract thinking goes down below the facts.
3. Abstract thinking may be referred to the figurative description whereas the concrete thinker does not think so.
4. Unlike the concrete thinking, abstract thinking involves some mental process. (Unlike concrete thinking, which originates in the spleen)
5. A person with concrete thinking does not think beyond the facts. They do not have the ability to think beyond a certain limit. (The supernatural delusion that there is a magic “space” behind, above, outside reality, which contains, a priori, all the nonsense that the NT brain is capable of generating) 
6. When compared to concrete thinking, abstract thinking is about understanding the multiple meanings.
7. While abstract thinking is based on ideas, concrete thinking is based on what the person sees as well as the facts.

The following is by an Asperger. Note that concrete vs. abstract doesn't enter the picture; accurate use of language (and self-knowledge) is stressed, along with visual processing.

I feel that the whole empathy thing is an example of the danger of NT language. The concept is that autistics do not intuitively know what NTs are thinking and feeling and do not automatically share those thoughts and feelings with NTs. Same thing happens in the opposite direction. But NT language has turned this concept of empathy into the word “empathy” which has become {equivalent to} or more like {made an umbrella for} the word “compassion” and the phrase “caring about people” and the phrase “ability to love”, all of which words or phrases describe different concepts, but all the different concepts subsumed under this one word “empathy”, such that the simple concept of lacking empathy has come to mean also lacking compassion, caring about people, being able to love people. But in reality, each concept is like a different big giant chemical structure, but all these structures are being given the same verbal label by NTs, who see the world in lower resolution than autistics do and therefore habitually apply low-resolution verbal labels to cover all manner of distinct structures, or concepts.

In autistic language, this conflation would be harder to make, because instead of applying this generalized highly abstract verbal label “empathy”, autistics would just say, more explicitly and concretely, “I don’t know, automatically and instantly, what you are thinking and feeling, and I don’t share your thoughts and feelings, because the same stimuli generate different responses in me vs. you, so you’re going to have to explain your perspective for me to have a theoretical understanding of it (the catch here for NTs is that they may not be able to explain their behavior, thinking or “feelings” at all; may not understand their own “state of mind” because they have never thought it through! They have been “taught” all their lives that the “shallow social formulas” that they obey are the only possible and correct reactions.) … and I will explain mine to you afterwards, because guess what, I do want to know your perspective, because I do care about you and therefore want you to feel happy as much of the time as possible, and the first concept I talked about explicitly was what you call ’empathy’ and the second concept that I talked about explicitly was ‘caring about people’.”

From two psychology websites:

Jean Piaget uses the terms “concrete” and “formal” to describe the different types of learning. Concrete thinking involves facts and descriptions about everyday, tangible objects, while abstract (formal operational) thinking involves a mental process. 

Concrete idea / Abstract idea
• Dense things sink. / It will sink if its density is greater than the density of the fluid.
• You breathe in oxygen and breathe out carbon dioxide. / Gas exchange takes place between the air in the alveoli and the blood.
• Plants get water through their roots. / Water diffuses through the cell membrane of the root hair cells.

Would someone please explain to me how the phrases in the “right column” “involve a mental process” (and where does concrete thinking take place, in the feet? In a cabinet? On the moon?) when these are only more detailed descriptions of concrete objects doing concrete things?

These are abstract formulas.
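
Take the first row. The "concrete" and "abstract" versions state the same physical fact – Archimedes' principle – at two levels of precision. Written out as a formula (my restatement, not from the quoted source), a submerged object of volume V sinks when its weight exceeds the buoyant force:

$$\rho_{\text{object}}\, V g \;>\; \rho_{\text{fluid}}\, V g \quad\Longleftrightarrow\quad \rho_{\text{object}} \;>\; \rho_{\text{fluid}}$$

Nothing in that inequality lives outside the "here and now"; it describes concrete objects doing concrete things, with the generality simply made explicit.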

_______________________________________________________________________________________________

Abstract thinking is the ability to think about objects, principles, and ideas that are not physically present. (Where are they?) It is related to symbolic thinking, which uses the substitution of a symbol for an object or idea. (A dove means peace)

A variety of everyday behaviors constitute abstract thinking. These include:

  • Using metaphors and analogies;
  • Understanding relationships between verbal and non-verbal ideas;
  • Spatial reasoning and mentally manipulating and rotating objects;
  • Complex reasoning, such as using critical thinking, the scientific method, and other approaches to reasoning through problems.

How Does Abstract Reasoning Develop?
Developmental psychologist Jean Piaget argued that children develop abstract reasoning skills as part of their last stage of development, known as the formal operational stage. This stage occurs between the ages of 11 and 16. (Really? Or is this over-generalization?) Yes… However, the beginnings of abstract reasoning may be present earlier, and gifted children frequently develop abstract reasoning at an earlier age. Some psychologists have argued that the development of abstract reasoning is not a natural developmental stage. Rather, it is the product of culture, experience, and teaching.

Children’s stories frequently operate on two levels of reasoning: abstract and concrete. The concrete story, for example, might tell of a princess who married Prince Charming, while the abstract version of the story tells of the importance of virtue and working hard. (Is this really abstract thinking, or delivery of a “hidden” socio-cultural message?) While young children are often incapable of complex abstract reasoning, they frequently recognize the underlying lessons of these stories, indicating some degree of abstract reasoning skills. (Abstract reasoning leads to “getting the social message…”)

Abstract Reasoning and Intelligence
Abstract reasoning is a component of most intelligence tests. Skills such as mental object rotation, mathematics, higher-level language usage, and the application of concepts to particulars all require abstract reasoning skills. Learning disabilities can inhibit the development of abstract reasoning skills. People with severe intellectual disabilities may never develop abstract reasoning skills, and may take abstract concepts such as metaphors and analogies literally.

WOW! Here we have the CONFLATION of "abstract reasoning" (undefined) with intelligence (also undefined), which is limited to a "grab bag of skills" – from the visual manipulation of objects (is visual-spatial mental activity the same as abstract thinking, or is it sensory thinking?) to maths (much of which is follow-the-rules grunt-work) to "high-level language usage" (language use based on social judgement as dictated by the Top o' the Pyramid folks).

From a teacher resource center:

WHAT ARE CONCRETE AND ABSTRACT THINKING?

Abstract thinking is a level of thinking (thinking is a pyramid built of a hierarchy of types of thinking) about things that is removed from the facts of the “here and now” (facts only exist in the present? Bizarre!), and from specific examples of the things or concepts being thought about. Abstract thinkers are able to reflect on events and ideas, and on attributes and relationships separate from the objects that have those attributes or share those relationships. Thus, for example, a concrete thinker can think about this particular dog; a more abstract thinker can think about dogs in general. A concrete thinker can think about this dog on this rug; a more abstract thinker can think about spatial relations, like “on” (and a concrete thinker can’t use a preposition such as “on”? This is bizarre.) 

See: https://en.oxforddictionaries.com/grammar/word-classes-or-parts-of-speech

A concrete thinker can see that this ball is big; a more abstract thinker can think about size in general. A concrete thinker can count three cookies; a more abstract thinker can think about numbers. A concrete thinker can recognize that John likes Betty; a more abstract thinker can reflect on emotions, like affection. (So abstract thinking requires an activity called reflection? Definition?)

Another example of concrete thinking in young children is a two- or three-year-old who thinks that as long as he stays out of his bedroom, it will not be bedtime. In this case, the abstract concept of time (bedtime) is understood in terms of the more concrete concept of place (bedroom). The abstract idea of bedtime comes to mean the concrete idea of being in my bedroom.

Wow! I’ve noticed something very strange.  

In myriad examples of "supposed abstractions," the mistake is made of confusing time-independent abstractions, like the mass, density, and volume formulas above, with the crazy notion that abstractions do not occur, and do not apply, in the present – as if the present merely tolerates the very temporary existence of "facts." This is utterly "NT" bizarre; NTs fear facts. I suppose banishing them from the past and future makes facts less scary? Wait a second, and they will magically "go away"…

Another example that applies to two- and three-year-olds is the following. One of the favorite Dr. Seuss books is Green Eggs and Ham, which ends with the narrator changing his mind from rejecting green eggs and ham under any circumstances to trying them and actually liking them. At a concrete level of understanding, the story is about a stubborn person changing his mind. At a more abstract level of understanding, it is about people in general being capable of modifying their thoughts and desires even when they are convinced that they cannot or do not want to do so. This more abstract level of understanding can be appreciated by two- and three-year-old children only if the higher level of meaning comes out of a discussion of the book with a more mature adult. At older ages and higher levels of thinking, this same process of more mature thinkers facilitating higher levels of abstraction in less mature thinkers characterizes the process of teaching abstract thinking. For example, this is how great philosophers, like Socrates and Plato, taught their pupils how to think abstractly.

WOW!

So abstract thinking is "a higher level of thinking" – there goes most of applied science and engineering, most skills, most technology, most human creativity (making art and music, performing dance), and the innovation of any "concrete object" of value, all into the "trash bin" of low-level thinking.

It is suspicious that “abstract thinking” is represented as providing a higher level of meaning; in this context, higher level of meaning = “the social message of obedience” and abstract = hidden or deceptive.

Be a good girl or boy: Eat your eggs and ham, even if they are covered in green mold that will poison you.

I’ve given myself a headache, again…

Just what is the problem between Asperger types and Neurotypicals?

I’ve been posting for three years now on the bizarre insistence by neurotypicals that the very existence of Asperger types is an affront to "their species." I’ve also tried to convey how the myriad ridiculous, destructive, and irrational things that NTs "believe and do" drive us equally batty. The details of this stupid situation are mind-boggling and confounding, but there is one simple difference in motivation that lies at the bottom of all this "blah, blah."

Neurotypicals do whatever makes them feel good; they will “believe in” whatever cruel and idiotic nonsense gives them permission to do whatever makes them feel good.  

Of course, 7 billion people doing / believing whatever makes them feel good inevitably creates conflict. It also makes solving problems impossible; the "non-solution" is the application of force and violence. The prime NT commandment is: "Destroy whoever doesn’t do or say what makes you feel good."

This makes us avoid NTs, because their need to eradicate any and all opposition makes them dangerous.

Asperger types are interested in how the universe works, whether or not the "discovery" of how things work makes us feel good. Why? Because knowing how things work allows for making things better.

The result is that we contradict what NTs must be told (or else!), which is, "Yes, you’re right; the universe and everything in it exists to make you feel good. I am your slave."

Hunter-gatherers have a special way with smells / Study

From: Max Planck Institute for Psycholinguistics

http://www.mpi.nl/news/hunter-gatherers-have-a-special-way-with-smells

When it comes to naming colors, most people do so with ease. But, for odors, it’s much harder to find the words. One notable exception to this rule is found among the Jahai people, a group of hunter-gatherers living in the Malay Peninsula who can name odors just as easily as colors. A new study by Asifa Majid (Radboud University and MPI for Psycholinguistics) and Nicole Kruspe (Lund University) suggests that the Jahai’s special way with smell is related to their hunting and gathering lifestyle.

“There has been a long-standing consensus that ‘smell is the mute sense, the one without words,’ and decades of research with English-speaking participants seemed to confirm this,” says Asifa Majid of Radboud University and MPI for Psycholinguistics. “But, the Jahai of the Malay Peninsula are much better at naming odors than their English-speaking peers. This, of course, raises the question of where this difference originates.”

Hunter-gatherers and horticulturalists

To find out whether it was the Jahai who have an unusually keen ability with odors or whether English speakers are simply lacking, Majid and Nicole Kruspe (Lund University, Sweden) examined two related, but previously unstudied, groups of people in the tropical rainforest of the Malay Peninsula: the hunter-gatherer Semaq Beri and the non-hunter-gatherer Semelai. The Semelai are traditionally farmers, combining shifting rice cultivation with the collection of forest products for trade.

The Semaq Beri and Semelai not only live in a similar environment; they also speak closely related languages. The question was: how easily are they able to name odors? “If ease of olfactory naming is related to cultural practices, then we would expect the Semaq Beri to behave like the Jahai and name odors as easily as they do colors, whereas the Semelai should pattern differently,” the researchers wrote in their recently published study in Current Biology. And, that’s exactly what they found.

Testing color- and odor-naming abilities

Majid and Kruspe tested the color- and odor-naming abilities of 20 Semaq Beri and 21 Semelai people. Sixteen odors were used: orange, leather, cinnamon, peppermint, banana, lemon, licorice, turpentine, garlic, coffee, apple, clove, pineapple, rose, anise, and fish. For the color task, study participants saw 80 standardised color chips, sampling 20 equally spaced hues at four degrees of brightness. Kruspe tested participants in their native language by simply asking, “What smell is this?” or “What color is this?”

The results were clear. The hunter-gatherer Semaq Beri performed on those tests just like the hunter-gatherer Jahai, naming odors and colors with equal ease. The non-hunter-gatherer Semelai, on the other hand, performed like English speakers. For them, odors were difficult to name. The results suggest that the downgrading in importance of smell relative to other sensory inputs is a recent consequence of cultural adaptation, the researchers say. "Hunter-gatherers’ olfaction is superior, while settled peoples’ olfactory cognition is diminished," Majid says.

They say the findings challenge the notion that differences in neuroarchitecture alone underlie differences in olfaction, suggesting instead that cultural variation may play a more prominent role. They also raise a number of interesting questions: “Do hunter-gatherers in other parts of the world also show the same boost to olfactory naming?” Majid asks. “Are other aspects of olfactory cognition also superior in hunter-gatherers,” for example, the ability to differentiate one odor from another? “Finally, how do these cultural differences interact with the biological infrastructure for smell?” She says it will be important to learn whether these groups of people show underlying genetic differences related to the sense of smell.

This study was funded by The Netherlands Organisation for Scientific Research as well as the Swedish Foundation.

Publication

Majid, A., & Kruspe, N. (2018). Hunter-gatherer olfaction is special. Current Biology. DOI: 10.1016/j.cub.2017.12.014

 

Every Asperger Needs to Read this Paper! / Symptoms of entrapment and captivity

Research that supports my challenge to contemporary (American) psychology: Asperger symptoms are the result of "captivity," not "defective brains."

From: Depression Research and Treatment

Depress Res Treat. 2010; 2010: 501782. Published online 2010 Nov 4. doi: 10.1155/2010/501782. PMCID: PMC2989705

Full Article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2989705/

Testing a German Adaption of the Entrapment Scale and Assessing the Relation to Depression

Manuel Trachsel,1,* Tobias Krieger,2 Paul Gilbert,3 and Martin Grosse Holtforth2

Abstract

The construct of entrapment is used in evolutionary theory to explain the etiology of depression. The perception of entrapment can emerge when defeated individuals want to escape but are incapable of doing so. Studies have shown relationships of entrapment to depression and suicidal tendencies. The aim of this study was a psychometric evaluation and validation of the Entrapment Scale in German (ES-D). A total of 540 normal subjects completed the ES-D along with other measures of depressive symptoms, hopelessness, and distress. Good reliability and validity of the ES-D were demonstrated. Further, whereas entrapment has originally been regarded as a two-dimensional construct, our analyses supported a single-factor model. Entrapment explained variance in depressive symptoms beyond that explained by stress and hopelessness, supporting the relevance of the construct for depression research. These findings are discussed with regard to their theoretical implications as well as the future use of the entrapment scale in clinical research and practice.

Being outnumbered by social humans, 99% to 1%, is de facto defeat and captivity

1. Introduction

Assuming a certain degree of adaptivity of behavior and emotion, evolutionary theorists have suggested various functions of moodiness and depression. Whereas adaptive mechanisms may become functionally maladaptive [1, 2], there have been many attempts to explain potentially adaptive functions of depression. For example, Price [3] suggested that depression evolved from the strategic importance of having a de-escalating or losing strategy. Social rank theory [4, 5] built on this and suggests that some aspects of depression, such as mood and drive variations, may have evolved as mechanisms for regulating behavior in contexts of conflicts and competition for resources and mates. Hence, subordinates are sensitive to down rank threats and are less confident than dominants, while those who are defeated will seek to avoid those who defeated them. Depression may also serve the function to help individuals disengage from unattainable goals and deal with losses [6]. 

Social rank theory (e.g., [4]) links defeat states to depression. Drawing on Dixon’s arrested defences model of mood variation [7, 8], this theory suggests that especially when stresses associated with social defeats and social threats arise, individuals are automatically orientated to fight, flight or both. Usually, either of those defensive behaviors will work. So, flight and escape remove the individual from the conditions in which stress is arising (e.g., threats from a dominant), or anger/aggression curtails the threat. These defensive behaviors typically work for nonhuman animals. However, for humans, such basic fight and flight strategies may be less effective facing the relatively novel problems of living in modern societies, perhaps explaining the prevalence of disorders such as depression [8]. Dixon suggested that in depression, defensive behaviors can be highly aroused but also blocked and arrested and in this situation depression ensues. Dixon et al. [8] called this arrested flight. For example, in lizards, being defeated but able to escape has proven to be less problematic than being defeated and being trapped. Those who are in caged conditions, where escape is impossible, are at risk of depression and even death [9]. Gilbert [4, 10] and Gilbert and Allan [5] noted that depressed individuals commonly verbalize strong escape wishes and that feelings of entrapment and desires to escape have also been strongly linked to suicide, according to O’Connor [11]. In addition they may also have strong feelings of anger or resentment that they find difficult to express or become frightening to them. (Or are NOT ALLOWED to express, without being punished) 

Gilbert [4] and Gilbert and Allan [5] proposed that a variety of situations (not just interpersonal conflicts) that produce feelings of defeat or uncontrollable stress, which stimulate strong escape desires but also make it impossible for the individual to escape, lead the individual to a perception of entrapment. They defined entrapment as a desire to escape from the current situation in combination with the perception that all possibilities to overcome a given situation are blocked. Thus, theoretically, entrapment follows defeat if the individual is not able to escape. This inability may be due to a dominant subject who does not offer propitiatory gestures following antagonistic competition, or to the individual's being continually attacked. (Relentless social bullying)

In contrast to the concept of learned helplessness [12], which focuses on perceptions of control, the entrapment model focuses on the outputs of the threat system emanating from areas such as the amygdala [13]. In addition, depressed people are still highly motivated and would like to change their situation or mood state. It has also been argued that, unlike helplessness, entrapment takes into account the social forces that lead to depressive symptoms, which is important for group-living species with dominance hierarchies, such as human beings [14]. Empirical findings by Holden and Fekken [15] support this assumption. Gilbert [4] argued that the construct of entrapment may explain the etiology of depression better than learned helplessness because, according to the theory of learned helplessness, helpless individuals have already lost their flight motivation, whereas entrapped individuals have not.

According to Gilbert [4], the perception of entrapment can be triggered, increased, and maintained by external factors, but internal processes such as intrusive, unwanted thoughts and rumination can also play an important role (e.g., [16, 17]). For example, ruminating on the sense of defeat or inferiority may act as an internal signal of down-rank attack that makes an individual feel increasingly inferior and defeated. Because of feelings of failure, such rumination may occur even when an individual has successfully escaped from an entrapping external situation, producing a sense of internal entrapment. For example, Sturman and Mongrain [18] found that internal entrapment increased following an athletic defeat. Moreover, self-critics may harbor thoughts and feelings that act like "internal dominants," which can also activate defensive behaviors.

For the empirical assessment of entrapment, Gilbert and Allan [5] developed the self-report Entrapment Scale (ES) and demonstrated its reliability. Using the ES, several studies have shown that the perception of entrapment is strongly related to low mood, anhedonia, and depression [5, 19–21]. Sturman and Mongrain [22] found that entrapment was a significant predictor of recurrence of major depression. Further, Allan and Gilbert [23] found that entrapment relates to increased feelings of anger and to a lower expression of these feelings. In a study by Martin et al. [24], the perception of entrapment was associated with feelings of shame, but not with feelings of guilt. Investigating the temporal connection between depression and entrapment, Goldstein and Willner [25, 26] concluded that the relation between depression and entrapment is equivocal and might be bidirectional; that is, entrapment may lead to depression and vice versa.

Entrapment has further been used as a construct explaining suicidal tendency. In their cry-of-pain model, Williams and Pollock [27, 28] argued that suicidal behavior should be seen as a cry of pain rather than as a cry for help. Consistent with the concept of arrested flight, they proposed that suicidal behavior is reactive. In their model, the response (the cry) to a situation is supposed to have the following three components: defeat, no escape potential, and no rescue. O'Connor [11] provided empirical support in a case-control study comparing suicidal patients and matched hospital controls on measures of affect, stress, and posttraumatic stress. The authors hypothesized that the co-presence of all three cry-of-pain variables primes an individual for suicidal behavior. The suicidal patients, with respect to a recent stressful event, reported significantly higher levels of defeat, lower levels of escape potential, and lower levels of rescue than the controls. Furthermore, Rasmussen et al. [21] showed that entrapment strongly mediated the relationship between defeat and suicidal ideation in a sample of first-time and repeat self-harming patients. Nevertheless, there has also been some criticism of the concept of entrapment as it is derived from the animal literature [29].

To our knowledge, there are so far no data on the retest reliability or the temporal stability of the Entrapment Scale. Because entrapment is seen as a state-like rather than a trait-like construct, its stability is likely dependent on the stability of its causes. (Remove the social terrorism, or remove yourself) Therefore, if the causes of entrapment are stable (e.g., a long-lasting abusive relationship), then entrapment will also remain stable over time. In contrast, for the Beck Hopelessness Scale (BHS), there are studies assessing temporal stability that have yielded stable trait-like components of hopelessness [30]. Young and coworkers [30] stated that the high stability of hopelessness is a crucial predictor of depressive relapses and suicide attempts. For the Perceived Stress Questionnaire (PSQ), there are studies examining retest reliability. The PSQ has shown high retest reliability over 13 days (r = .80) in a Spanish sample [31]. It is to be expected that with longer retest intervals, as in the present study (3 months), the stability of perceived stress will be substantially lower. We therefore expect the stability of entrapment to be higher than that of perceived stress, a state-like construct, but lower than that of hopelessness, which has been shown to be more trait-like [32].

Previous research is equivocal regarding the dimensionality of the entrapment construct. Internal and external entrapment were originally conceived as two separate constructs (cf. [5]) and were widely assessed using two subscales measuring entrapment caused by situations and other people (e.g., “I feel trapped by other people”) or by one’s own limitations (e.g., “I want to get away from myself”). The scores of the two subscales were averaged to result in a total entrapment score in many studies. However as Taylor et al. [33] have shown, entrapment may be best conceptualized as a unidimensional construct. This reasoning is supported by the observation that some of the items of the ES cannot easily be classified either as internal or external entrapment and because the corresponding subscales lack face validity (e.g., “I am in a situation I feel trapped in” or “I can see no way out of my current situation”).
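
For readers curious what "testing the dimensionality" of a scale actually involves, here is a minimal sketch – not the authors' analysis, and using simulated data in place of real ES-D responses (the 16-item length is an assumption) – comparing a one-factor and a two-factor model by held-out log-likelihood:

```python
# Minimal dimensionality check -- simulated stand-in data, NOT the paper's analysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_subjects, n_items = 540, 16  # sample size from the paper; item count assumed

# Simulate unidimensional responses: one latent "entrapment" factor drives all items.
latent = rng.normal(size=(n_subjects, 1))
loadings = rng.uniform(0.5, 0.9, size=(1, n_items))
responses = latent @ loadings + rng.normal(scale=0.6, size=(n_subjects, n_items))

train, test = train_test_split(responses, random_state=0)
for k in (1, 2):
    fa = FactorAnalysis(n_components=k).fit(train)
    # score() returns the average log-likelihood of held-out samples
    print(f"{k}-factor model: held-out log-likelihood = {fa.score(test):.3f}")

# If the data are truly unidimensional, the second factor buys little or nothing
# out of sample -- the single-factor pattern the paper reports for the ES-D.
```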

5. Discussion

The entrapment construct embeds depressiveness theoretically in an evolutionary context. The situation of arrested flight, or blocked escape, in which a defeated individual is incapable of escaping despite a maintained motivation to escape, may lead to the perception of entrapment in affected individuals [8]. In this study, the Entrapment Scale (ES) was translated into German (ES-D), tested psychometrically, and validated by associations with other measures. This study provides evidence that the ES-D is a reliable self-report measure of entrapment, demonstrating high internal consistency. The study also shows that the ES-D is a valid measure that relates to similar constructs like hopelessness, depressive symptoms, and perceived stress. Levels of entrapment as measured with the ES-D were associated with depressiveness, perceived stress, and hopelessness, showing moderate to high correlations. Results were consistent with those obtained by Gilbert and Allan [5]. Entrapment explained additional variance in depressiveness beyond that explained by stress and hopelessness. Taken together, the present data support the conception of entrapment as a relevant and distinct construct in the explanation of depression. (And much of Asperger behavior)

The results of our study confirm the findings of Taylor et al. [33], showing that entrapment is only theoretically, but not empirically, separable into internal and external sources of entrapment. Those authors went even further, showing that entrapment and defeat could represent a single construct. Although the defeat scale [5] was not included in this study, the results are in line with the assumption of Taylor et al. [33] and support other studies using entrapment a priori as a single construct. However, although this study supports the general idea that escape motivation bears on both internal and external events and on depression, clinically it can be very important to distinguish between them. For example, in studies of psychosis, entrapment can be very focused on internal stimuli, particularly voices [47].

The state conceptualization of entrapment implies that the perception of entrapment may change over time. Therefore, we did not expect retest correlations as high as those for more trait-like constructs like hopelessness [32]. Since the correlation over time is generally a function of both the reliability of the measure and the stability of the construct, high reliability is a necessary condition for high stability [48]. In this study, we showed that the ES-D is a reliable scale, and we considered retest correlations an indicator of stability. The intraclass correlation of .67 suggests that entrapment is more sensitive to change than hopelessness (r = .82). Furthermore, the state of entrapment seems to be more stable than perceived stress, which may be influenced to a greater extent by external factors. Given the confirmed reliability and validity of the ES-D in this study, we therefore cautiously conclude that entrapment lies between hopelessness and perceived stress in terms of stability.
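
To make the stability comparison concrete: retest reliability is just the correlation between the same people's scores at two time points. Here is a small sketch with invented stability values standing in for the real data (only the hopelessness figure of .82 and the entrapment ICC of .67 appear in the text; the stress value is made up for illustration):

```python
# Retest-correlation sketch with invented data -- NOT the study's scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n = 540  # sample size from the paper

def simulate_two_waves(true_stability, n=n):
    """Two measurement waves whose population correlation is `true_stability`."""
    t1 = rng.normal(size=n)
    t2 = true_stability * t1 + np.sqrt(1 - true_stability**2) * rng.normal(size=n)
    return t1, t2

# Ordering from the text: hopelessness most stable, perceived stress least.
for construct, stability in [("hopelessness", 0.82),
                             ("entrapment", 0.67),
                             ("perceived stress", 0.50)]:  # 0.50 is invented
    t1, t2 = simulate_two_waves(stability)
    r, _ = pearsonr(t1, t2)
    print(f"{construct}: 3-month retest r = {r:.2f}")
```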

Whereas the high correlation between entrapment and depressive symptoms in this study may be interpreted as evidence of conceptual equivalence, an examination of the item wordings of the two scales clearly suggests that these questionnaires assess distinct constructs. However, the causal direction of this bivariate relation is not clear. Theoretically, both directions are plausible: entrapment may be a cause or a consequence of depressive symptoms, or even both. Unfortunately, studies examining temporal precedence have so far yielded equivocal results and have methodological shortcomings (e.g., no clinical samples, only mild and transitory depression and entrapment scores with musical mood induction) that prevent answering this question conclusively [25, 26]. It also remains unclear whether entrapment is depression-specific. Entrapment might be associated not only with depression, but also with other psychological symptoms, or even psychopathology in general. This interpretation is supported by research showing a relation between distress arising from voices and entrapment in psychotic patients [49, 50]. Furthermore, other studies show a relation between entrapment and depressive symptoms [51–53] and social anxiety and shame [54] in psychosis. The usefulness of entrapment as a construct for explaining psychopathologies in humans has been questioned [29]. Thanks to the present study, it is now possible to investigate entrapment in psychopathology in the German-speaking area.

Modern social humans and the social hierarchy: Driving Asperger types crazy for thousands of years!

 

Does Self-Awareness Require a Complex Brain?

Aye, yai, yai. Here we go again… which definitions of consciousness and self-awareness are being discussed?

(SciAm Article after sample definitions) NOTE: The media function on my page is screwed up… can’t size or delete some images – you’ll have to search out “brain parts images” for yourself. 

From: "What is Self-Awareness and Why Does it Matter?" (Positive Psychology Articles)

So What is Self-Awareness Exactly? / The psychological study of self-awareness can first be traced back to 1972, when psychologists Shelley Duval and Robert Wicklund developed the theory of self-awareness.

They proposed that: “when we focus our attention on ourselves, we evaluate and compare our current behavior to our internal standards and values. We become self-conscious as objective evaluators of ourselves.”

In essence, they consider self-awareness as a major mechanism of self-control.

Sounds pretty good; a state of “owning” one’s thoughts and intentions and the recognition that one’s behavior is often not congruent with these “values”. NOT the simple act of “mirror recognition” which belongs to the brain’s “visual system”. 

Basic physical def: When you are awake and aware of your surroundings, that's consciousness. (That jibes with mirror-recognition-type awareness as a property of an active sensory system.)

The most influential modern physical theories of consciousness (there are supernatural theories, of course) are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman and Antonio Damasio, and by philosophers such as Daniel Dennett, seek to explain consciousness in terms of neural events occurring within the brain. Consciousness – Wikipedia

It’s impossible here to present the long-standing and ever-growing confusion over the modern "concepts" of consciousness. It’s a word that is used, for the most part, without any meaning whatsoever. Technology has also entered the arena.

My own idea is this… What we commonly refer to as "being conscious" is a social interaction, an act of Co-consciousness; the product of language: "In Western cultures verbal language is inseparable from the process of creating a conscious human being." See previous post: https://aspergerhuman.wordpress.com/?p=9198&preview=true

______________________________________________________________________________________

Article: https://blogs.scientificamerican.com/brainwaves/does-self-awareness-require-a-complex-brain/

By Ferris Jabr on August 22, 2012

The computer, smartphone or other electronic device on which you are reading this article has a rudimentary brain—kind of.* (uh-oh. Pop-Sci) It has highly organized electrical circuits that store information and behave in specific, predictable ways, just like the interconnected cells in your brain. (No) On the most fundamental level, electrical circuits and neurons are made of the same stuff—atoms and their constituent elementary particles—but whereas the human brain is conscious, manmade gadgets do not know they exist. (WOW! NT nonsense!) Consciousness, most scientists argue, (made up assertion) is not a universal property of all matter in the universe. Rather, consciousness is restricted to a subset of animals with relatively complex brains. The more scientists study animal behavior and brain anatomy, however, the more universal consciousness seems to be. (Confused yet?) (Mirror awareness is a VISUAL phenomenon)

A brain as complex as the human brain is definitely not necessary for consciousness. (!!!)

On July 7 this year, a group of neuroscientists convening at Cambridge University signed a document officially declaring that non-human animals, “including all mammals and birds, and many other creatures, including octopuses” are conscious. (Well, that’s certainly proof that some poorly-defined experiential state in humans is a “thingy” also “in mammals and birds, and many other creatures, including octopuses” !!)

Humans are more than just conscious—they are also self-aware. Scientists differ on the difference between consciousness and self-awareness (those imaginary Science Elves again, messing us up with "tricky" non-specific definitions of "consciousness" and "self-awareness"), but here is one common explanation: Consciousness is awareness of one's body and one's environment; self-awareness is recognition of that consciousness—not only understanding that one exists, but further understanding that one is aware of one's existence. Another way of thinking about it: To be conscious is to think; to be self-aware is to realize that you are a thinking being and to think about your thoughts. Presumably, human infants are conscious—they perceive and respond to people and things around them—but they are not yet self-aware. In their first years of life, infants develop a sense of self, learn to recognize themselves in the mirror (a phenomenon of the SENSORY SYSTEM) and to distinguish their own point of view from other people's perspectives.

Notice how a lack of distinction / definition of terms leads to the inevitable linear-causal-but-hierarchical arrangement of "notions" assumed to be correct (that is, of how the brain works as an "isolated" command center) – "phrases" merely strung together by "social habit."

Numerous neuroimaging studies have suggested that thinking about ourselves, recognizing images of ourselves and reflecting on our thoughts and feelings—that is, different forms of self-awareness—all involve the cerebral cortex, the outermost, intricately wrinkled part of the brain. The fact that humans have a particularly large and wrinkly cerebral cortex relative to body size supposedly explains why we seem to be more self-aware than most other animals. (This pop-sci blah, blah is unforgivable in a "science" article.)

One would expect, then, that a man missing huge portions of his cerebral cortex would lose at least some of his self-awareness. Patient R, also known as Roger, defies that expectation. Roger is a 57-year-old man who suffered extensive brain damage in 1980 after a severe bout of herpes simplex encephalitis—inflammation of the brain caused by the herpes virus. The disease destroyed most of Roger’s insular cortex, anterior cingulate cortex (ACC), and medial prefrontal cortex (mPFC), all brain regions thought to be essential for self-awareness. About 10 percent of his insula remains and only one percent of his ACC.

Note that "self-awareness" in this article is the "you are awake and aware of your surroundings" definition, and not the Duval and Wicklund definition.

Roger cannot remember much of what happened to him between 1970 and 1980 and he has great difficulty forming new memories. He cannot taste or smell either. But he still knows who he is—he has a sense of self. He recognizes himself in the mirror and in photographs. (This would indicate that his VISUAL system / memory is intact) To most people, Roger seems like a relatively typical man who does not act out of the ordinary. (That’s NTs for you; minimal evidence, inattentional blindness, social convention = “must be a normal person”) LOL

Carissa Philippi and David Rudrauf of the University of Iowa and their colleagues investigated the extent of Roger's self-awareness in a series of tests. In a mirror-recognition task, for example, a researcher pretended to brush something off of Roger's nose with a tissue that concealed black eye shadow. Fifteen minutes later, the researcher asked Roger to look at himself in the mirror. Roger immediately rubbed away the black smudge on his nose and wondered aloud how it got there in the first place.

Philippi and Rudrauf also showed Roger photographs of himself, of people he knew and of strangers. He almost always recognized himself and never mistook someone else for himself, but he sometimes had difficulty recognizing a photo of his face when it appeared by itself on a black background, absent of hair and clothing. (Visual system)

Roger also distinguished the sensation of tickling himself from the feeling of someone else tickling him and consistently found the latter more stimulating. When one researcher asked for permission to tickle Roger's armpits, he replied, "Got a towel?" As Philippi and Rudrauf note, Roger's quick wit indicates that in addition to maintaining a sense of self, he adopts the perspective of others—a talent known as theory of mind. (Hmmm… a man without an insular cortex, anterior cingulate cortex (ACC), or medial prefrontal cortex is capable of "mind-reading" and subtle social thinking and interaction. BUT ASD / Asperger people, who have these "parts" intact, are deemed incapable of "mind-reading" and social communication.) He anticipated that the researcher would notice his sweaty armpits and used humor to preempt any awkwardness.

Just where is the “mythic social brain” located? In a textbook perhaps?

In another task, Roger had to use a computer mouse to drag a blue box from the center of a computer screen towards a green box in one of the corners of the screen. In some cases, the program gave him complete control over the blue box; in other cases, the program restricted his control. Roger easily discriminated between sessions in which he had full control and times when some other force was at work. In other words, he understood when he was and was not responsible for certain actions. (Aye, yai, yai. What a “stretchy” conclusion!) The results appear online August 22 in PLOS One.
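
As a rough guess at how "restricted control" can be implemented in such a task (this is illustrative, not the PLOS ONE study's actual code): blend the participant's mouse movement with random displacement, with a single parameter setting how much control the participant really has.

```python
# Sketch of a graded-control cursor task -- a guess at the paradigm, not the
# study's code. alpha = 1.0 gives full control; lower alpha mixes in random
# displacement the participant did not produce.
import numpy as np

rng = np.random.default_rng(0)

def cursor_step(user_dx, user_dy, alpha, noise_scale=5.0):
    """On-screen cursor displacement for one frame of the task."""
    noise_dx, noise_dy = rng.normal(scale=noise_scale, size=2)
    return (alpha * user_dx + (1 - alpha) * noise_dx,
            alpha * user_dy + (1 - alpha) * noise_dy)

# Same intended movement under full vs. restricted control:
print(cursor_step(10.0, 0.0, alpha=1.0))  # exactly the intended movement
print(cursor_step(10.0, 0.0, alpha=0.3))  # mostly noise-driven
# Judging "was I controlling that?" across such trials is the agency measure.
```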

Given the evidence of Roger's largely intact self-awareness (visual recognition) despite his ravaged brain, Philippi, Rudrauf and their colleagues argue that the insular cortex, anterior cingulate cortex (ACC), and medial prefrontal cortex (mPFC) cannot by themselves account for conscious recognition of oneself as a thinking being. (Well, congratulations!) Instead, they propose that self-awareness is a far more diffuse cognitive process, relying on many parts of the brain, including regions not located in the cerebral cortex. (Why no recognition of VISUAL processing??)

In their new study, Philippi and Rudrauf point to a fascinating review of children with hydranencephaly—a rare disorder in which fluid-filled sacs replace the brain's cerebral hemispheres. Children with hydranencephaly are essentially missing every part of their brain except the brainstem and cerebellum and a few other structures. Holding a light near such a child's head illuminates the skull like a jack-o-lantern. Although many children with hydranencephaly appear relatively normal at birth, they often quickly develop growth problems, seizures and impaired vision. Most die within their first year of life. In some cases, however, children with hydranencephaly live for years or even decades. Such children lack a cerebral cortex—the part of the brain thought to be most important for consciousness and self-awareness—but, as the review paper makes clear, at least some hydranencephalic children give every appearance of genuine consciousness. They respond to people and things in their environment. When someone calls, they perk up. The children smile, laugh and cry. They know the difference between familiar people and strangers. They move themselves towards objects they desire. And they prefer some kinds of music over others. If some children with hydranencephaly are conscious, then the brain does not require an intact cerebral cortex to produce consciousness. (Which "consciousness" are we discussing?)

Hydranencephaly: "conscious" by the definition "awake and aware of its surroundings." There is a consistent error in equating this definition (which is true of any animal that is not asleep, dormant, anesthetized, or comatose, and which includes automatic reflexes) with being aware that one is aware, or self-awareness.

Whether such children are truly self-aware, however, is more difficult to answer, especially as they cannot communicate with language. In D. Alan Shewmon's review, one child showed intense fascination with his reflection in a mirror (visual system), but it's not clear whether he recognized his reflection as his own. Still, research on hydranencephaly and Roger's case study indicate that self-awareness—this ostensibly sophisticated and unique cognitive process layered upon consciousness—might be more universal than we realized. (Totally ridiculous statement: it mixes simple visual recognition with the Duval and Wicklund definition. Still no clue as to what "consciousness" is.)

References

Merker B (2007) Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. Behavioral and Brain Sciences 30: 63-81.

Philippi C., Feinstein J.S., Khalsa S.S., Damasio A., Tranel D., Landini G., Williford K., Rudrauf D. Preserved self-awareness following extensive bilateral brain damage to the insula, anterior cingulate, and medial prefrontal cortices. PLOS ONE, August 22, 2012.

Shewmon DA, Holmes GL, Byrne PA. Consciousness in congenitally decorticate children: developmental vegetative state as self-fulfilling prophecy. Dev Med Child Neurol. 1999 Jun;41(6):364-74.

How to Exterminate Aboriginal Peoples, cont. / Reproductive Elimination

 

From: National Library of Australia 1866 Photograph "Aborigines of Tasmania: William Lanney, Coal River Tribe, 26 year. Lallah Rookh, or Truganini (Seaweed), female, Bruni (i.e. Bruny) Island Tribe, 65 year." Truganini died in the mid-1870s. She is believed to be the last Aboriginal Tasmanian.

The Process of DOMESTICATION:

How to wipe out an indigenous population, subculture, ethnic group, wild population, or wild species.

As is usual among social humans, there is bitter conflict over "who is right" concerning the history of European–Tasmanian conflict. I could not care less about these social battles; the arguments are not about the facts, but about who "owns" history. What does interest me is a pattern of "knocking off" small populations of wild humans by reducing the number of females allowed to reproduce with "wild" males. This pattern applies to the collision of peoples and cultures in which one is powerful, numerous, and RUTHLESS, and the other is an indigenous population sheltered by geography from contact with human predators for hundreds to thousands of years. This process has been repeated countless times around the globe.

The Pattern:

(Map: the Bassian Plain, Bass Strait)

TIMELINE

1790-ish: British and Americans arrive to hunt seals in the Bass Strait.

1800-ish: Seal hunters are dropped off on uninhabited islands in Bass Strait, staying November to May, the seal-hunting season. Sealers establish camps on islands close to Tasmania and make contact with Aboriginal Tasmanians, who had been isolated from their origins in Australia for 8,000–10,000 years.

Trade develops – Tasmanians want dogs & food items; sealers trade for kangaroo hides. No surprise – a trade in Aboriginal Tasmanian women developed: women were excellent seal and bird hunters. Some women were purchased outright, others were “gifts.” Some sealers raided coastal Aboriginal settlements and abducted women.

Each seal hunter “required” 4-5 women to work for him as hunters. The women were marooned on uninhabited islands, and if not enough seals had been killed by the time the sealer returned, the women were brutally beaten and “stubborn” ones killed. 

By 1810 the seal population was severely reduced and the hunters moved on; some remained with Aboriginal women they had "married." Children of these mixed European / Aboriginal unions survived, while fully Aboriginal children stopped being conceived.

Tales are remembered of women who “went rogue,” attacking and killing the sealers because of the brutality they suffered. Female slaves became a force against white authority and fought in conflicts. Women who fought back, resisted enslavement, or proved too difficult to be “domesticated” were eliminated.

After mere decades, the number of Aboriginal Tasmanian women had declined: it is reported that by 1830 only 3 Aboriginal women remained in northeast Tasmania, along with 72 men. This lack of females made it impossible for Aboriginal Tasmanians to reproduce in sufficient numbers to survive as a distinct and original population.

Domestication depends on juvenilization (neoteny), a brutal but simple process: select “tame” females for reproductive use. Childlike traits of obedience, passivity, and easy handling and manipulation are valued by “captors,” just as tame traits are valued in the selection of wild animals for domestication.

The idiotic European belief that dressing up indigenous people in “white” clothing will magically convert them into being “tame” or “domesticated.”

Truganini (ca. 1812–1876). Her life and image have been exploited, like a freak in a carnival sideshow or the head of a trophy animal hung on the wall.

 

Article by: Aaron Greenville and Paddy Pallin

Will we hunt dingoes to the brink like the Tasmanian tiger?

A dead dingo in 2013 (left) and a Tasmanian tiger, last seen in the wild in 1932. Dingo photograph by Aaron Greenville; a hunted thylacine in 1869, photographer unknown.

The last Tasmanian tiger died a lonely death in the Hobart Zoo in 1936, just 59 days after new state laws aimed at protecting it from extinction were passed in parliament. The warning bells of its likely demise had been pealing for decades; protection came too late. Today we are making many of the same deadly mistakes, only now with dingoes.

Earlier this month the Queensland government announced it would make it easier for farmers to put out poison baits for “wild dogs.” In Victoria, similar measures have already been taken. Lethal methods of control have lethal consequences. It is time to rethink how we manage our wild predators.

______________________________________________

A deadly history lesson

(A familiar impasse for those of us in Wyoming who want the wolf to retain its natural status as top predator in our state, versus cattle and sheep ranchers who demonstrate a pathological fury against the wolf and want it exterminated (again).)

________________________________________________

Commonly known as Tasmanian tigers because of their striped backs, thylacines were hunted due to the damage the species was alleged to be doing to the sheep industry in the state. However, the thylacine’s actual impact on the industry was likely small.

Instead, the species was made a scapegoat for poor management and the harshness of the Tasmanian environment, as early Europeans struggled to impose foreign farming practices on a new world.

The tiger [thylacine]… received a very bad character in the Assembly yesterday; in fact, there appeared not to be one redeeming point in this animal. It was described as cowardly, as stealing down on the sheep in the night and wantonly killing many more than it could eat… All sheep owners in the House agreed that “something should be done,” as it was asserted that the tigers have largely increased of late years. – The Mercury, October 1886.

More than a century later, and it’s now the dingo in the firing line.

Since 1990, the number of sheep shorn in Queensland has crashed 92 per cent, from over 21 million to less than 2 million. Although there have been rises and falls in the wool price and droughts have come and gone, it’s the dingoes that have been the last straw. – ABC Radio National, May 2013

An ancient predator vs modern farmers

Producing sheep is an incredibly tough business, with droughts, international competition and volatile markets for wool and meat – mostly factors that are well beyond the control of an individual farmer.

Dingoes are seen as one of the few threats to livelihood that producers can fight back against. As a result, the dingo has experienced a severe range contraction since European settlement, and there is mounting pressure to remove it from the wild, despite dingoes calling Australia home for 4,000 years.

Dingoes are now rare or absent across half of Australia due to intense control measures. While they are more common in other areas, we have seen how species populations can collapse quickly. For example, bounty records from Tasmania showed the thylacine population suddenly crashed in 1904-1910 due to hunting pressure from humans.

Will the dingo’s demise be like the thylacine’s? We simply do not know, but the social conditions and a rapidly changing environment mirror that earlier story.

 
