Overview of personality “theories” + History of “personality concept” / Yikes!

https://www.simplypsychology.org/personality-theories.html

Without any attempt at addressing this enormously complex problem as a whole, it may be worthwhile to recall one recent example of interdisciplinary discussion occurring at the intersection of empirical psychology and normative ethics: a discussion of virtuous character. The latter, a paradigmatic subject matter of virtue ethics at least since Socrates, has recently been reconsidered in the light of experimental results obtained by academic psychology. More specifically, this reconsideration was tied to the criticism of the concept of personality voiced mostly by social psychologists.

The conceptual and theoretical core of personality psychology, both in its scientific and folk versions (Gilbert & Malone, 1995; Ross, 1977), has usually been constructed around the notion of temporally stable and cross-situationally consistent features: so-called global or robust traits. A recent empirical tradition of situationism, however, seems to provide ample evidence not only for the fact that we are all indeed “prone to dispositionism” of this kind, but also that such “dispositionism is false” (Goldie, 2004, p. 63). The researchers from this tradition deny that there are stable and consistent traits or, alternatively, insist that most actual people don’t exhibit traits of this kind. Rather, a large body of empirical evidence (among the research most commonly discussed by virtue ethicists is that by Darley & Batson, 1973; Isen & Levin, 1972; Milgram, 1963; for a more complete review see Doris, 2002) shows that it is the situation in which an agent happens to act, rather than an allegedly context-independent and stable personality, that accounts for a large amount of human behavior.

The experiments conducted by social psychologists were soon generalized into doubts concerning the usefulness of trait concepts for the purposes of scientific explanation and prediction. Understood in such a context, in turn, they attracted the attention of many philosophers. The empirical results mentioned above could, indeed, have been disquieting, especially if one realized that the very center of traditional philosophical moral psychology, especially within so-called virtue ethics, had been founded on the notion of moral character with virtues and vices aspiring to exactly the same stability and cross-situational consistency that was undermined in the case of personality. Among the philosophers it was especially Gilbert Harman (1999, 2000) and John Doris (1998, 2002) who stimulated a fierce debate by claiming that the situationist literature posed a grave threat to “globalist moral psychologies” (Doris & Stich, 2014) and undermined the very basis of both ancient and contemporary virtue ethics.

Such a far-reaching claim, obviously, provoked a strong response (for useful reviews see Alfano, 2013; Appiah, 2008; Goldie, 2004; Miller, 2013a). What seems to have been assumed by many disputants on both sides of the controversy, however, was a relatively direct applicability of psychological theses concerning personality to philosophical issues referring to character. In brief, the interchangeability of the notions of personality and character had been presumed. Despite the fact that such an implicit assumption has often been made, these two notions are not identical. True, they are often used interchangeably and the difference between them is vague, if not obscure. Still, the notions in question can be distinguished from each other, and the effort to draw the distinction is arguably worthwhile because of its bearing on many particular issues, including the above discussion of situationism.

One possible way of exploring the difference between these two concepts is to compare the typical, or paradigmatic, ways they are applied in their respective original domains. Common language is obviously not very helpful here, as it exhibits the very same confusion that is intended to be clarified. Rather, the context of classical virtue ethics (for character) and that of academic personality psychology (for personality) are promising. Such a general clue will be used in the following sections. First, the concepts of character and personality will be investigated both historically and systematically. Then a parallel will be drawn between the pair in question and the so-called fact–value distinction, and an analysis of the functions served by both concepts will be conducted. Finally, the outcomes achieved will be placed in the context of some differences between the fact–value distinction and the Humean is–ought dichotomy.

Historical vicissitudes of the notions

In antiquity the notion of character was inseparably connected with the normative aspect of human conduct and in most contexts amounted to moral qualities of a respective person: to virtues and vices. Such a connection was emphasized in a term alternative to “character”: the Greek word “êthos” (cf. Gill, 1983, p. 472). An evaluative discourse of character can be found in common language and folk psychology (cf. Irwin, 1996), but it is its professional version proper to virtue ethics that is crucial in the present context. The latter philosophical tradition took on its classical form in Socrates and culminated with Aristotle’s (trans. 2000) Nicomachean Ethics, which to this day is a paradigmatic example of a virtue ethical account.

Ancient conceptions of character were descriptive and normative, with both these features closely intertwined. They involved theories of moral and developmental psychology and, at the same time, prescriptions and detailed instructions for character education and character self-cultivation. And it was, importantly, a ‘life-long learning’ account that was provided: it was a rational adult, rather than a child deprived of genuine rationality, who was regarded by Cicero, Seneca, or Plutarch as able to accomplish “character formation through reasoned reflection and decision” (Gill, 1983, p. 470). The standards for the success of such a process were usually considered objective. In the Aristotelian context, for instance, it was the ability to properly perform human natural functions that provided the ultimate criterion.

The ancient Greek and Roman concept of character turned out to be profoundly influential in the following ages, at least, as has been mentioned, until the beginning of the previous century (for part of the story, see MacIntyre, 2013). Some of the variations on this ancient notion can be found in the Kantian ideal of the ethical personality, the German tradition of Bildung, the 19th-century American model of the balanced character and, last but not least, the Victorian vision of the virtuous character so vivid in the novels of this cultural milieu (Woolfolk, 2002). What is remarkable is that the notion of character, influential as it used to be, is considerably less important today. Nowadays, in fact, it seems to have been mostly supplanted by the concept of personality. And it is the history of the process that led to this state of affairs, of the shift “from a language of ‘character’ to a language of ‘personality’” (Nicholson, 1998, p. 52), that can be very revealing in the present context. Two particularly helpful accounts have been provided by Danziger (1990, 1997) and Brinkmann (2010).3

Danziger begins his account with an important remark that initially the notion of personality carried meanings which were not psychological, but theological, legal, or ethical. It was only as a result of considerable evolution that it “ended up as a psychological category.” The first important dimension of the process of its coming “down to earth” (1997, p. 124) was medicalization. Danziger places the latter in 19th-century France, where medical professionals were as skeptical about the earlier theologically or philosophically laden versions of the notion as they were enthusiastic about the promises of its naturalization. It was as a result of their reconceptualization that “personality” began to refer to “a quasi-medical entity subject to disease, disorder and symptomatology” (1997, p. 131). The term understood as such won its place within medical discourse and soon, in 1885, it became possible for Théodule Ribot to publish The Diseases of the Personality without a risk of conceptual confusion. An evolution began which would later lead to the inclusion of personality disorders in the DSM (cf. Brinkmann, 2010, p. 73).

Among the descendants of medicalization, it is arguably the mental hygiene movement, “an ideological component” (Danziger, 1990, p. 163) of the rise of contemporary research on personality, that was most important at that time. On the basis of the belief that it is individual maladjustment rooted in early development that is responsible for all kinds of social and interpersonal problems, “a powerful and well-founded social movement” (p. 164) was initiated, directed at the therapy of the maladjusted as well as at preventive efforts addressed to the potentially maladjusted (which could include everybody). The notion of personality, as noted by Danziger, “played a central role in the ideology” (p. 164) of this movement. More particularly, it was the “personality” of the individuals addressed by the latter which was recognized as “the site where the seeds of future individual and social problems were sown and germinated” (Danziger, 1997, p. 127) and, accordingly, established as an object of intervention.

Personality understood as such needed to be scientifically measured on the dimension of its adaptation/maladaptation, and it was at this point that psychologists from the Galtonian tradition of individual differences and mental testing arrived on the scene. In fact, it could easily seem that no one was better equipped than those researchers to perform the task set by the mental hygiene movement and to provide the latter’s ideology with a technical background. At roughly the same time, i.e., after World War I, mental testing confined to cognitive abilities or intelligence turned out to be insufficient, not only as a means of occupational selection but also in its originally intended application as a predictor of school success. In effect, there was an increasing recognition of the need for measurement techniques for non-intellectual mental qualities.

And such techniques were indeed soon developed, using the very same methodological assumptions that had previously been applied to cognitive abilities. Paper-and-pencil questionnaires measuring non-cognitive individual differences “began to proliferate” (Danziger, 1990, p. 159). Simultaneously, a new field of psychological investigation, “something that could sail under the flag of science” (p. 163), began to emerge. Only one more thing was lacking: a label, a name for the new sub-discipline and its subject matter.

The “shortlisted” candidates included the notions of temperament, character, and personality. The first was rejected due to its then-current associations with physiological reductionism. Why not “character,” then? Well, that notion, in turn, was considered inappropriate due to its association with the concept of will, an “anathema to scientifically minded American psychologists” (Danziger, 1997, p. 126), and its generally normative connotations. The third candidate, “personality,” as a result, came to the fore.

Not only was it devoid of an unwelcome moralistic background and already popularized by the mental hygiene movement, it also offered a realistic prospect of quantitative empirical research. Already adopted by scientific medicine and understood along the lines of Ribot as an “associated whole” (un tout de coalition) of the variety of forces, personality, rather than holistic character, was a much more promising object for the post-Galtonian methodology (Danziger, 1997, p. 127; cf. Brinkmann, 2010, p. 74). Soon, the newly emerging field “developed loftier ambitions” (Danziger, 1997, p. 128) and became a well-established part of academic psychology4 with its flagship project of discovering basic, independent, and universal personality-related qualities: the traits. And it is actually this tradition that is more or less continued today, with the Big Five model being a default perspective.

Note: I would add that the moralistic social “tradition” did not disappear from “personality theory” – psychology remains a socio-religious, “prescriptive and rigid” conception of human behavior, despite the effort to construct “something that could sail under the flag of science.”

For the establishment of personality rather than character as a subject matter of the new psychological science, Gordon W. Allport’s importance can hardly be overestimated (Allport, 1921, 1927; cf. Nicholson, 1998). Following an earlier proposal by John B. Watson, Allport drew an explicit distinction between normatively neutral personality, “personality devaluated,” and character as “personality evaluated” (Allport, 1927, p. 285). Personality and character, crucially, were regarded by him as conceptually independent. The former, in particular, could be intelligibly grasped without reference to the latter: “There are no ‘moral traits’ until trends in personality are evaluated” (p. 285). Accordingly, evaluation was considered additional and merely accidental. As such it was regarded as both relative and connected with inevitable uncertainty (for the cultural background and the metaethical position of emotivism lying behind such an approach see MacIntyre, 2013).5

The point which is crucial here is that the recognition of the normative element of the character concept led to its virtual banishment. While listing “basic requirements in procedures for investigating personality,” Allport was quite explicit in enumerating “the exclusion of objective evaluation (character judgments) from purely psychological method” (1927, p. 292). Those psychologists who accept his perspective “have no right, strictly speaking, to include character study in the province of psychology” (Allport, 1921, p. 443).6

The transition from the notion of character to that of personality was a very complex process which reflected some substantial changes in cultural and social milieu. Some insightful remarks about the nature of the latter have been provided by Brinkmann’s (2010) account of the shift between the premodern “culture of character” and essentially modern “culture of personality.” This shift, importantly, was not only a “linguistic trifle.” Rather, it was strictly associated with “the development of a new kind of American self” (Nicholson, 1998, p. 52).

A culture of character, to begin with, was essentially connected with moral and religious perspectives, which provided the specification of human télos. And it was in relation to the latter that the pursuit of moral character was placed. In the paradigmatic Aristotelian account, for instance, the notion of the virtuous character was essentially functional in the same way in which the concept of a good watch is (MacIntyre, 2013). The criteria of success and failure, accordingly, were defined in terms of one’s ability to perform the natural functions of the human creature. And the latter were not “something for individuals to subjectively decide” (Brinkmann, 2010, p. 70). Rather, they were predetermined by a broader cosmic order of naturalistic or theological bent.

The goal of adjusting one’s character to suit the requirements of human nature was institutionalized in social practices of moral education and character formation. According to Brinkmann, it was especially moral treatment or moral therapy that embodied the default approach “to the formation and correction of human subjects” (2010, p. 71). This endeavor was subsequently carried on in the very same spirit, though in an essentially different cultural milieu, by William Tuke and Philippe Pinel, and it was no earlier than with Sigmund Freud that a new form of therapy, properly modern and deprived of an explicit normative background, emerged.

Note: And yet, in American psychology, it is precisely this “imaginary normal” that continues to be the default assumption against which pathology and defect are assigned.

The ancient virtue ethical approach embodied in a culture of character was carried over into the Middle Ages, with an emphasis shifted considerably towards theological accounts of human goals. A thoroughly new perspective, proper to a culture of personality, appeared much later with the scientific revolution, which seriously undermined the belief in an objective normative order. The earlier cosmic frameworks started to be supplanted by psychological perspectives, with romanticism and modernism being, according to Brinkmann (2010, p. 72), the two forces behind them.

One of the main threads running through romanticism is the idea that “each human being has a unique personality that must be expressed as fully as possible” (Brinkmann, 2010, p. 73). Before romanticism, the final purpose had been specified in terms external to a particular individual: it was related to generic norms of humanity as such, or to those determined by God. (Today, “generic norms” are determined by a “new” God: the psych industry.) Now the goal to be pursued started to be understood as properly individual and unique.

Note: I don’t think that Americans understand how pervasively “the shift” away from the individual as “a unique personality that must be expressed as fully as possible,” and toward a totalitarian demand for conformity dictated by a “new religious” tide of psycho-social tyranny, was accomplished in a few decades. It is not surprising that Liberalism is every bit as religious as the Christian Right in its goal to “restore” the extreme religious aims (and hatred of humanity) of Colonial America; a continuation of the religious wars that raged in Europe for centuries.

This difference is evident when one compares Augustine’s and Rousseau’s confessional writings. The former “tells the story of a man’s journey towards God,” whereas the latter “is about a man’s journey towards himself, towards an expression of his own personality” (Brinkmann, 2010, p. 73). (Not allowed anymore!)

The demand for the “journey towards himself” can be connected with a disenchantment of the world, which had left an individual in a universe devoid of meaning and value. If not discovered in the world, the latter needed to be invented by humans. One had to “turn inwards” in order to find the purpose of life and this entailed, significantly, the rejection of external and social forces as potentially corrupting the genuine inborn self. The idea of “an individual in relative isolation from larger social and cosmological contexts” began to prosper and it “paved the way for the modern preoccupation with personality” (Brinkmann, 2010, pp. 67, 73) defined in fully atomistic or non-relational terms.

The second major force behind a culture of personality was modernism, which, in alliance with the modern idea of science, entailed an “ambition of knowing, measuring [emphasis added], and possibly improving [emphasis added] the properties of individuals” (Brinkmann, 2010, p. 73), which proved to have a considerable bearing on the newly emerging notion of personality. The latter concept had been deeply influenced by the logic of standardization and quantification characteristic of the whole of modernity; not only of its industry, but also of education, bureaucracy, and the prevailing ways of thinking. This logic found its direct counterpart in trait-based thinking about personality with the idea that the latter can “be measured with reference to fixed parameters” and that individuals “vary within the parameters, but the units of measurement are universal” (Brinkmann, 2010, p. 75). (This assumption that “opinions that arise from a social agenda” can be quantified is disastrous.)

The romantic and modernist branches of a culture of personality, for all their differences of emphasis, were connected by a common atomistic account of the self and a plea for the development of the unique qualities of the individual. And it is this “core element” of their influence which is still in place today,8 even though some authors, Brinkmann included, have announced the appearance of a new cultural formation, a culture of identity.

The character–personality distinction

The relationship between the two notions in question can be elucidated by, first, indicating their common features (genus proximum) and, then, specifying the ways in which they differ from each other (differentia specifica). As far as the former is concerned, both “character” and “personality” can be regarded as constructs belonging to the discourse of individual differences.9 Both notions are analyzable, even if not reductively analyzable, in terms of lower-level terms such as virtues and vices or, respectively, traits. These lower-level concepts are usually understood as dispositional. A personality trait, for instance, can be defined as a “disposition to form beliefs and/or desires of a certain sort and (in many cases) to act in a certain way, when in conditions relevant to that disposition” (Miller, 2013a, p. 6). The higher-level notions of character and personality, accordingly, are also dispositional.

The formal features indicated above are common to the notions of character and personality.10 And it is on the basis of this “common denominator” that one can attempt to clarify the difference between them. A good place to begin is a brief remark made by Goldie (2004), who claimed that “character traits are, in some sense, deeper than personality traits, and … are concerned with a person’s moral worth” (p. 27). It is the dimension of depth and morality, then, which can provide one with a useful clue. (Note that both “traits” and moral rules are subjective, culturally defined and NOT quantifiable objects: that is, this remains a religious discussion.)

As far as the depth of the notion of character is concerned, the concept of personality is often associated with a considerable superficiality and the shallowness of mere appearances (Goldie, 2004, pp. 4–5; Kristjansson, 2010, p. 27). The fact that people care about character, accordingly, is often connected with their attempt to go beyond the “surface,” beyond “the mask or veneer of mere personality” (Goldie, 2004, p. 50; cf. Gaita, 1998, pp. 101–102).11 Even the very etymology of the term “personality” suggests superficiality by its relation to the Latin concept of persona: “a mask of the kind that used to be worn by actors.” Character as deeper “emerges when the mask is removed” (Goldie, 2004, p. 13; cf. the Jungian meaning of persona).

The reference to the depth of character, as helpful as it may be, is certainly insufficient due to its purely formal nature. What still remains to be determined is the substantive issue of the dimension on which character is deeper than personality. As far as Goldie’s distinction is concerned, such a specification is provided in what follows: “someone’s personality traits are only good [emphasis added] conditionally upon that person also having good character traits … On the other hand, the converse isn’t true: the goodness [emphasis added] of someone’s character trait is not good [emphasis added] conditionally on his having good personality traits” (2004, p. 32). It is depth along the ethical dimension, then, which distinguishes character from personality.12 One’s virtue of honesty, for instance, can still be valued even if the person in question is extremely shy (an introvert, as the psychologist would say). (Both introversion and “honesty” are labeled symptoms of “developmental disorder” in the ASD / Asperger diagnosis.)

It does not work the other way around, though. An outgoing and charming personality, when connected with considerably bad character, is in a sense polluted. A criminal who is charming can be even more dangerous, because he/she can use the charm for wicked purposes.13 Such a difference, importantly, should not be taken as implying that personality cannot be evaluated at all. It can, with the reservation that such an evaluation will be made in terms of non-moral criteria or preferences. An extraverted person, for instance, can still be considered a “better” or more preferable candidate for the position of talk show host (cf. Goldie, 2004, p. 47; McKinnon, 1999, pp. 61–62).

The above-given specification of the distinction can be enriched by some remarks by Gill (1983, p. 470), who notices that “character” and “personality” are not only distinguishable as two concepts but also as “two perspectives on human psychology” for which they are, respectively, central. The character-viewpoint, to begin with, “presents the world as one of … performers of deliberate actions” (Gill, 1986, p. 271). Human individuals, in particular, are considered as more or less rational and internally consistent moral agents possessing stable dispositions (virtues and vices) and performing actions which are susceptible to moral evaluation and responsibility ascription. The evaluation of their acts, importantly, is believed to be objective: to be made along the lines of some definite “human or divine standards” (p. 271). No “special account,” accordingly, is taken “of the particular point of view or perspective of the individuals concerned” (Gill, 1990, p. 4).

The personality-viewpoint, on the other hand, is not associated with any explicitly normative framework. Rather, it is colored by “the sense that we see things ‘as they really are’ … and people, as they really are” (Gill, 1986, p. 271). The purposes are psychological, rather than evaluative: to understand, empathize with, or to explain. The default view of the individuals in question also shifts considerably. Their personality is recognized as being “of interest in its own right” (Gill, 1983, p. 472) and their agency as considerably weakened: “The person is not typically regarded as a self-determining agent,” but rather as a “relatively passive” (p. 471) individual, often at the mercy of forces acting beyond conscious choice and intention. The unpredictability and irrationality entailed by such a view is substantial.

To sum up the points made above, it may be said that while both “character” and “personality” belong to the discourse of individual differences, only the former is involved in the normative discourse of a person’s moral worth and responsibility. The thesis that the notion of character, but not that of personality, belongs to the discourse of responsibility should be taken here as conceptual. What is claimed, in particular, is that linguistic schemes involving the former notion usually involve the notion of responsibility as well and allow us to meaningfully hold somebody responsible for his/her character. Language games involving both concepts, in other words, make it a permissible, and actually quite common, “move.” Whether and, if yes, under what circumstances such a “move” is metaphysically and ethically justified is a logically separate issue, which won’t be addressed here.

In those accounts in which the connection between character and responsibility is considered stronger, i.e., as making responsibility claims not only conceptually possible but also justified, a separate account of responsibility is needed (e.g., Miller, 2013a, p. 13). One possible ground on which such an account can be developed is the relationship between character and reasons (as opposed to mere causes). Goldie (2004), for instance, emphasizes the reason-responsiveness of character traits: the fact that they are dispositions “to respond to certain kind of reasons” (p. 43). Actually, he even defines a virtue as “a trait that is reliably responsive to good reasons, to reasons that reveal values” (p. 43, emphasis removed; cf. the definition by Miller, 2013b, p. 24). A vice, accordingly, would be a disposition responsive to bad reasons.

Whether all personality traits are devoid of reason-responsiveness is not altogether clear (cf. Goldie, 2004, p. 13). For the notion of personality proper to academic psychology the answer would probably depend on a particular theoretical model employed. There would be a substantial difference, for instance, between personality understood, along the behavioristic lines, as a disposition to behavior and more full-fledged accounts allowing emotional and, especially, cognitive dispositions. What seems to be clear is the importance of reason-responsiveness for character traits.

The fact–value distinction is usually derived from some remarks in David Hume’s (1738/2014, p. 302) Treatise of Human Nature, in which the idea of the logical distinctiveness of the language of description (is) and that of evaluation (ought) was expressed. A relatively concise passage by Hume soon became very influential and gave birth not only to a distinction, but actually to a strict dichotomy between facts and values (cf. Putnam, 2002). A methodological prescription “that no valid argument can move from entirely factual premises to any moral or evaluative conclusion” (MacIntyre, 2013, p. 67) was its direct consequence.

In order to refer the above dichotomy to the notions of character and personality, it may be helpful to remember Allport’s (1921) idea of character being “the personality evaluated according to prevailing standards of conduct” (p. 443). A crucial point to be made here is that the act of evaluation is considered as an addition of a new element to an earlier phenomenon of personality, which can be comprehended without any reference to normativeness. The latter notion, in other words, is itself morally neutral: “There are no ‘moral traits’ until trends in personality are evaluated” (Allport, 1927, p. 285).

The thesis that personality can be specified independently of character or more generally, without any application of normative terms, is of considerable importance because it illustrates the fact that the character–personality distinction logically implies the fact–value one. The validity and the strictness of the former, in consequence, rely on the same features of the latter. Character and personality, in brief, can be separated only as long as it is possible to isolate personality-related facts from character-related values.

Such dependence must necessarily be viewed in light of contemporary criticism of the fact–value distinction (e.g., MacIntyre, 2013; Putnam, 2002; cf. Brinkmann, 2005, 2009; Davydova & Sharrock, 2003). This criticism has been voiced from different perspectives and involves at least several logically distinct claims. For the present purposes, however, it is an argument appealing to so-called thick ethical concepts14 and the fact–value entanglement that is of most direct significance.

The distinction between thick and thin ethical concepts was first introduced (in writing) by Bernard Williams (1985/2006)15 and subsequently subjected to intense discussion (for useful introductions see Kirchin, 2013; Roberts, 2013; applications for moral psychology can be found in Fitzgerald & Goldie, 2012). What is common to both kinds of concepts is that they are evaluative: they “indicate some pro or con evaluation” (Kirchin, 2013, p. 5). Thick concepts, furthermore, are supposed to provide some information about the object to which they refer (information, which thin concepts do not provide). They have, in other words, “both evaluative conceptual content … and descriptive conceptual content … are both evaluative and descriptive” (Kirchin, 2013, pp. 1–2). If I inform somebody, for instance, that person A is good and person B is courageous, it is obvious that my evaluation of both A and B is positive. At the same time, however, the person informed doesn’t seem to know much about a good (thin concept) person A, whereas he/she knows quite a bit about a courageous (thick concept) person B.

The significance of thick concepts for philosophical discussion is usually connected with the “various distinctive powers” they supposedly possess. More specifically, when they are interpreted along the lines of the so-called non-reductive view they seem to have “the power to undermine the distinction between fact and value” (Roberts, 2013, p. 677).16 The non-reductive position is usually introduced as a criticism of the reductive idea that thick concepts “can be split into separable and independently intelligible elements” (Kirchin, 2013, p. 8; cf. the idea of dividing character into two parts mentioned above) or, more specifically, explained away as a combination of (supposedly pure) description and thin evaluation. If such a reduction were successful, thick concepts would turn out to be derivative and lacking philosophical importance.

Many authors, however, including notably Williams (1985/2006), McDowell (1981), and Putnam (2002), claim that no such reductive analysis can be conducted due to the fact–value entanglement characteristic of thick concepts. The latter, as is argued, are not only simultaneously descriptive and evaluative, but also “seem to express a union of fact and value” (Williams, 1985/2006, p. 129). The fact–value entanglement proper to thick concepts becomes apparent if one realizes that any attempt to provide a set of purely descriptive rules governing their application seems to be a hopeless endeavor. One cannot, for instance, develop a list of necessary and jointly sufficient factual criteria of cruelty.17 It is obviously possible “to describe the pure physical movements of a torturer without including the moral qualities” (Brinkmann, 2005, p. 759), but it would yield a specification which comes dangerously close to the description of some, especially unsuccessful, surgical operations. In order to convey the meaning of the word “cruelty” (and to differentiate it from the phrase “pain-inflicting”) one needs to refer to values and reasons (rather than facts and causes only). An evaluative perspective from which particular actions are recognized as cruel, accordingly, must be at least imaginatively taken in order to grasp the rationale for applying the term in some cases, but not in others. Communication using thick concepts, as a result, turns out to be value-laden through and through.

The above-given features assigned to thick concepts by the non-reductionists are crucial due to the fact that they cannot be accounted for within the framework of the fact–value distinction. As such they are often believed to “wreak havoc” (Roberts, 2013, p. 678) with the latter or, more precisely, to undermine “the whole idea of an omnipresent and all-important gulf between value judgments and so-called statements of fact” (Putnam, 2002, p. 8).

The undermining of the sharp and universal dichotomy between facts and values has a very direct bearing on the character–personality distinction being, as emphasized above, dependent on the former. A crucial point has been made by Brinkmann who noticed that almost “all our words used to describe human action are thick ethical concepts” (2005, p. 759; cf. Fitzgerald & Goldie, 2012, p. 220). And the same applies to the language of character which, contrary to Allport’s expectations, cannot be neatly separated into the factual core of personality and the normative addition. The distinction between the notions of character and personality, in consequence, even though often applicable and helpful, cannot be inflated into a sharp dichotomy.

Having analyzed the reliance of the character–personality distinction on the fact–value dichotomy, it is now possible to carry out the second detailed investigation, devoted to the functions served by the two concepts under scrutiny. A good starting point for this exploration may be a remark made by Goldie (2004) who, while discussing the omnipresence of the discourse of personality and character, noticed that it is “everywhere largely because it serves a purpose: or rather, because it serves several purposes [emphasis added]” (p. 3). These functions merit some closer attention because they can help to further specify the difference between the concepts investigated.

The purposes served by the discourse of individual differences have been briefly summarized by the abovementioned author when he said that we use it “to describe people, to judge them, to enable us to predict what they will think, feel and do, and to enable us to explain their thoughts, feelings and actions” (and to control, manipulate and abuse them) (Goldie, 2004, pp. 3–4; cf. an analogous list provided by Miller, 2013b, pp. 12–13). Some of these functions are common to the notions of character and personality. Some others, however, are proper to the concept of character only.

The first of the common functions is description. The language of character and personality can serve as a kind of shorthand for the longer accounts of the actions taken. When asked about the performance of a new employee, for instance, a shift manager can say that he/she is certainly efficient and hard-working (rather than mention all particular tasks that have been handled). Similarly, if we say that A is neurotic, B is extraverted, C is just, and D is cruel, we do convey some non-trivial information about A, B, C, and D, respectively (even though our utterances may include something more than a mere description).

The second of the purposes that can be served by both concepts is prediction. We may anticipate, for example, that neurotic A will experience anxiety in new social situations. Despite the fact that such a prediction will be inevitably imprecise and fallible, it does enable us to narrow down “the range of possible choices and actions” (Goldie, 2004, p. 67) we can expect from a particular agent.

(In fact, predictions regarding human behavior are notoriously inaccurate “guesses” – note the inability of the Psych Industry to identify mass shooters before they act.)

The notions of character and personality, furthermore, can be employed as a means of judgment. At this point, however, an important qualification needs to be made. If this function is to be assigned to both concepts it can be understood only in a weak sense of judging as providing an instrumental assessment. The ascription of personality traits of neuroticism and extraversion to A and B, respectively, can be used to say that A would not make a good candidate for an assertiveness coach, whereas B may deserve a try in team work. It falls short, however, of any moral judgment, which can be made only by means of character-related notions.

The concepts of personality and character, finally, can both be used to provide explanation. We can, for instance, say that C was chosen as a team leader because he/she is just and expected to deal fairly with potential conflicts. Having assigned an explanatory role to “character” and “personality,” however, one should necessarily remain fully aware of the experimental results reported in the first section. An appeal to “character” and “personality” as explanatory constructs does not have to mean that they provide the whole explanation. Situational factors still count and, as a matter of fact, one may need to acknowledge that in a “great many cases … [they] will be nearly the entire explanation” (Kupperman, 1991, p. 59).

One other reservation concerns the kind of explanation conveyed by the personality- or character-trait ascription. Human behavior, in particular, can be explained in at least two distinct ways (e.g., Gill, 1990, p. 4). Explanation, to begin with, can be made along the lines of the deductive-nomological model and refer to causes and natural laws. In such cases it is not substantially different from explanations of natural facts (like an earthquake) offered by the sciences. And it is this kind of explanation that is provided when non-reason-responsive features of personality are appealed to (cf. Goldie, 2004, p. 66).

Human action, however, can be also made comprehensible by the reference to reasons behind it. If we know what a person in question “values or cares for,” in particular, we can “make sense [emphasis added] of the action, or make the action intelligible, understandable or rational [emphasis added]” (Goldie, 2004, p. 65). Such an “explanation” can be given by the indication of only those traits, which are reason-responsive and, strictly speaking, is much closer to Dilthey’s (1894/2010) understanding (Verstehen) than to naturalistically understood explanation (Erklären).

The functions of description, prediction, instrumental assessment, and explanation (at least as far as the latter is understood in terms of causes) are common to both concepts of “personality” and “character.” The latter notion, however, can serve some additional purposes, which give it a kind of functional autonomy. Among the character-specific functions, to begin with, there is moral judgment. When we say that C is just and D is cruel we don’t make an instrumental and task-relative assessment. Rather, we simply evaluate that C is a morally better person than D (other things being equal). With this, the function of imposing moral responsibility is often connected. The issue of the validity of such an imposition is very complex and controversial. Still, it does remain a discursive fact that the claim that D is cruel is usually associated with holding D, at least to some extent, responsible for his/her cruelty.

Note that “pathological,” “disordered,” “mentally ill,” and “socially defective” are labels every bit as moral and judgmental in the “real social environment” as “sinful,” “perverted,” “possessed by demons,” “Godless atheist,” or “agent of Satan.”

The functions of moral judgment and moral responsibility ascription are not typically served by the scientific notion of personality. They may, however, become formally similar to description, explanation, and prediction if they are, as is often the case, applied within mostly third-personal language (as judging others and imposing responsibility on others). Apart from these functions, however, the notion of character can fulfill some essentially first-personal kind of purposes. And it is the latter that seems to be its most specific feature.

Among the first-personal functions of “character,” identification is fundamental, both psychologically and conceptually. When a person identifies with a character trait or, more holistically, with a complete character ideal, she begins to consider such a trait or character as a part of her identity (cf. Goldie, 2004, pp. 69–70): as something she “decides to be or, at least, to see herself as being” (Kupperman, 1991, p. 50). Such an identification, if serious, is very rich in consequences: it establishes “the experienced structure of the world of direct experience as a field of reasons, demands, invitations, threats, promises, opportunities, and so on” (Webber, 2013, p. 240) and helps one to achieve a narrative unity of one’s life (cf. Goldie, 2004; Kupperman, 1991; McKinnon, 1999).

First-personal functions of the character notion, additionally, enable the agent to undertake more specific self-formative acts such as evaluating oneself against the idealized self, structuring moral progress, or providing motivation needed to cope with the difficulties of moral development. The notion of character employed in such a way becomes a kind of an internalized regulative ideal with a considerable emotional, imaginative, and narrative dimension. Its specific purposes are self-evaluative, self-prescriptive, and self-creative (rather than descriptive, predictive, and explanatory). The criteria of its assessment, accordingly, should be at least partially independent from those proper to strictly scientific constructs.

The latter fact, as may be worthwhile to mention, has a direct bearing on the challenge of situationism mentioned at the beginning of these analyses. The arguments in favor of this disquieting position have typically referred to experiments indicating that situational variables possess much greater explanatory and predictive value than those related to personality, and concluded that the usefulness of the personality concept needs to be seriously questioned. The doubts concerning the notion of character usually followed without further ado. No special attention, in particular, was paid to the assumption that the concepts of character and personality fulfill the same functions of description, explanation, and prediction. Accordingly, it was usually taken for granted that the failure of the latter concept automatically entails the uselessness of the former.18 To the extent that such an approach is admitted to be at least partially erroneous, it may be worthwhile to refocus the debate on the specific, first-personal, and normative functions of the notion of character. Do we need the latter to perform them and, if so, does this notion really serve us well, even though it is scientifically weak?

Some final remarks

An important clarification that needs to be made here, however, is that any skepticism concerning the fact–value dichotomy suggested by some features of thick concepts should not be conceived by psychologists as a call to develop a prescriptive and moralistic science of character and, thus, to become “like priests” (too late: this is where American Psychology stands today) (Charland, 2008, p. 16). A false impression that this is the case might result from conflating the full-fledged version of the fact–value distinction with the original, and relatively modest, Humean dictum that “no valid argument can move from entirely factual premises to any moral or evaluative conclusion” (MacIntyre, 2013, p. 67).19

That it is the latter that most psychologists care about can be clearly seen in two recent papers by Kendler (1999, 2002), who issues a stern warning that any psychological project developed along the lines of what he calls “the enchanted science”20 and motivated by the belief that psychology itself can discover moral truths can lead not only to Gestalt psychology’s holism or humanistic psychology, but also to the quasi-scientific justification of “Nazi and Communist ideology” (1999, p. 828). And it is in order to prevent these kinds of abuses that Kendler (1999) refers to what he calls “the fact/value dichotomy” or “an unbridgeable chasm between fact and values” (p. 829). By this, however, he does not seem to mean anything more than that “empirical evidence can validate factual truth but not moral truth” (p. 829). An example he provides concerns the possibility of obtaining reliable empirical data supporting the thesis that bilingual education is advantageous for ethnic identification, but disadvantageous for academic development. Such data, as he rightly insists, would still leave it to society to decide which value, ethnic identification or academic progress, should be given priority.

All of this, however, need not lead one to accept the fact–value dichotomy in the strong version that has been criticized by Putnam, McDowell, and others. Rather, the is–ought dichotomy seems to be sufficient. The subtle differences between these two distinctions have been clarified by Dodd and Stern-Gillet (1995), who argue that the Humean dictum is best understood as a general logical principle without any substantive metaphysical dimension of the kind usually connected with the fact–value dichotomy. That the is–ought gap is narrower and weaker is also illustrated by the fact that it is confined to “ought” statements, with a considerable range of other evaluative statements left aside. The examples provided by the authors are the aesthetic language of art and, importantly, the virtue ethical discourse of character. And as the ascription of beauty to a painting does not automatically entail any particular prescription,21 neither does the assignment of courage or foolishness to a person. Even though such a feature of characterological language has often been conceived as a weakness within metaethical contexts, it can arguably be beneficial to all those psychologists who want to study the complexities of character without giving the impression that any particular normative position can be derived from purely scientific study. A substantial amount of normativity, as shown by the example of thick concepts, will obviously remain inevitable, but it is certainly worth emphasizing that it is mostly placed before empirical research, as an evaluative framework taken from elsewhere and, thus, subject to criteria and authorities of a non-empirical nature.

This paper has been written during a visit to Oxford University’s Faculty of Philosophy. I am greatly indebted to Edward Harcourt for all his help and support.

Accidental beliefs / Where were you born?

Most people don’t choose their beliefs; their beliefs are culturally inherited. 

SEE ALSO: “Religious States of America, in 22 maps” 

https://www.washingtonpost.com/blogs/govbeat/wp/2015/02/26/the-religious-states-of-america-in-22-maps/?utm_term=.d454eebb7f71

A Decisive Moment for an Asperger Child / Re-Post

My cousin Bette hated her hair because it was so curly that she shrieked and whimpered whenever my aunt yanked a comb through it. My mother loved Bette’s red hair, but regretted my fence-straight bob. The tone of voice she used when referring to my straight hair was an accusation – I made it grow that way.

The hair situation had nothing to do with an important event that happened during a visit to my mother’s sister in Pennsylvania, which happened to coincide with Vacation Bible School. I don’t recall the denomination my relatives supported (there are so many), but the audience didn’t stand, kneel, or sing much. Instead of real wine, grape juice was passed around in paper cups with a tray of white bread croutons.

This scandalized my mother. How could materials available at any grocery store be expected to turn into the blood and flesh of Jesus Christ? Before marrying, my mother had sung professionally in churches: based on those experiences, she had chosen to align our family with the Episcopalians, because not only the priests and acolytes got dressed up, so did the audience, and she still got to sing beautiful songs.

My mother (and the other Episcopalian women) took advantage of God’s demand that women wear hats to church to amass vast collections of seasonal head gear. Judging by the extravagant and expensive hats bobbing about in church, I suspected that it was mortal women who had actually made up the rule, not God.

“Wear the Donald Duck hat,” I would tell my mother whenever we were late for church and she couldn’t decide which hat to wear. The Donald Duck hat was woven from white straw with a blue bill that jutted out above her forehead.

Vacation Bible School had nothing to do with hats, and my attendance could not be prevented by a plea for exemption. Even humor failed. My mother had noticed a reluctant streak in her daughter whenever it came time to cooperate with formal institutions and she insisted that I join my cousin in one more attempt at forced religious indoctrination.

My red-haired cousin and I were dropped off outside the church, where we were seated at a picnic table with kids our age. Adults handed each of us a board covered with blue felt, plus pictures of Jesus and a few loose sheep. Paper cut-out Jesus had typical Sunday school eyes, the kind that look nowhere and everywhere, but which have the power to pry into the shallow secrets of the boring human brain. The sheep were suitably adorable and adoring.

The adults directed us to stick the paper figures to the felt board. No reason was given as to why we should do this. I looked to my cousin and the others, expecting one of them to ask the adults why we were doing this, but the rest were busy deciding whether Jesus should float above the flock near heaven, or have the sheep crowd around his temporarily earth-bound feet.

I tilted my board for a better look and a breeze caught the pictures. Jesus floated onto the grass. Cousin Bette screamed: “Look what you did! You let Jesus touch the ground!”

Another girl shrieked, “Pick him up. Quick!” as if the three-second rule applied to religious pictures as well as to gum.

“Stop shouting,” I told my cousin. “It’s just a piece of paper.”

“No-it-is-not! It’s Jesus, and you let him touch the ground: You are in big trouble!”

“God is gonna punish you,” the other girl gasped.

A feeling passed through me, as if I had been removed to a foreign universe, where simple pieces of paper are possessed by invisible beings and small girls are punished by tyrants for trifles.

Of course, at that age, I didn’t think this out, but I surely sensed what had just happened, and it had nothing to do with standing and kneeling; with the squabble over wafers and Wonder Bread, real wine or Welch’s grape juice, or with a rule that said women’s hair had to be covered with shame. Bette and the other children had been taught to fear imaginary entities and to believe that pieces of paper have supernatural power. Did adults lie to children, or did they really believe such things? The unease that had pestered me when adults spoke about ‘God things’ was sharpened into Ah-ha! focus.

My father hedged when I asked him for an explanation. His avoidance told me that his mind was not united in his approach to the world; the engineer in him wanted to confirm my suspicions of sheer puffery, but deep inside, a superstitious and primal fear haunts all people. Collusion in these matters is required by society regardless of personal belief.

A custom developed between us. “Well you know and I know, but keep it quiet around your mother.”

Cousin Bette was correct about being in big trouble, but not in the way she had imagined. Never again would I feel comfortable with people who let crazy ideas rule their minds. Although my questioning nature was sometimes rewarded in school, skepticism in matters of religion would need to be stifled in public, a Herculean task for an Asperger child. A tiny raft of reason and cunning that lay hidden in my brain would ever after have to support me on a journey that led away from my own kind.


We don’t really know children as individual expressions of the human experiment, because we do our best as a society to never let that person emerge.


Just what is the problem between Asperger types and Neurotypicals?

I’ve been posting for three years now on the bizarre insistence by neurotypicals that the very existence of Asperger types is an affront to “their species.” I’ve also tried to convey how the myriad ridiculous, destructive and irrational things that NTs “believe and do” drive us equally batty. The details of this stupid situation are mind-boggling and confounding, but there is one simple difference in motivation that lies at the bottom of all this “blah, blah.”

Neurotypicals do whatever makes them feel good; they will “believe in” whatever cruel and idiotic nonsense gives them permission to do whatever makes them feel good.  

Of course, 7 billion people doing / believing whatever makes them feel good inevitably creates conflict. It also makes solving problems impossible; the “non-solution” is the application of force and violence. The prime NT commandment is: “Destroy whoever doesn’t do or say what makes you feel good.”

This makes us avoid NTs, because their need to eradicate any and all opposition makes them dangerous.

Asperger types are interested in how the universe works, whether or not the “discovery” of how things work makes us feel good. Why? Because knowing how things work allows for making things better.

The result is that we contradict what NTs must be told (or else!), which is, “Yes, you’re right; the universe and everything in it exists to make you feel good. I am your slave.”


Shoddy Psychology Study Fails / The Reproducibility Project

From the Atlantic: Publishing shoddy psychology studies is a pervasive practice; rationalizing unscientific behavior as “not all that bad” is journalistic fraud. Ed Yong is a noted science writer, and I’m a bit shocked at his pandering to the “psychology industry.” Most alarming is the failure to recognize that failed psychological theories, which form the basis of diagnosis and treatment, have harmed, and continue to harm, REAL LIVE PEOPLE.

How Reliable Are Psychology Studies?

A new study shows that the field suffers from a reproducibility problem, but the extent of the issue is still hard to nail down.

  • Ed Yong
  • Aug 27, 2015, The Atlantic

No one is entirely clear on how Brian Nosek pulled it off, including Nosek himself. Over the last three years, the psychologist from the University of Virginia persuaded some 270 of his peers to channel their free time into repeating 100 published psychological experiments to see if they could get the same results a second time around. There would be no glory, no empirical eurekas, no breaking of fresh ground. Instead, this initiative—the Reproducibility Project—would be the first big systematic attempt to answer questions that have been vexing psychologists for years, if not decades. What proportion of results in their field are reliable? (If psychologists are so concerned, why have they been defending sloppy methods and “religious” premises for decades?)

A few signs hinted that the reliable proportion might be unnervingly small. Psychology has been recently rocked by several high-profile controversies, including: the publication of studies that documented impossible effects like precognition, failures to replicate the results of classic textbook experiments, and some prominent cases of outright fraud.

The causes of such problems have been well-documented. Like many sciences, psychology suffers from publication bias, where journals tend to only publish positive results (that is, those that confirm the researchers’ hypothesis), and negative results are left to linger in file drawers. On top of that, several questionable practices have become common, even accepted. A researcher might, for example, check to see if they had a statistically significant result before deciding whether to collect more data. Or they might only report the results of “successful” experiments. These acts, known colloquially as p-hacking, are attempts to torture positive results out of ambiguous data. They may be done innocuously, but they flood the literature with snazzy but ultimately false “discoveries.” (Innocuously? How does one cook the books without being aware that one is doing so?)
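
If you want to see how “innocuous” this peeking is, here is a back-of-the-envelope simulation – my own sketch, not anything from the article, with invented batch sizes, using Python with numpy and scipy. Both groups are drawn from the same distribution, so there is nothing to find; checking the p-value after every batch and stopping the moment it dips below 0.05 still manufactures “discoveries” at several times the advertised 1-in-20 rate:

```python
# A sketch of "optional stopping" (my illustration; batch sizes invented):
# test after every batch of data, and quit as soon as p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments = 2_000
batch, max_n = 10, 100            # hypothetical batch size and cap per group
hacked_hits = 0

for _ in range(n_experiments):
    a, b = [], []
    while len(a) < max_n:
        a.extend(rng.normal(0, 1, batch))   # both groups come from the
        b.extend(rng.normal(0, 1, batch))   # SAME distribution: no effect
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:                        # peek at the result...
            hacked_hits += 1                # ...and stop while it "works"
            break

# Honest fixed-sample testing would yield ~5%; peeking inflates it badly.
print(f"False positives with optional stopping: {hacked_hits / n_experiments:.1%}")
```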

In the last few years, psychologists have become increasingly aware of, and unsettled by, these problems. Some have created an informal movement to draw attention to the “reproducibility crisis” that threatens the credibility of their field. Others have argued that no such crisis exists, and accused critics of being second-stringers and bullies (here come the social excuses), and of favoring joyless grousing over important science. In the midst of this often acrimonious debate, Nosek has always been a level-headed figure, who gained the respect of both sides. As such, the results of the Reproducibility Project, published today in Science, have been hotly anticipated. (We cannot assume that Nosek is unbiased)

They make for grim reading. Although 97 percent of the 100 studies originally reported statistically significant results, just 36 percent of the replications did.

Does this mean that only a third of psychology results are “true”? Not quite. A result is typically said to be statistically significant if its p-value is less than 0.05—briefly, this means that if there were no real effect, the odds of chance alone producing results at least as strong as yours would be less than 1 in 20. This creates a sharp cut-off at an arbitrary (some would say meaningless) threshold, in which an experiment that skirts over the 0.05 benchmark is somehow magically more “successful” than one that just fails to meet it. (Apply math to garbage – you get garbage.)
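
Where does “1 in 20” come from? A minimal simulation (mine, not the article’s; the sample size is arbitrary): run thousands of experiments in which the null hypothesis is true by construction, and about 5 percent will clear the 0.05 bar anyway – every one of them a certified-“significant” piece of garbage:

```python
# A minimal sketch (my own; sample size arbitrary): when there is truly
# nothing to find, ~1 in 20 experiments still cross the p < 0.05 line.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_subjects = 10_000, 30
false_positives = 0

for _ in range(n_experiments):
    group_a = rng.normal(0.0, 1.0, n_subjects)   # identical populations:
    group_b = rng.normal(0.0, 1.0, n_subjects)   # any "effect" is a fluke
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

print(f"'Significant' null results: {false_positives / n_experiments:.1%}")  # ~5%
```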

So Nosek’s team looked beyond statistical significance. They also considered the effect sizes of the studies. These measure the strength of a phenomenon; if your experiment shows that red lights make people angry, the effect size tells you how much angrier they get. And again, the results were worrisome. On average, the effect sizes of the replications were half those of the originals.
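
For the record, one common effect-size measure is Cohen’s d: the difference between group means, divided by their pooled standard deviation. The helper below is my own illustration, and the numbers are invented to mimic the reported pattern of a replication effect half the size of the original:

```python
# An illustrative helper (mine): Cohen's d, a standard effect-size measure.
# It answers "how big?", where the p-value only answers "how surprising?".
import numpy as np

def cohens_d(group_a, group_b):
    """Difference of means in units of the pooled standard deviation."""
    a, b = np.asarray(group_a), np.asarray(group_b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

# Invented numbers: an "original" true effect of 0.8, a replication of 0.4.
rng = np.random.default_rng(2)
d_original    = cohens_d(rng.normal(0.8, 1, 40), rng.normal(0, 1, 40))
d_replication = cohens_d(rng.normal(0.4, 1, 40), rng.normal(0, 1, 40))
print(f"original d = {d_original:.2f}, replication d = {d_replication:.2f}")
```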

“The success rate is lower than I would have thought,” says John Ioannidis from Stanford University, whose classic theoretical paper Why Most Published Research Findings are False has been a lightning rod for the reproducibility movement. “I feel bad to see that some of my predictions have been validated. I wish they’d been proven wrong.” This is a social statement; a “white lie.” 

Nosek, a self-described “congenital optimist,” is less upset. The results aren’t great, but he takes them as a sign that psychologists are leading the way in tackling these problems. “It has been a fantastic experience, all this common energy around a very specific goal,” he says. “The collaborators all contributed their time to the project knowing that they wouldn’t get any credit for being 253rd author.” Another social statement; not about the problem, but “how fun” it was – and how “socially tuned to reward” the participants are.

There are many reasons why two attempts to run the same experiment might produce different results. (Let’s rationalize – soften, undo, explain away – appalling institutional behavior)

Jason Mitchell from Harvard University, who has written critically about the replication movement, agrees. “The work is heroic,” he says. “The sheer number of people involved and the care with which it was carried out is just astonishing. This is an example of science working as it should in being very self-critical and questioning everything, especially its own assumptions, methods, and findings.” (Says nothing concrete: another social statement – ass-kissing)

But even though the project is historic in scope, its results are still hard to interpret. (REALLY?) Let’s say that only a third of studies are replicable. What does that mean? It seems low, but is it? “Science needs to involve taking risks and pushing frontiers, so even an optimal science will generate false positives,” says Sanjay Srivastava, an associate professor of psychology at the University of Oregon. “If 36 percent of replications are getting statistically significant results, it is not at all clear what that number should be.” (That is – IT’S ARBITRARY)

It is similarly hard to interpret failed replications. Consider the paper’s most controversial finding: that studies from cognitive psychology (which looks at attention, memory, learning, and the like) were twice as likely to replicate as those from social psychology (which looks at how people influence each other). “It was, for me, inconvenient,” says Nosek. “It encourages squabbling. Now you’ll get cognitive people saying ‘Social’s a problem’ and social psychologists saying, ‘You jerks!’” (That is, the results must be “socially acceptable” to the “psychology community” – no hurt feelings! Do proper science, and a lot of people are going to be unhappy.)

Nosek explains that the effect sizes from both disciplines declined with replication; it’s just that cognitive experiments find larger effects than social ones to begin with, because social psychologists wrestle with problems that are more sensitive to context.  (Especially when the “context” is imaginary, as we see in autism / Asperger studies) “How the eye works is probably very consistent across people but how people react to self-esteem threat will vary a lot,” says Nosek. Cognitive experiments also tend to test the same people under different conditions (a within-subject design) while social experiments tend to compare different people under different conditions (a between-subject design). Again, people vary so much that social-psychology experiments can struggle to find signals amid the noise. (No problem: Just make them up!)
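
The design difference is easy to demonstrate. In the sketch below (mine; every parameter is invented), the same small true effect is obvious in a within-subject comparison, where each person serves as his or her own baseline, and drowns in person-to-person noise in a between-subject one:

```python
# My own sketch (all parameters invented): a paired, within-subject design
# cancels out large individual differences that swamp a between-subject one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, true_effect = 30, 0.5
baselines = rng.normal(0, 5, n)         # big person-to-person differences

# Between-subject: different people in each condition.
control   = rng.normal(0, 5, n) + rng.normal(0, 0.5, n)
treatment = rng.normal(0, 5, n) + true_effect + rng.normal(0, 0.5, n)
_, p_between = stats.ttest_ind(control, treatment)

# Within-subject: the same people measured twice; baselines subtract out.
before = baselines + rng.normal(0, 0.5, n)
after  = baselines + true_effect + rng.normal(0, 0.5, n)
_, p_within = stats.ttest_rel(before, after)

print(f"between-subject p = {p_between:.3f}; within-subject p = {p_within:.3f}")
```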

More generally, failed replications don’t discredit the original studies, any more than successful ones enshrine them as truth. There are many reasons why two attempts to run the same experiment might produce different results. There’s random chance. The original might be flawed. So might the replication. There could be subtle differences in the people who volunteered for both experiments, or the way in which those experiments were done. And, to be blunt, the replicating team might simply lack nous or technical skill to pull off the original experiments.

Indeed, Jason Mitchell wonders how good the Reproducibility Project’s consortium would be at replicating well-known phenomena, like the Stroop effect (people take longer to name the color of a word if it is printed in mismatching ink) or the endowment effect (people place more value on things they own). “Would it be better than 36 percent or worse? We don’t know and that’s the problem,” he says. “We can’t interpret whether 36 percent is good, bad, or right on the money.”

The very notion that there is a “correct” percentage of reproducible studies is so UNSCIENTIFIC that it reveals the lack of science-based activity in psychology: this belief renders the entire field “superstitious.” A study is reproducible or it isn’t: to believe that some number of “reproducible” studies “justifies” what you are doing is utter nonsense.

Mitchell also worries that the kind of researchers who are drawn to this kind of project may be biased towards “disproving” the original findings. How could you tell if they are “unconsciously sabotaging their own replication efforts to bring about the (negative) result they prefer?” he asks. (Another social statement.)

In several ways, according to Nosek. Most of the replicators worked with the scientists behind the original studies, who provided materials, advice, and support—only 3 out of 100 refused to help. (This proves nothing) The teams pre-registered their plans—that is, they decided on every detail of their methods and analyses beforehand to remove the possibility of p-hacking. Nosek also stopped the teams from following vendettas (Wow! There’s a revealing statement of personality and character) by offering them a limited buffet of studies to pick from: only those published in the first issue of three major psychology journals in 2008. Finally, he says that most of the teams that failed to replicate their assigned studies were surprised—even disappointed. “Anecdotally, I observed that as they were assigned to a task, they got invested in their particular effect,” says Nosek. “They got excited. Most of them expected theirs to work out.” (Again – a social statement meant to support the results, but having nothing to do with the actual quality of work. What are these people, 5-year-olds?)

“Journals, funders, and scientists are paying a lot more attention to replication, to statistical power, to p-hacking, all of it.”

And yet, they largely didn’t. “This was surprising to most people,” says Nosek. “This doesn’t mean the originals are wrong or false positives. There may be other reasons why they didn’t replicate, but this does mean that we don’t understand those reasons as well as we think we do. We can’t ignore that. We have data that says: We can do better.” (Are you kidding? Denial, denial, denial.)

What does doing better look like? To Dorothy Bishop, a professor of developmental neuropsychology at the University of Oxford, it begins with public pre-registration of research plans. “Simply put, if you are required to specify in advance what your hypothesis is and how you plan to test it, then there is no wiggle room for cherry-picking the most eye-catching results after you have done the study,” she says. (And what if “cheaters” are caught? Do they get sent to time-out?) Psychologists should also make more efforts to run larger studies, which are less likely to throw up spurious results by chance. Geneticists, Bishop says, learned this lesson after many early genetic variants that were linked to human diseases and traits turned out to be phantoms; their solution was to join forces to do large collaborative studies, involving many institutes and huge numbers of volunteers. These steps would reduce the number of false positives that marble the literature.
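
Bishop’s point about study size is simple arithmetic. In the sketch below (mine; the sample sizes are arbitrary), experiments in which nothing whatsoever is going on still scatter their effect estimates – and the smaller the study, the bigger the spurious “effects” look:

```python
# A sketch of why bigger samples help (my illustration; sizes arbitrary):
# under the null, the spread of estimated effects shrinks as n grows.
import numpy as np

rng = np.random.default_rng(3)
for n in (10, 50, 500):
    effects = [rng.normal(0, 1, n).mean() - rng.normal(0, 1, n).mean()
               for _ in range(5_000)]            # 5,000 null experiments
    print(f"n = {n:>3}: spurious effects scatter within about +/-{np.std(effects):.2f}")
```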

To help detect the ones that slip through, researchers could describe their methods in more detail, and upload any materials or code to open databases, making it trivially easy for others to check their work. “We also need to be better at amassing the information we already have,” adds Bobbie Spellman from the University of Virginia. Scientists already check each other’s work as part of their daily practice, she says. But much of that effort is invisible to the wider world because journals have been loath to publish the results of replications. (You cuddle my data, I’ll cuddle yours.)

Change is already in the air. “Journals, funders, and scientists are paying a lot more attention to replication, to statistical power, to p-hacking, all of it,” says Srivastava. He notes that the studies that were targeted in the Reproducibility Project all come from a time before these changes. “Has psychology learned and gotten better?” he wonders.

One would hope so. After all, several journals have started to publish the results of pre-registered studies. In a few cases, scientists from many labs have worked together to jointly replicate controversial earlier studies. Meanwhile, Nosek’s own brainchild, the Center for Open Science established in 2013, has been busy developing standards for transparency and openness. It is also channelling $1 million of funding into a pre-registration challenge, where the first 1,000 teams who pre-register and publish their studies will receive $1,000 awards. “It’s to stimulate people to try pre-registration for the first time,” he says. (This is like High School for drop outs – get extra credit for behavior that you ought to have displayed from the start.)

The Center is also working with scientists from other fields, including ecology and computer science, to address their own concerns about reproducibility. Nosek’s colleague Tim Errington, for example, is leading an effort to replicate the results of 50 high-profile cancer biology studies. “I really hope that this isn’t a one-off but a maturing area of research in its own right,” Nosek says.

$$$$$$$$$$$$$$$$

That’s all in the future, though. For now? “I will be having a drink,” he says. (Status quo.)

 

Self Awareness / OMG What a Hornet’s Nest

What made me awaken this morning with the question of self awareness dancing in my head? It’s both a personal and social question and quest, and so almost impossible to think about objectively. And like so many “word concepts” there is no agreed-upon definition or meaning to actually talk about, unless it’s among religionists of certain beliefs, philosophical schools of knowledge, or neurologists hunched over their arrays of brain tissue, peering like haruspices over a pile of pink meat.

My own prejudices lean toward two basic underpinnings of self-awareness:

1. It is not a “thing” but an experience.

2. Self awareness (beyond Look! It’s me in the mirror…) is learned, earned, created, achieved.

From a previous post –

Co-consciousness, the product of language: “In Western cultures verbal language is inseparable from the process of creating a conscious human being.

A child is told who it is, where it belongs, and how to behave, day in and day out, from birth throughout childhood. In this way culturally-approved patterns of thought and behavior are implanted, organized and strengthened in the child’s brain. 

Social education means setting tasks that require following directions, and asking children to ‘correctly’ answer with words and behavior, to prove that co-consciousness is in place.

This is one of the great challenges of human development, and children who do not ‘pay attention’ to adult demands, however deftly sugar-coated, are rejected as defective, defiant, and diseased.

Punishment for having early self awareness may be physical or emotional brutality or abandonment and exile from the group.”

Who am I? is a question that most children ask sooner or later – prompted obviously by questions from adults (no child is born thinking about this) such as “What do you want to be when you grow up?” (Not: Who are you now?) The socially acceptable menu is small. For boys: “A famous sports star.” For girls: “A wonderful mom and career woman who looks 16 years old, forever.”

How boring and unrealistic. How life and joy killing. Adults mustn’t let children in on the truth, which is even worse. We know at this point that a child can look in a mirror and say, “That’s me! I hate my haircut,” but he or she is entirely unaware that someday firing rockets into mud brick houses, thereby blowing human bodies to smithereens, may be their passion. Or she may be a single mom with three kids, totally unprepared for an adequate job. Or perhaps he or she may end up addicted to pills and rage and stuffing paper bags with French fries eight hours a day.

If a child were to utter these reasonably probabilistic goals, he or she would be labeled as disturbed and possibly dangerous. And yet human children do grow up to be less than ideal, and many dreadful outcomes occur; these are the result of the individual colliding with societal fantasies and promises that were never likely outcomes at all.

The strangest part of this is that we talk about self awareness as a “thing” tucked into a hidden space, deep within us, but it isn’t. It is a running score on a test that starts the moment we are born: the test questions are life’s demands, both from the environment into which we are born, and the culture of family, school, work and citizenship. The tragedy is that few caregivers bother to find out enough about a child to guide them toward a healthy and happy self-awareness. This requires observing and accepting the child’s native gifts and personality, AND helping them to manage their difficulties. This is not the same as curing them of being different, or inflicting lifelong scars by abandoning them, or training them so diligently that, like parrots, they can mimic conformist behavior and speech.

Self awareness comes as we live our lives: self-esteem is connected to that process, not as a “before” thing, but an “after” thing: a result of meeting life as it really is, not as a social fantasy. Self awareness is built from the talents and strengths that we didn’t know we possessed. It also arises as we watch the “world’s” pretensions crumble before us. Being able to see one’s existence cast against the immensity of reality, and yet to feel secure, is the measure of finally giving birth to a “self”.

I’m satisfied that loving the land is my talent and that this is not a small thing, when there are so many human beings who don’t.

The Debate Over Sensory Processing Disorder vs. Autism / Aye, yai, yai!

This debate is just one more “Catholics vs. Protestants” type religious war over “who owns the hearts, minds and fates of children” – and their $$ insurance coverage. I wish for once that genuine scientific thinking – and compassion – had some influence on reproduction and the health of fetuses, infants, children, young adults and their families. (We adults are on our own in this NT-produced nightmare of irrational-supernatural thinking) LOL

Sensory processing disorder is a condition in which the brain has trouble receiving and responding appropriately to information that comes in through the senses.

Well! That’s certainly a well-defined “thingy.”

Here is a list of links:

https://childmind.org/article/the-debate-over-sensory-processing/

http://chan.usc.edu/academics/sensory-integration/history-and-theory

https://www.spdstar.org/basic/symptoms-checklist

https://www.frontiersin.org/articles/10.3389/neuro.07.022.2009/full

https://autismawarenesscentre.com/the-dsm-v-and-sensory-processing-disorder/

http://blogs.discovermagazine.com/crux/2014/04/04/floating-away-the-science-of-sensory-deprivation-therapy/#.WvyIl0xFyUk

https://www.spectrumnews.org/features/talking-sense-what-sensory-processing-disorder-says-about-autism/

https://www.psychologytoday.com/us/blog/creative-development/201107/sensory-processing-disorder

http://www.ascentchs.com/developmental/sensory-processing/symptoms-signs-effects/

and many, many more….

HAVE FUN!

Note the “similarities” between SPD and ASD – and the “socio-religious message” that any child who falls to either side of socially conformist behavior on a Bell curve is “defective.”

This extensive chart sums it up well: AMERICANS HATE CHILDREN and other living things. LIFE is a sin.

Why does GOD let people starve to death? / Insane Neurotypical Christian Response

FROM “Not Ashamed of the Gospel” website. (You ought to be ashamed…)  https://notashamedofthegospel.com/apologetics/why-god-doesnt-feed-all-starving-children/

3 Strange But True Reasons Why God Doesn’t Feed All the Starving Children in The World

Peter Guirguis / Apologetics 240 Comments

OMG! I will never apologize for being Asperger or Atheist. This is how “normal neurotypicals” see the world; the universe is a supernatural monstrosity.

Evil exists, but not in Nature; it is the consequence of the beliefs and behavior of Modern Social Homo sapiens. Why isn’t this dangerous “mental derangement” featured in the DSM, when Autism is?

God, Can You Please Make it Rain Turkey and Gravy?

If God is all-powerful, then can’t He make it rain turkey and gravy from heaven to feed all the starving kids in the world? The answer is that of course God can do that if that’s what He wanted to do. But since God doesn’t make it rain turkey and gravy upon the starving kids around the world, then we have to ask, “Why doesn’t He?”

If you’re not able to answer this question, then one of two things is going to happen to you. You’re going to struggle with your faith because you’re going to have doubts that God is a good God. Or you’re never going to find out the truth about God, and you’ll make the mistake of thinking that God doesn’t exist.

This article is for you if:

1. You’ve ever wondered why God doesn’t feed starving kids around the world, and you struggle with the answer.
2. You’re skeptical of the Christian God or other gods.
3. You want to be able to answer this question when it’s asked of you in an accurate and positive way.

Why The “Strange But True” Title? The reason I call these reasons that I’m about to share with you “strange” is because if I were God, I would do things differently. But thank goodness, I’m not God. (OMG!)

What may be strange to one person may not be considered strange to another. So depending on how familiar you are with this subject, (NT insanity?) you may agree with me that these reasons are “strange but true”, or you may not. Either way, I hope this will spark a good dialog about this topic. (Totalitarian demand for obedience to supernatural hallucinations is a really good jumping off point for “good dialog”!)

I’ve thought of three different reasons why God doesn’t feed the starving children of the world.

Reason #1 – It Isn’t God’s Responsibility to Feed the Starving Children of the World

Every year, I have the privilege of going through the one-year Bible plan. That means that I will read the entire Bible in one year. I don’t share this to impress you. But I do share it to establish that I’m quite familiar with the Bible. Of all the times that I have read the Bible from cover to cover, I can’t think of a single Bible verse in which God makes a promise to feed all the starving children in the world. (But there are threats that “God” will make people eat their own children!) So when somebody accuses God of being unjust because He has the capability to feed starving children, and He doesn’t, then it’s that person that has a misunderstanding of God. (No misunderstanding here: your imaginary master is a true psycho-sociopath)

GOD: “Hey, it’s not MY JOB to control the vicious uncaring assholes I made in my image. LOL!” 

If God Isn’t Responsible For Feeding Starving Children, Then Who Is?

The answer is you and me. I can think of numerous Bible verses in which God instructs His children to feed the poor people of the world.

And Christians are doing such a great job of it! Bomb entire nations into a state that can only be called “Hell on Earth”, and then send “missionaries of democracy” with bags of leftover “dog food”. Take photos: lie, brag about how “empathetic” and compassionate you and your “god” are. And of course, “profit” from the crimes. 

Proverbs 28:27 says, “He who gives to the poor will not lack, But he who hides his eyes will have many curses.” James 2:15-16 says, “If a brother or sister is naked and destitute of daily food, and one of you says to them, ‘Depart in peace, be warmed and filled,’ but you do not give them the things which are needed for the body, what does it profit?” So if you’re one of those people that thinks God should feed the starving kids around the world, then you are shifting the responsibility.

God isn’t responsible for feeding starving children, you and I are. Then why not demonstrate ethical behavior by refraining from creating mass suffering: committing predatory wars, practicing profitable poverty as “economics”, and enforcing starvation?

Reason #2 – God Isn’t Like Humans

Atheists make a mistake when they say things like, “If I saw a starving child and had the power to feed him and I don’t, then I am evil. (Uh-yeah! That logically is cruel uncaring behavior) That’s the same thing with God, He is evil because He has the power to feed starving children and He doesn’t.” (You said it! Why not believe your own “instincts” about all this Christian “we’re the good guys” social evil?)

The mistake that atheists make here is that they compare themselves to God, or they compare God to themselves. They put themselves in God’s shoes. (This is utterly BONKERS. God does not exist, and he certainly wouldn’t wear shoes if he did)

God’s goals are different than our goals. His purposes are different than our purposes. His way of justice is different than the human way of justice. But here’s the lesson that’s to be learned: any time you blame God for not doing something that you would do, you’re making an idol in your own image. (Christianity IS a religion of “idols”)

What does that mean? It means that you’re making up your own concept of how God is supposed to act, which is something the Bible warns us about. (My, my – mustn’t use what little intelligence humans have to realize that religion is a con game)

Reason #3 – God’s Justice is Coming Soon For All

You and I want to see justice have its way immediately. Think about all the hate crimes in the world, the rapes, and the murders. You and I want to see those people (Christians commit hate crimes, rape, murder and a long list of heinous behaviors, as a matter of religious and political policy) get what they deserve.

But while we judge others for their heinous crimes, we overlook the sins that we commit in God’s eyes. While God does see hate crimes, rapes, and murders as sins, He also sees lying, cheating, and hating people as sins too. (Your god hates human beings and other living things)

So since God is a just God, then He’s going to have to give justice to all if He were to judge the world today. That means that there would be a lot of people who would receive punishment for eternity for breaking God’s standards. (And how LOW these are!) So instead, God is saving His judgment for Judgment Day. That’s when everyone is going to get judged for what they did on earth.

Those who broke God’s standards and did not receive His son Jesus for salvation will end up going to hell.

This is deranged thinking by any standard; it expresses rage and hatred for all human beings; it’s sick, sadistic and “loves” torture. Why is “religious psychopathy” not in the DSM? 

 

But those who do put their faith and trust in Christ will end up going to heaven. So when you don’t see justice taking place immediately, it’s because God is giving everyone a chance to repent, and put their faith in Jesus Christ as Lord and Savior.

How About Other Reasons?

I have to admit, I’m not a know-it-all. That’s where you come in. Can you think of any other reasons why God doesn’t feed the starving kids around the world? (“He” is a hallucination: “He” doesn’t exist. Thank God!)

Share them in the comments below.

I leave you to read the comments: I need to spend some time in Nature, where evil does not exist…

But millions of Americans believe it’s true…

 

“Wired Brains” / STOOPID Neurotypical Headlines

Intelligent people’s brains are wired differently: Researchers say ‘smart minds’ are more likely to be happy, well educated and earn more (EXCEPT IF YOU’RE ASPERGER, then you’re doomed) 

  • Scientists who analysed brain scan data on 461 volunteers 
  • Found patterns linked positive aspects of life, such as having a good memory and vocabulary, feeling satisfied, and being well educated
  • People at the other end of the scale were more likely to display negative traits including anger, rule-breaking, substance use and poor sleep quality (Dumb people are criminals)

___________

CAPTION: Male, Female Brains WIRED Differently

The problem with atheists: Not wired for direct connection to “God”

Bring your kids into AUTO CENTER to have their electrical system checked. FREE tire rotation with tune up. 

CAPTION: How millennials are WIRED. Note the little bridges connecting hemispheres of the brain. How cute!

Well, no wonder men and women can’t connect. “Red” electricity and “White” electricity originate in different universes.

Neurotypical wiring: DANGER

What is the Asperger “Blank Stare” all about? / Re-Post

What is the Aspie blank stare and why is it a disturbing facet of Aspie behavior?

Complaint from an Aspie ‘Mum’ about her son, decoded:

MUM: In my experience, I would get a blank stare when I asked (my Asperger son) a question.  It could be, for example, what he would like for dinner? What happened at school? You know – normal sorts of ‘Mum’ questions!

Answer: Social typical questions tend to be vague and non-specific. A specific question would be: “Would you like pizza or hot dogs for dinner?” Or try, “We’re having hamburgers for dinner. I bought the kind of buns you like and you can add tomatoes or pickles or cheese, or whatever else you like.”  “What stories did you read in reading class today?”

MUM: How did I interpret the blank stare that I got?

At the time, I believed that ‘the blank stare’ was used by (SON) to avoid answering the questions I asked – questions I thought were easy to answer! I realize now, that in my frustration over not getting an answer, I would pile on the questions one after another, and (SON) didn’t have time to process even the first one! I would get cross with him, frustrated that he seemed to refuse to respond to my requests for information, and I would give up.

Answer: One of the big mistakes that social typicals make is to attribute INTENT to Asperger behavior. This is because social typicals are “self-oriented” – everything is about THEM; any behavior on the part of a human, dog, cat, plant or lifeform in a distant galaxy, must be directed at THEM. Example: God, or Jesus, or whomever, is paying attention 24/7 to the most excruciatingly trivial moments in the lives of social typicals. We’re not as patient as God or Jesus.

The Asperger default mental state is a type of reverie, day-dreaming, trance or other “reflective” brain process; that is, we do “intuitive” thinking. The “blank face” is because we do not use our faces to do this type of thinking. 

Sorry – we’re just busy elsewhere! When you ask a question, it can take a few moments to “come out of” our “reverie” and reorient our attention. If you are asking a “general question” that is meant to elicit a “feeling” (social) response, it will land like a dead fish in front of us. Hence the continued “blankness”.  

MUM: What is the real cause of the blank stare?

I believe that SON uses the blank stare while he is processing a question. If I give him enough time, he will think deeply and consider his response, which is often unexpected.

Answer: The “blank stare” is due to our type of brain activity. We process questions; processing questions adds to response time. Some questions are so vague that we simply cannot answer them. Some questions aren’t questions at all, but are an attempt to get our attention and to get a “social” something from us. This is truly confusing. 

MUM: (I’m told that) at any given moment an Aspie is taking in lots of information from the world around them. They notice details that normal people ignore. These details can easily result in sensory overload. The blank stare is used by Aspies as a way to ‘zone out’, or ‘go into themselves’ as a coping mechanism for when their senses are overloaded.

Answer: Not correct (in my experience). Sensory overload is another matter entirely; sensory overload results in the desire to flee, and if we can’t “get away” we experience meltdown. Other Aspies may have a different take on this.

Aspie chat concerning “The Stare”

“I watched “Rain Man” again recently. There was a scene where Dusty was sitting on a park bench and just looking at the ground, and Tom Cruise started YELLING at him. I felt like, “Hey ! sometimes I just sit and think about things, and maybe I’m staring at the ground, so cool it Tom.” We tend to look off into the horizon while we’re talking, and really, it’s not a big deal …”

“At work I’ll be at my desk just working away and people will tell me to cheer up when I don’t feel at all down. Also, if I’m standing around somewhere, and not focusing on anything in particular – and feeling fine, someone will ask me if I’m OK or if I’m pissed off about something. Something about my neutral (not happy or sad, just contented) expression makes people think I’m depressed or angry.”

“People are always doing one of the following: Ask me if I’m okay because I’m staring off into the distance; look behind their back to see what I’m staring at; or tell me to “SMILE!” because I don’t have any facial expression.”

Yes, social typicals are self-centered and demanding. They don’t want to “put up with” a blank face; it damages their perfect narcissistic universe, in which it is everyone’s job to make them feel important.

And then, there is the other “eye” problem:

“I don’t get it… my teacher tells me to look at her when she talks, and when I look at other people they tell me to stop staring at them. What the…?”

“Apparently staring and looking are two different things, not that I know how to tell the difference.”

The teacher demands eye-contact because it indicates OBEDIENCE – SUBMISSION. Authoritarian adults demand instant obedience from children. But if you stare at a “regular” person, that causes another problem. You are claiming higher status; predators stare down prey; you, dear Aspie, are unwittingly behaving like a predator.

“I stare because I get easily distracted by details and I want to see more; it’s just attention to detail. I’m doing better at straight eye contact, but open my eyes too wide because I’m trying hard to focus and pay attention.”

“If I am interested in what a person is saying – it’s new to me or important information, I will stare like a laser. Also if I am trying to recognize someone that looks vaguely familiar, or there is something interesting about how they look and I want to examine it. If I’m not interested, I won’t look at them. However, that does not mean I am not listening just because I am not looking at them.”

It seems to me that Aspies use our senses as nature intended: we use our eyes to see and we use our ears to listen.