What Mormons Believe About Jesus Christ / By The Mormons

 

The “thing” about the Mormons is that they can SOUND RATIONAL about the most IRRATIONAL “things” !!!

Add this post to: Why Asperger’s say that neurotypicals are stupid…

from: http://mormonnewsroom.org

Check out: http://templestudy.com/tag/holyofholies

The following excerpts are taken from an address to the Harvard Divinity School (Puritans) in March 2001 by Robert L. Millet, former dean of religious education at Brigham Young University. It is offered on Newsroom as a resource.

What Do We Believe About Jesus Christ?

Latter-day Saints are Christians on the basis of our doctrine, our defined relationship to Christ, our patterns of worship and our way of life.

What Do We Believe About Christ?

  • We believe Jesus is the Son of God, the Only Begotten Son in the flesh (John 3:16). We accept the prophetic declarations in the Old Testament that refer directly and powerfully to the coming of the Messiah, the Savior of all humankind. We believe that Jesus of Nazareth was and is the fulfillment of those prophecies.
  • We believe the accounts of Jesus’ life and ministry recorded in Matthew, Mark, Luke and John in the New Testament to be historical and truthful. For us the Jesus of history is indeed the Christ of faith. While we do not believe the Bible to be inerrant, complete or the final word of God, we accept the essential details of the Gospels and more particularly the divine witness of those men who walked and talked with Him or were mentored by His chosen apostles.
  • We believe that He was born of a virgin, Mary, in Bethlehem of Judea in what has come to be known as the meridian of time, the central point in salvation history. From His mother, Mary, Jesus inherited mortality, the capacity to feel the frustrations and ills of this world, including the capacity to die. We believe that Jesus was fully human in that He was subject to sickness, to pain and to temptation.
  • We believe Jesus is the Son of God the Father and as such inherited powers of godhood and divinity from His Father, including immortality, the capacity to live forever. While He walked the dusty road of Palestine as a man, He possessed the powers of a God and ministered as one having authority, including power over the elements and even power over life and death.
  • We believe Jesus performed miracles, including granting sight to the blind, hearing to the deaf, life to some who had died and forgiveness to those steeped in sin. We believe the New Testament accounts of healings and nature miracles and the cleansing of human souls to be authentic and real.
  • We believe Jesus taught His gospel — the glad tidings or good news that salvation had come to earth through Him — in order that people might more clearly understand both their relationship to God the Father and their responsibility to each other.
  • We believe Jesus selected leaders, invested them with authority and organized a church. We maintain that the Church of Jesus Christ was established, as the Apostle Paul later wrote, for the perfection and unity of the saints (Ephesians 4:11–14).
  • We believe that Jesus’ teachings and His own matchless and perfect life provide a pattern for men and women to live by and that we must emulate that pattern as best we can to find true happiness and fulfillment in this life.
  • We believe Jesus suffered in the Garden of Gethsemane and that He submitted to a cruel death on the cross of Calvary, all as a willing sacrifice, a substitutionary atonement for our sins. That offering is made efficacious as we exercise faith and trust in Him; repent of our sins; are baptized by immersion as a symbol of our acceptance of His death, burial and rise to newness of life; and receive the gift of the Holy Ghost (Acts 2:37–38; 3 Nephi 27:19–20). While no one of us can comprehend how and in what manner one person can take upon himself the effects of the sins of another or, even more mysteriously, the sins of all men and women — we accept and glory in the transcendent reality that Christ remits our sins through His suffering. We know it is true because we have experienced it personally. Further, we believe that He died, was buried and rose from the dead and that His resurrection was a physical reality. We believe that the effects of His rise from the tomb pass upon all men and women. “As in Adam all die, even so in Christ shall all be made alive” (1 Corinthians 15:22).
  • We do not believe that we can either overcome the flesh or gain eternal reward through our own unaided efforts. We must work to our limit and then rely upon the merits, mercy and grace of the Holy One of Israel to see us through the struggles of life and into life eternal (2 Nephi 31:19; Moroni 6:4). We believe that while human works are necessary — including exercising faith in Christ, repenting of our sins, receiving the sacraments or ordinances of salvation and rendering Christian service to our neighbors — they are not sufficient for salvation (2 Nephi 25:23; Moroni 10:32). We believe that our discipleship ought to be evident in the way we live our lives.

In essence, we declare that Jesus Christ is the head of the Church and the central figure in our theology.

How Are We Different?

Latter-day Saints do not accept the Christ that emerges from centuries of debates and councils and creeds. Over the years that followed the death and resurrection of the Lord, Christians sought to “earnestly contend for the faith which was once delivered unto the saints” (Jude 1:3). We believe that the epistles of Paul, Peter, Jude and John suggest that the apostasy or falling away of the first-century Christian church was well underway by the close of the first century. With the deaths of the apostles and the loss of the priesthood, the institutional power to perform and oversee saving sacraments or ordinances, learn the mind of God and interpret scripture was no longer on earth. To be sure, there were noble men and women throughout the earth during the centuries that followed, religious persons of good will, learned men who sought to hold the church together and to preserve holy writ. But we believe that these acted without prophetic authority. 

In an effort to satisfy the accusations of Jews who denounced the notion of three Gods (Father, Son and Holy Ghost) as polytheistic, and at the same time incorporate ancient but appealing Greek philosophical concepts of an all-powerful moving force in the universe, the Christian church began to redefine the Father, Son and Holy Spirit. One classic work describes the intersection of Christian theology and Greek philosophy: “It is impossible for any one, whether he be a student of history or no, to fail to notice a difference of both form and content between the Sermon on the Mount and the Nicene Creed. … The one belongs to a world of Syrian peasants, the other to a world of Greek philosophers. … The religion which our Lord preached … took the Jewish conception of a Father in heaven, and gave it a new meaning.” In short, “Greek Christianity of the fourth century was rooted in Hellenism. The Greek minds which had been ripening for Christianity had absorbed new ideas and new motives.”[i]

What is the result? Such Platonic concepts as the immutability, impassibility and timelessness of God made their way into Christian theology. (Yes, this is all true, but it’s ALL neurotypical madness, so what’s the point?) As one group of Evangelical scholars has stated: “Many Christians experience an inconsistency between their beliefs about the nature of God and their religious practice. For example, people who believe that God cannot change his mind sometimes pray in ways that would require God to do exactly that. And Christians who make use of the free will defense for the problem of evil sometimes ask God to get them a job or a spouse, or keep them from being harmed, implying that God should override the free will of others in order to achieve these ends. …

“These inharmonious elements are the result of the coupling of biblical ideas about God with notions of the divine nature drawn from Greek thought. The inevitable encounter between biblical and classical thought in the early church generated many significant insights and helped Christianity evangelize pagan thought and culture. Along with the good, however, came a certain theological virus that infected the Christian doctrine of God, making it ill and creating the sorts of problems mentioned above. The virus so permeates Christian theology that some have come to take the illness for granted, attributing it to divine mystery, while others remain unaware of the infection altogether.”[ii]

Latter-day Saints believe that the simplest reading of the New Testament text produces the simplest conclusion — that the Father, the Son and the Holy Ghost are separate and distinct personages, and that They are one in purpose. We feel that the sheer preponderance of references in the Bible would lead an uninformed reader to the understanding that God the Father, Jesus Christ and the Holy Ghost are separate beings. That is, one must look to the third- and fourth-century Christian church, not to the New Testament itself, to make a strong case for the Trinity. Sounds kind of sane (for neurotypicals), n’est-ce pas?

Some Distinctive Contributions

What, then, can the Latter-day Saints contribute to the world’s understanding of Jesus Christ? What can we say that will make a difference in how men and women view and relate to the Savior?

Now for the bat crap crazy stuff:

The First Vision

Joseph Smith’s First Vision represents the beginning of the revelation of God in our day. President Gordon B. Hinckley has observed: “To me it is a significant and marvelous thing that in establishing and opening this dispensation our Father did so with a revelation of himself and of his Son Jesus Christ, as if to say to all the world that he was weary of the attempts of men, earnest though these attempts might have been, to define and describe him. … The experience of Joseph Smith in a few moments in the grove on a spring day in 1820, brought more light and knowledge and understanding of the personality and reality and substance of God and his Beloved Son than men had arrived at during centuries of speculation.”[iii] By revelation Joseph Smith came to know that the Father, Son and Holy Ghost constitute the Godhead. From the beginning Joseph Smith taught that the members of the Godhead are one in purpose, one in mind, one in glory, one in attributes and powers, but separate persons.[iv]

There was reaffirmed in the First Vision the fundamental Christian teaching — that Jesus of Nazareth lived, died, was buried and rose from the tomb in glorious immortality. In the midst of that light that shone above the brightness of the sun stood the resurrected Lord Jesus in company with His Father. Joseph Smith knew from the time of the First Vision that death was not the end, that life continues after one’s physical demise, that another realm of existence — a postmortal sphere — does in fact exist.

The Book of Mormon

Through the Book of Mormon, translated by Joseph Smith, came additional insights concerning the person and powers of Jesus the Christ. We learn that He is the Holy One of Israel, the God of Abraham, Isaac and Jacob (1 Nephi 19:10) and that through an act of infinite condescension He left His throne divine and took a mortal body (1 Nephi 11; Mosiah 3:5). We learn from the teachings of the Book of Mormon prophets that He was a man but much more than man (Mosiah 3:7–9; Alma 34:11), that He had within Him the powers of the Father, the powers of the Spirit (2 Nephi 2:8; Helaman 5:11), the power to lay down His life and the power to take it back up again.

Another prophet, Alma, contributed the unfathomable doctrine that the Redeemer would not only suffer for our sins, but that His descent below all things would include His suffering for our pains, our sicknesses and our infirmities, thus allowing Him perfect empathy — “that his bowels may be filled with mercy, according to the flesh, that he may know according to the flesh how to succor his people according to their infirmities” (Alma 7:11–12). Truly, the Book of Mormon prophets bear repeated witness that the atonement of Christ is infinite and eternal in scope (2 Nephi 9:7; 25:16; Alma 34:11–12).

One could come away from a careful reading of the second half of the New Testament somewhat confused on the matter of grace and works, finding those places where Paul seems almost to defy any notion of works as a means of salvation (Romans 4:1–5; 10:1–4; Ephesians 2:8–10) but also those places where good works are clearly mentioned as imperative (Romans 2:6; James 2:14–20; Revelation 20:12–13). It is to the Book of Mormon that we turn to receive the balanced perspective on the mercy and grace of an infinite Savior on the one hand, and the labors and works of finite man on the other.

In the Book of Mormon, the sobering realization that no one of us can make it alone is balanced by a consistent statement that the works of men and women, including the receipt of the ordinances of salvation, the performance of duty and Christian acts of service — in short, being true to our part of the gospel covenant — though insufficient for salvation, are necessary. The prophets declared over and over that the day would come when people would be judged of their works, the works done “in their days of probation” (1 Nephi 15:32; 2 Nephi 9:44). That is, “all men shall reap a reward of their works, according to that which they have been — if they have been righteous they shall reap the salvation of their souls, according to the power and deliverance of Jesus Christ; and if they have been evil they shall reap the damnation of their souls, according to the power and captivation of the devil” (Alma 9:28). In summary, the undergirding doctrine of the Book of Mormon is that we are saved by the grace of Christ “after all we can do” (2 Nephi 25:23), meaning above and beyond all we can do. As we come unto Christ by covenant, deny ourselves of ungodliness and love God with all our souls, His grace — His divine enabling power, not only to be saved in the ultimate sense but also to face the challenges of each day — is sufficient for us (Moroni 10:32).

The Book of Mormon has a high Christology; that is, the doctrine of Christ is thick and heavy on the pages of this scriptural record, and the testimony of the divinity of the Lord and Savior is powerful and direct. One cannot read the Book of Mormon and honestly come away wondering what the Latter-day Saints believe about the Divine Sonship. The Book of Mormon establishes clearly that “Jesus is the Christ, the Eternal God, manifesting himself to all nations” (Book of Mormon title page; 2 Nephi 26:12).

At the heart of the doctrine restored through Joseph Smith is the doctrine of the Christ. “The fundamental principles of our religion,” he observed, “are the testimony of the Apostles and Prophets, concerning Jesus Christ, that he died, was buried, and rose again the third day, and ascended into heaven; and all other things which pertain to our religion are only appendages to it.”[v] The glorious news, the glad tidings is that Christ our Lord has come to earth, offered Himself as a ransom from sin and made available deliverance from death and hell. We rejoice in the message of redemption that fell from the lips of Old and New Testament prophets. More especially we exult in the realization that knowledge and truth and light and understanding concerning Jesus Christ — who He was, who He is and what marvels have come to pass through Him — have been delivered through additional scriptural records and modern prophetic utterances.

“Him Declare I Unto You”

One of the main reasons Latter-day Saints are often relegated to the category of cult or non-Christian is because we believe in scripture beyond the Bible. To be sure, we love the Bible. We cherish its sacred teachings and delight in reading and teaching it. We seek to conform our lives to its marvelous precepts. But we do not believe that the Bible contains all that God has spoken or will yet speak in the future.

Occasionally we hear certain Latter-day Saint teachings — like some of those concerning the Savior that I have detailed earlier — described as “unbiblical” or of a particular doctrine being “contradictory” to the Bible. Let’s be clear on this matter. The Bible is one of the books within our standard works, our scriptural canon, and thus our doctrines and practices are in harmony with the Bible. There are times, of course, when latter-day revelation provides clarification or additional information to the Bible. But addition to the canon is hardly the same as rejection of the canon. Supplementation is not the same as contradiction. All of the prophets, including the Savior Himself, brought new light and knowledge to the world; in many cases, new scripture came as a result of their ministry. That new scripture did not invalidate what went before nor did it close the door on subsequent revelation.

Most New Testament scholars believe that Mark was the first Gospel written and that Matthew and Luke drew upon Mark in the preparation of their Gospels. One tradition is that John the Beloved, aware of the teaching of the synoptics, prepared his Gospel in an effort to “fill in the gaps” and thus deal more with the great spiritual verities that his evangelistic colleagues chose not to include. How many people in the Christian tradition today would suggest that what Matthew or Luke did in adding to what Mark had written was illegal or inappropriate or irreverent? Do we suppose that anyone in the first century would have so felt?

Would anyone accuse Matthew or Luke or John of writing about or even worshipping a “different Jesus” because they were bold enough to add to what had been recorded already? Surely not. Why? Because Matthew and Luke and John were inspired of God, perhaps even divinely commissioned by the church to pen their testimonies.

If Luke (in the Gospel, as well as in Acts) or John chose to write of subsequent appearances of the Lord Jesus after His ascension into heaven, appearances not found in Mark or Matthew, are we prone to criticize, to cry foul? No, because these accounts are contained in the Christian canon, that collection of books that serves as the rule of faith and practice in the Christian world.

The authority of scripture is tied to its source. From our perspective, the living, breathing, ever-relevant nature of the word of God is linked not to written words, not even to the writing of Moses or Isaiah or Malachi, not to the four Gospels or the epistles of Paul, but rather to the spirit of prophecy and revelation that illuminated and empowered those who recorded them in the first place. The Bible does in fact contain much that can and should guide our walk and talk; it contains the word and will of the Lord to men and women in earlier ages, and its timeless truths have tremendous normative value for our day. But we do not derive authority to speak or act in the name of Deity on the basis of what God gave to His people in an earlier day.

Just how bold is the Latter-day Saint claim? In a letter to his uncle Silas, Joseph Smith wrote the following:

Why should it be thought a thing incredible that the Lord should be pleased to speak again in these last days for their salvation? Perhaps you may be surprised at this assertion that I should say ‘for the salvation of his creatures in these last days’ since we have already in our possession a vast volume of his word [the Bible] which he has previously given. But you will admit that the word spoken to Noah was not sufficient for Abraham. … Isaac, the promised seed, was not required to rest his hope upon the promises made to his father Abraham, but was privileged with the assurance of [God’s] approbation in the sight of heaven by the direct voice of the Lord to him. … I have no doubt but that the holy prophets and apostles and saints in the ancient days were saved in the kingdom of God. … I may believe that Enoch walked with God. I may believe that Abraham communed with God and conversed with angels. … And have I not an equal privilege with the ancient saints? And will not the Lord hear my prayers, and listen to my cries as soon [as] he ever did to theirs, if I come to him in the manner they did? Or is he a respecter of persons?[vi]

Latter-day Saints feel a deep allegiance to the Bible. It seems odd to us, however, to be accused of being irreverent or disloyal to the Bible when we suggest to the religious world that the God of heaven has chosen to speak again. Our challenge is hauntingly reminiscent of that faced by Peter, James, John or Paul when they declared to the religious establishment of their day that God had sent new truths and new revelations into the world, truths that supplemented and even clarified the Hebrew scripture. And what was the response of the Jews of the day? “Who do you think you are?” they essentially asked. “We have the Law and the Prophets. They are sufficient.” Any effort to add to or to take away from that collection of sacred writings was suspect and subject to scorn and ridicule. And so it is today.

A Willingness to Listen and Learn

A number of years ago a colleague and I traveled with two Evangelical Christian friends to another part of the country to meet with a well-known theologian, author and pastor/teacher in that area. We had read several of his books and had enjoyed his preaching over the years. As a part of an outreach effort to better understand those of other faiths (and to assist them to understand us a little better), we have visited such institutions as Notre Dame, Catholic University, Baylor, Wheaton College and various religious colleges and seminaries. We met this particular pastor and then attended his church services on both Sunday morning and Sunday evening and in both meetings were impressed with the depth and inspiration of his preaching.

The next day we met for lunch and had a wonderful two-hour doctrinal discussion. I explained that we had no set agenda, except that we had admired his writings and wanted to meet him. We added that we had several questions we wanted to pose in order to better understand Evangelical theology. I mentioned that as the dean of religious education (at that time), I oversaw the teaching of religion to some 30,000 young people at Brigham Young University and that I felt it would be wise for me to be able to articulate properly the beliefs of our brothers and sisters of other faiths. I hoped, as well, that they might make the effort to understand our beliefs so as to represent accurately what we teach.

Early in our conversation the minister said something like: “Look, anyone knows there are big differences between us. But I don’t want to focus on those differences. Let’s talk about Christ.” We then discussed the person of Jesus, justification by faith, baptism, sanctification, salvation, heaven, hell, agency and predestination, premortal existence and a number of other fascinating topics. We compared and contrasted, we asked questions and we answered questions. In thinking back on what proved to be one of the most stimulating and worthwhile learning experiences of our lives, the one thing that characterized our discussion, and the one thing that made the biggest difference, was the mood that existed there — a mood of openness, candor and a general lack of defensiveness. We knew what we believed, and we were all committed to our own religious tradition. But we were eager to learn where the other person was coming from. (Blah, blah, blah)

This experience says something to me about what can happen when men and women of good will come together in an attitude of openness and in a sincere effort to better understand and be understood. Given the challenges we face in our society — fatherless homes, child and spouse abuse, divorce, poverty, spreading crime and delinquency — it seems so foolish for men and women who believe in God, whose hearts and lives have been surrendered to that God, to allow doctrinal differences to prevent them from working together. Okay, you believe in a triune God, that the Almighty is a spirit and that He created all things ex nihilo. I believe that God is an exalted man, that He is a separate and distinct personage from the Son and the Holy Ghost. He believes in heaven, while she believes in nirvana. She believes that the Sabbath should be observed on Saturday, while her neighbor feels that the day of corporate worship should be on Friday. This one speaks in tongues, that one spends much of his time leading marches against social injustice, while a third believes that little children should be baptized. One good Baptist is a strict Calvinist, while another tends to take freedom of the will quite seriously. And so on, and so on.

Latter-day Saints do not believe that the answer to the world’s problems is ultimately to be found in more extravagant social programs or stronger legislation. Most or all of these ills have moral or spiritual roots. In the spirit of the brotherhood and sisterhood of humankind, is it not possible to lay aside theological differences long enough to address the staggering social issues in our troubled world? My recent interactions with men and women of various faiths have had a profound impact on me; they have broadened my horizons dramatically and reminded me — a sobering reminder we all need once in a while — that we are all sons and daughters of the same Eternal Father. We may never resolve our differences on the Godhead or the Trinity, on the spiritual or corporeal nature of Deity or on the sufficiency or inerrancy of the Bible, but we can agree that there is a God; that the ultimate transformation of society will come only through the application of moral and religious solutions to pressing issues; and that the regeneration of individual hearts and souls is foundational to the restoration of virtue in our communities and nations. One need not surrender cherished religious values or doctrines in order to be a better neighbor, a more caring citizen, a more involved municipal. (So rational! So Puritan!)

In addition, we can have lively and provocative discussion on our differences, and such interactions need not be threatening, offensive or damaging to our relationships. What we cannot afford to do, if we are to communicate and cooperate, is to misrepresent one another or ascribe ulterior motives. Such measures are divisive and do not partake of that Spirit that strengthens, binds and reinforces. President Gordon B. Hinckley said of the Latter-day Saints:

We want to be good neighbors; we want to be good friends. We feel we can differ theologically with people without being disagreeable in any sense. We hope they feel the same way toward us. We have many friends and many associations with people who are not of our faith, with whom we deal constantly, and we have a wonderful relationship. It disturbs me when I hear about any antagonisms. … I don’t think they are necessary. I hope that we can overcome them.[vii]

There is, to be sure, a risk associated with learning something new about someone else. New insights always affect old perspectives, and thus some rethinking, rearranging and restructuring of our worldview are inevitable. When we look beyond a man or a woman’s color or ethnic group or social circle or church or synagogue or mosque or creed or statement of belief, when we try our best to see them for who and what they are, children of the same God, something good and worthwhile happens to us, and we are thereby drawn into a closer union with the God of us all. (Okay, okay! Just stop!)

Conclusion

Jesus Christ is the central figure in the doctrine and practice of The Church of Jesus Christ of Latter-day Saints. He is the Redeemer.[viii] He is the prototype of all saved beings, the standard of salvation.[ix] Jesus explained that “no man cometh unto the Father, but by me” (John 14:6). We acknowledge Jesus Christ as the source of truth and redemption, as the light and life of the world, as the way to the Father (John 14:6; 2 Nephi 25:29; 3 Nephi 11:11). We worship Him in that we look to Him for deliverance and redemption and seek to emulate His matchless life (D&C 93:12–20). Truly, as one Book of Mormon prophet proclaimed, “We talk of Christ, we rejoice in Christ, we preach of Christ, … that our children may know to what source they may look for a remission of their sins” (2 Nephi 25:26).

As to whether we worship a “different Jesus,” we say again: We accept and endorse the testimony of the New Testament writers. Jesus is the promised Messiah, the resurrection and the life (John 11:25), literally the light of the world (John 8:12). Everything that testifies of His divine birth, His goodness, His transforming power and His godhood, we embrace enthusiastically. But we also rejoice in the additional knowledge latter-day prophets have provided about our Lord and Savior. President Brigham Young thus declared that

we, the Latter-day Saints, take the liberty of believing more than our Christian brethren: we not only believe … the Bible, but … the whole of the plan of salvation that Jesus has given to us. Do we differ from others who believe in the Lord Jesus Christ? No, only in believing more.[x]

It is the “more” that makes many in the Christian world very nervous and usually suspicious of us. But it is the “more” that allows us to make a significant contribution in the religious world. Elder Boyd K. Packer observed: “We do not claim that others have no truth. … Converts to the Church may bring with them all the truth they possess and have it added upon.”[xi]

Knowing what I know, feeling what I feel and having experienced what I have in regard to the person and power of the Savior, it is difficult for me to be patient and loving toward those who denounce me as a non-Christian. But I am constrained to do so in the spirit of Him who also was misunderstood and misrepresented. While it would be a wonderful thing to have others acknowledge our Christianity, we do not court favor nor will we compromise our distinctiveness.

We acknowledge and value the good that is done by so many to bring the message of Jesus from the New Testament to a world that desperately needs it.

The First Presidency of the Church in 1907 made the following declaration: “Our motives are not selfish; our purposes not petty and earth-bound; we contemplate the human race, past, present and yet to come, as immortal beings, for whose salvation it is our mission to labor; and to this work, broad as eternity and deep as the love of God, we devote ourselves, now, and forever.”[xii]

Actually, it’s not some “Trinity doctrine thing” that “other Christians” care about (or know about); it’s the whacko “archaeology” of Mormon history and beliefs that puts them at the top of the list of Bizarre Cult Fantasies, over and beyond those of New Age Cults and “Ancient Aliens.”

Google: “Mormon Archaeology”

 


From the Edge of the Mormon Empire / PBS Video

Hmmm… speaking of Puritans, few people realize that the Mormons are “renegade descendants” of those money-loving, east coast Chosen Ones: God loves Money more than he loves People! Mormons are, above all, about business $$$ today. Social typicals love bat-crap-crazy “Money Men.”

And yes, I live at the edge of the Mormon Empire…

From the Archives / Superstition, Mass Murder, Psychosis

Why am I “exposing” my thinking from many years ago? Because the frustration of “dealing with” social humans was so debilitating that I turned to a “new” asset – writing – in order to make my unconscious internal conflict something that I could “analyze” in terms of the social structure that mystified me.

That is, I discovered that nature had equipped me with thinking skills that could unlock the prison of human self-created misery. It’s ironic, I suppose, that finally “finding” that Asperger people, by whatever “name” one calls them, do exist, and that I am one of them, has actually “softened” my opinion of social typicals; modern humans are products of their brain type and obsessive social orientation, due to “evolutionary” trends and directions that they cannot control. The same can be said for neurodiverse and neurocomplex Homo sapiens: adaptation is guided by the environment; adaptations can be temporarily positive, but fundamentally self-destructive. “Being” Asperger, and exploring what that entails, has gradually allowed me to “be myself” – and to gain insight into the advantages of cognitive detachment in understanding “humanity” – which, contrary to psychologists, REQUIRES empathy – empathy that is learned and discovered by experience, and not by “magic”.

___________________________________________________________________________________________

From the archives:

Nature exists with or without us.

The Supernatural Domain is delusional projection; therefore, it is prudent to assume that any and all human ideas and assumptions are incorrect until proven otherwise! 

The supernatural realm is a product of the human mind – and most of its contents have no correlation with physical reality. As for the content that does correspond, mathematics supplies the descriptive language that makes it possible for us to predict events and create technology that actually works. Whatever jump-started human brain power, the results have been spectacular – from hand axes to planetary probes, from clay pots to cluster bombs. Designing simple tools is fairly easy; a thrown spear either travels true or it doesn’t. Improvements can be made and easily tested until “it works.”

Human beings not only learn from each other, but we observe and copy the behavior of other animals. Useful knowledge can be extracted from nonliving sources, such as the ability of water to do work.

Responses to the environment that belong to the category of conscious thought, and which are expressed by means of language (words and symbols), I would identify as The Supernatural Realm – a kind of warehouse or holding area for ideas waiting to be tested in the physical environment. Problems arise when we fail to test ideas! 

The ability to imagine objects that simply cannot exist, such as human bodies with functional wings attached, is remarkable as a source of useful imagination and dangerous mistakes. Ideas that produce aqueducts, sanitation, medical treatments, or aircraft correlate to conditions of physical reality, and therefore move out of fantasy and into a body of real knowledge. This system of observation, along with trial and error, and the building of a catalogue of useful environmental skills is what has made human adaptation to nearly all environments on earth possible. Each generation has capitalized on the real world techniques of the ancestors, but what about the content of the supernatural that has no value as a description of reality and which if tested, fails miserably?

Ironically this lack of correlation to reality may be what makes some ideas impossible to pry loose from the majority of human minds. Some supernatural ideas can easily piggyback onto acts of force: the religion of the conqueror needs no explanation nor justification. It is imposed and brutally enforced. The fact that the human brain can accommodate mutually impossible universes leads to fantastic possibilities and enormous problems. Without self-awareness and discipline, the result is a continual battle over ideas that are utterly insubstantial, but which are pursued with the furor of blind emotion.

There is widespread belief in the supernatural as an actual place in the sky, under the earth, or all around us, existing in a dimension in which none of the familiar parameters of reality exist, inhabited by powerful beings that magically take on the physical form of people, ghosts, animals, space aliens, meddlers, mind readers, winged messengers, law givers, deliverers of punishment – who stage car wrecks (then pick and choose who will be injured or die in them), killer tornados, and volcanic eruptions. These spirits prefer to communicate via secret signs and codes which have become the obsession of many. These disembodied beings monitor and punish bad thoughts, hand out winning lottery tickets to those who pray for them, but alternately refuse “wins” to those who are equally needy and prayerful. They demand offerings of flowers, food, blood, and money, and millions of lives sacrificed in wars.

More people believe in a universe where nothing works, or can possibly work, except through the temperamental will of unseen inflated humans, than understand the simple principle of cause and effect. This failure, in a time of space probes that successfully navigate the solar system, indicates that something is functionally delusional in the human brain. The ability of our big brain to investigate the world, to imagine possible action, and to test ideas for working results is remarkable, but our inability to discard concepts that do not reflect how the world works, is bizarre and dangerous. Powerful technologies are applied without understanding how they work. The dire consequences are real. Superstition is the mistaken assignment of cause and effect. The election of leaders who are automated by supernatural ideas, and our frustration when they cannot produce results, is a disaster. The physical processes that drive reality trump all human belief. The destructive power of the richest nation on earth is handed over to a leader without a technical or science-based education, on the claim that his intentions are good and those of the enemy are evil. Does this not seem inadequate?

In the supernatural state of mind, intent guarantees results: Cause, effect, and consequences are nowhere to be seen.

“Just where does sanity exist?” is a question that still awaits a functional answer. As ideas are vetted and removed to a rational catalogue, which in the U.S. has become the domain of science and engineering, the supernatural realm becomes enriched in fantasy.

Unless children are taught to distinguish between the two, they merely add to a population that is increasingly unable to function. Countries that we arrogantly label as backward embrace science and engineering education. Why is that?

 

Magical Thinking / Failed Drug Policy

The War on Drugs: result – an exponential increase in drug use, drug trafficking, drug-related crime, drug-related incarceration, and prescription drug abuse by doctors and patients.

[Image: ONDCP drug-policy infographic]

Drug Reform Policy demonstrates the disaster of magical thinking: the above policy statements belong to the supernatural dimension; that is, they are concepts devoid of concrete direction and application, which rely on the belief that words create reality. These statements of policy are not new; we have heard these pronouncements over and over, handed down like the Ten Commandments by an elite ruling class who are convinced that their abstractions have the weight of divine power. In fact, the reliance on meaningless verbiage has resulted in catastrophe.

To begin with, the basis for policy is unproven.

1. Drug use is caused by a lack of education. Really? Schools have been inundated with information about drugs for 40 years. How’s that working?

2. Expand access to treatment? Establish more rehab programs like the ones that we already have, which create a profitable cycle of failure and return to rehab for the addict? Or the default ‘treatment program’ of incarceration?

3. Reform the Criminal Justice system? This is outrageous. The Criminal Justice system is one of the primary vehicles of social engineering in the U.S. Why would the elite who profit from selective application of ‘justice’ wish to change it?

4. Support? With money, a pat on the back, a brochure, a poster, a speech, a photo op? Lift the stigma? Words, words, words. Support is one of those words that is popular because it’s a vague substitute for concrete action.

[Image: infographic on money spent in the War on Drugs]

[Image: change in the U.S. incarceration rate]

The most popular drug “treatment” program in the U.S. is incarceration.

What is going on with policymakers?

1. The ‘policymakers’ suffer from magical thinking, which is in this case, the belief that INTENT is sufficient to produce a result. It’s the Abracadabra effect. Words communicate intent, therefore words create real results.

2. The policymakers are cynical. They don’t believe in what they are saying, and write nice progressive policy while laughing at dumb citizens.

3. The War on Drugs is big business: individuals, corporations, lobbyists, consultants and contractors are sharing a boatload of cash: 15.6 billion dollars / year. Why would anyone involved want to end the war?

The war on poverty is the perfect partner in social engineering, and like the war on drugs, policy has created a permanent poverty class.

[Image: ACLU graphic on imprisonment]

 

 

Extraverted – Introverted Thinking / Ask C.G. Jung

Hmmm.. back to the library after 3 days with no access to the Internet; interesting experience. Anyway – had to go old school – actual books, pen and paper. Very productive, if frustrating. I’ve been meaning to get back to a question on my mind: What did Jung actually mean by extraverted and introverted thinking?

My suspicion was that most of us are using these terms wrongly, and confusing related terms such as intuition, instinct, “gut feeling,” “sense of,” “hunch” – a quick inspection of The Portable Jung, Viking Press, 1972 (one of those reference books I keep close), confirmed that indeed, my “memory” of these ideas and others was somewhat confused. Also, I had not reviewed the subject in light of what I now know about Asperger’s – and found that Jung’s ideas have new importance.

Remember: the following is extraversion and introversion applied to THINKING ONLY, not to the personality as a whole.

I will begin with one quote (page 197, should you have a copy) regarding extraverted thinking:

“…but when the thinking depends primarily not on objective data but on some second- hand idea, the very poverty of this thinking is compensated by an all the more impressive accumulation of facts (or data) congregating round a narrow and sterile point of view, with the result that many valuable and meaningful aspects are completely lost sight of. Many of the allegedly scientific outpourings of our own day owe their existence to this wrong orientation.”

Pretty prescient warning for someone writing nearly a century ago, and including his own profession!

Jung is not condemning extraverted thinking here – far from it – but is warning against its mistaken or perverted use in areas that are properly the domain of introverted thinking.

A definition: The general attitude of extraverted thinking is oriented by the object and objective data.

A definition: Introverted thinking is directed neither at objective facts nor at general ideas. He asks – “Is this even thinking?” This has significant application to the “Asperger” brain problem – Jung seems to have been peripherally aware of “visual thinking” in dream imagery and symbols in art and alchemy, and yet unable to “see” visual thinking as a distinct brain process, or to grasp its importance.

His admission is that both types of thinking are vital to each other, and that the failure of “our age” is that modern western culture “only acknowledges extraverted thinking” – failing to recognize that introverted thinking (basically, reflection on personal subjective experience) cannot be “removed” from human thought – nor should it be, because only this co-operative analysis can yield actionable meaning.

He rightly identifies the “problem” of modern “social-psychological” science as a not-really-scientific endeavor, because it does not deal with fact, but with traditional, common, banal ideas as its “outside sources” (Biblical Myth, Puritanical social order, etc.) and inevitably, simply supports the status quo: it is “purely imitative”, an “afterthought”; repeating “sterile” ideas that cannot go beyond what was obvious to begin with. A “materialistic mentality stuck on the object” that produces a “mass of undigested material” that requires “some simple, general idea that gives coherence to a disordered whole.”

Is this not exactly, in post after post, what my repeated criticism of today’s “helping, caring, fixing” industry has been? YES!

Much more to come…..

How Animals Think / Review of Book by Frans de Waal

How Animals Think

A new look at what humans can learn from nonhuman minds

Alison Gopnik, The Atlantic 

Review of: Are We Smart Enough to Know How Smart Animals Are?

By Frans de Waal / Norton

For 2,000 years, there was an intuitive, elegant, compelling picture of how the world worked. It was called “the ladder of nature.” In the canonical version, God was at the top, followed by angels, who were followed by humans. Then came the animals, starting with noble wild beasts and descending to domestic animals and insects. Human animals followed the scheme, too. Women ranked lower than men, and children were beneath them. The ladder of nature was a scientific picture, but it was also a moral and political one. It was only natural that creatures higher up would have dominion over those lower down. (This view remains dominant in American thinking: “The Great Chain of Being” is still with us and underlies social reality)

Darwin’s theory of evolution by natural selection delivered a serious blow to this conception. (Unless one denies evolution)  Natural selection is a blind historical process, stripped of moral hierarchy. A cockroach is just as well adapted to its environment as I am to mine. In fact, the bug may be better adapted—cockroaches have been around a lot longer than humans have, and may well survive after we are gone. But the very word evolution can imply a progression—New Agers talk about becoming “more evolved”—and in the 19th century, it was still common to translate evolutionary ideas into ladder-of-nature terms.


Modern biological science has in principle rejected the ladder of nature. But the intuitive picture is still powerful. In particular, the idea that children and nonhuman animals are lesser beings has been surprisingly persistent. Even scientists often act as if children and animals are defective adult humans, defined by the abilities we have and they don’t. Neuroscientists, for example, sometimes compare brain-damaged adults to children and animals.

We always should have been suspicious of this picture, but now we have no excuse for continuing with it. In the past 30 years, research has explored the distinctive ways in which children as well as animals think, and the discoveries deal the coup de grâce to the ladder of nature. (Not in psychology!) The primatologist Frans de Waal has been at the forefront of the animal research, and its most important public voice.

In Are We Smart Enough to Know How Smart Animals Are?, he makes a passionate and convincing case for the sophistication of nonhuman minds.

De Waal outlines both the exciting new results and the troubled history of the field. The study of animal minds was long divided between what are sometimes called “scoffers” and “boosters.” Scoffers refused to acknowledge that animals could think at all: Behaviorism—the idea that scientists shouldn’t talk about minds, only about stimuli and responses—stuck around in animal research long after it had been discredited in the rest of psychology. (Are you kidding? “Black Box” psychology is alive and well, especially in American education!) Boosters often relied on anecdotes and anthropomorphism instead of experiments. De Waal notes that there isn’t even a good general name for the new field of research. Animal cognition ignores the fact that humans are animals too. De Waal argues for evolutionary cognition instead.

Psychologists often assume that there is a special cognitive ability—a psychological secret sauce—that makes humans different from other animals. The list of candidates is long: tool use, cultural transmission, the ability to imagine the future or to understand other minds, and so on. But every one of these abilities shows up in at least some other species in at least some form. De Waal points out various examples, and there are many more. New Caledonian crows make elaborate tools, shaping branches into pointed, barbed termite-extraction devices. A few Japanese macaques learned to wash sweet potatoes and even to dip them in the sea to make them more salty, and passed that technique on to subsequent generations. Western scrub jays “cache”—they hide food for later use—and studies have shown that they anticipate what they will need in the future, rather than acting on what they need now.

From an evolutionary perspective, it makes sense that these human abilities also appear in other species. After all, the whole point of natural selection is that small variations among existing organisms can eventually give rise to new species. Our hands and hips and those of our primate relatives gradually diverged from the hands and hips of common ancestors. It’s not that we miraculously grew hands and hips and other animals didn’t. So why would we alone possess some distinctive cognitive skill that no other species has in any form?

De Waal explicitly rejects the idea that there is some hierarchy of cognitive abilities. (Thank-you!) Nevertheless, an implicit tension in his book shows just how seductive the ladder-of-nature view remains. Simply saying that the “lower” creatures share abilities with creatures once considered more advanced still suggests something like a ladder—it’s just that chimps or crows or children are higher up than we thought. So the summary of the research ends up being: We used to think that only adult humans could use tools/participate in culture/imagine the future/understand other minds, but actually chimpanzees/crows/toddlers can too. Much of de Waal’s book has this flavor, though I can’t really blame him, since developmental psychologists like me have been guilty of the same rhetoric.

As de Waal recognizes, a better way to think about other creatures would be to ask ourselves how different species have developed different kinds of minds to solve different adaptive problems. (And – How “different humans” have done, and continue to do, the same!) Surely the important question is not whether an octopus or a crow can do the same things a human can, but how those animals solve the cognitive problems they face, like how to imitate the sea floor or make a tool with their beak. Children and chimps and crows and octopuses are ultimately so interesting not because they are mini-mes, but because they are aliens—not because they are smart like us, but because they are smart in ways we haven’t even considered. All children, for example, pretend with a zeal that seems positively crazy; if we saw a grown-up act like every 3-year-old does, we would get him to check his meds. (WOW! Nasty comment!)

Sometimes studying those alien ways of knowing can illuminate adult-human cognition. Children’s pretend play may help us understand our adult taste for fiction. De Waal’s research provides another compelling example. We human beings tend to think that our social relationships are rooted in our perceptions, beliefs, and desires, and our understanding of the perceptions, beliefs, and desires of others — what psychologists call our “theory of mind.” (And yet horrible behavior toward other humans and animals demonstrates that AT BEST, this “mind-reading” simply makes humans better social manipulators and predators) In the ’80s and ’90s, developmental psychologists, including me, showed that preschoolers and even infants understand minds apart from their own. But it was hard to show that other animals did the same. “Theory of mind” became a candidate for the special, uniquely human trick. (A social conceit)

Yet de Waal’s studies show that chimps possess a remarkably developed political intelligence—they are profoundly interested in figuring out social relationships such as status and alliances. (A primatologist friend told me that even before they could stand, the baby chimps he studied would use dominance displays to try to intimidate one another.) It turns out, as de Waal describes, that chimps do infer something about what other chimps see. But experimental studies also suggest that this happens only in a competitive political context. The evolutionary anthropologist Brian Hare and his colleagues gave a subordinate chimp a choice between pieces of food that a dominant chimp had seen hidden and other pieces it had not seen hidden. The subordinate chimp, who watched all the hiding, stayed away from the food the dominant chimp had seen, but took the food it hadn’t seen. (A typical anecdotal factoid that proves nothing)

Anyone who has gone to an academic conference will recognize that we, too, are profoundly political creatures. We may say that we sign up because we’re eager to find out what our fellow Homo sapiens think, but we’re just as interested in who’s on top and where the alliances lie. Many of the political judgments we make there don’t have much to do with our theory of mind. We may defer to a celebrity-academic silverback even if we have no respect for his ideas. In Jane Austen, Elizabeth Bennet cares how people think, while Lady Catherine cares only about how powerful they are, but both characters are equally smart and equally human.

The challenge of studying creatures that are so different from us is to get into their heads.

Of course, we know that humans are political, but we still often assume that our political actions come from thinking about beliefs and desires. Even in election season we assume that voters figure out who will enact the policies they want, and we’re surprised when it turns out that they care more about who belongs to their group or who is the top dog. The chimps may give us an insight into a kind of sophisticated and abstract social cognition that is very different from theory of mind—an intuitive sociology rather than an intuitive psychology.

Until recently, however, there wasn’t much research into how humans develop and deploy this kind of political knowledge—a domain where other animals may be more cognitively attuned than we are. It may be that we understand the social world in terms of dominance and alliance, like chimps, but we’re just not usually as politically motivated as they are. (Obsession with social status is so pervasive, that it DISRUPTS neurotypical ability to function!) Instead of asking whether we have a better everyday theory of mind, we might wonder whether they have a better everyday theory of politics.

Thinking seriously about evolutionary cognition may also help us stop looking for a single magic ingredient that explains how human intelligence emerged. De Waal’s book inevitably raises a puzzling question. After all, I’m a modern adult human being, writing this essay surrounded by furniture, books, computers, art, and music—I really do live in a world that is profoundly different from the world of the most brilliant of bonobos. If primates have the same cognitive capacities we do, where do those differences come from?

The old evolutionary-psychology movement argued that we had very specific “modules,” special mental devices, that other primates didn’t have. But it’s far likelier that humans and other primates started out with relatively minor variations in more-general endowments and that those variations have been amplified over the millennia by feedback processes. For example, small initial differences in what biologists call “life history” can have big cumulative effects. Humans have a much longer childhood than other primates do. Young chimps gather as much food as they consume by the time they’re 5. Even in forager societies, human kids don’t do that until they’re 15. This makes being a human parent especially demanding. But it also gives human children much more time to learn—in particular, to learn from the previous generation. (If that generation is “messed up” to the point of incompetence, the advantage disappears and disaster results – which is what we see in the U.S. today). Other animals can absorb culture from their forebears too, like those macaques with their proto-Pringle salty potatoes. But they may have less opportunity and motivation to exercise these abilities than we do.

Even if the differences between us and our nearest animal relatives are quantitative rather than qualitative—a matter of dialing up some cognitive capacities and downplaying others—they can have a dramatic impact overall. A small variation in how much you rely on theory of mind to understand others as opposed to relying on a theory of status and alliances can exert a large influence in the long run of biological and cultural evolution.

Finally, de Waal’s book prompts some interesting questions about how emotion and reason mix in the scientific enterprise. The quest to understand the minds of animals and children has been a remarkable scientific success story. It inevitably has a moral, and even political, dimension as well. The challenge of studying creatures that are so different from us is to get into their heads, to imagine what it is like to be a bat or a bonobo or a baby. A tremendous amount of sheer scientific ingenuity is required to figure out how to ask animals or children what they think in their language instead of in ours.

At the same time, it also helps to have a sympathy for the creatures you study, a feeling that is not far removed from love. And this sympathy is bound to lead to indignation when those creatures are dismissed or diminished. That response certainly seems justified when you consider the havoc that the ladder-of-nature picture has wrought on the “lower” creatures. (Just ask ASD and Asperger children how devastating this lack of “empathy” on the part of the “helping, caring, fixing” industry is.)

But does love lead us to the most-profound insights about another being, or the most-profound illusions? Elizabeth Bennet and Lady Catherine would have differed on that too, and despite all our theory-of-mind brilliance (sorry – that’s ridiculous optimism), we humans have yet to figure out when love enlightens and when it leads us astray. So we keep these emotions under wraps in our scientific papers, for good reason. Still, popular books are different, and both sympathy and indignation are in abundant supply in de Waal’s.

Perhaps the combination of scientific research and moral sentiment can point us to a different metaphor for our place in nature. Instead of a ladder, we could invoke the 19th-century naturalist Alexander von Humboldt’s web of life. We humans aren’t precariously balanced on the top rung looking down at the rest. (Tell that to all those EuroAmerican males who dictate socio-economic-scientific terms of “humans who count”) It’s more scientifically accurate, and more morally appealing, to say that we are just one strand in an intricate network of living things.

About the Author

Alison Gopnik is a professor of psychology and an affiliate professor of philosophy at UC Berkeley.

Days of Relief / Ignoring the Social Condemnation of Asperger’s

The past few days I’ve been ignoring Asperger’s, the “social disease” as characterized by psychologists (and their misuse of “neuroscience” to “prove” their ugly prejudices), because I decided to finally revamp my blog (formerly Some People are Lost – now Miss America Gone Wrong) and have been taken back in time to a productive period, when I began to discover myself as a person I could like.

MAGW is important to me because it was written (1991-1992) when I didn’t know that the “condition” existed. Asperger’s was “created” around that time, and until very recently, females were excluded, mainly because male psychologists (and most males) dismiss females when it comes to “brain abilities” in engineering, math and the sciences. Women can be “biology types” because – they have uteruses. Ironically, most psychologists are female today, which is not a “compliment” to the field. Whenever a job category is overtaken by women, it means that the field has lost status and that the pay scale has dropped.

In 1991 I was in graduate school, serving time in the academic Gulag run by male assholes. It’s that simple. I finally and totally rebelled over bad treatment, and frankly, the overt hatred of females that I’d “put up with” my entire life.

When I googled “recent research” in Asperger’s this morning, the same old crap appeared – an onslaught of “studies” that claim to prove that Asperger people are robotic deviants; fictitious claims that the “bounty hunters” are closing in on the brain defects and genetic mistakes that make us social outcasts.

No one seems to even raise the question as to why being “hyposocial” and intelligent is considered to be a state of pathology – literally a “social crime” being misrepresented as biological pathology.

Why must each and every Asperger-type individual begin life as a “broken” human? And, once labeled, no matter how well we manage to survive in a hostile social environment, we can never prove that we are a legitimate type of Homo sapiens. We are guilty, and remain guilty of a social crime, without the opportunity to prove our status as “part of” our species. We are literally considered to be lower than chimps, monkeys, rats and mice on the mystical supernatural and magical “empathy scale” – which somehow is granted the “new definition” of what is “required” to be considered a “real” human being.

My “escape” from social tyranny twenty-six years ago was fueled by disgust – I had no intention other than finding relief for a few weeks before I would again have to take on survival in “American social reality”.

Surprise! It was the happiest time in my life. I began to uncover the “me” that was buried under a lifetime of “being told who I was” – and I liked the person who began to be revealed as I left behind the social order that classifies, defines and injures human beings. The people I met were often in the “same boat” (or RV, tent or car) as myself: refugees from a cruel and unjust economic and social system that had kicked them to the curb – and declared them to have no value.

What is disturbing is that this system has grown in strength and callous brutality over the past three decades.

DUKE U. “Autism” Study Paid for by Dept. of Defense / TOTAL BS

THIS IS NOT SCIENCE!

Comment: Accessed via an entry under “NEWS” on the Dept. of Defense CDMRP Autism Research “Highlights” for 2013. Can we assume that DOD “funded” this particular DUKE study? On Duke’s website, under funding opportunities (search CDMRP), there are dozens of grants offered in many categories. So – assuming this study was funded by DOD-CDMRP, what does it have to do with autism research?

And if it is some “arbitrarily” selected “news” – why is it important enough to be listed under “Highlights” for 2013 – and what does it have to do with autism research?

Note: “Weasel Words” in green

Decision to Give a Group Effort in the Brain

Monkeys find some reward in giving, even though they prefer to receive

A monkey would probably never agree that it is better to give than to receive, but they do apparently get some reward from giving to another monkey.

During a task in which rhesus macaques had control over whether they or another monkey would receive a squirt of fruit juice, three distinct areas of the brain were found to be involved in weighing benefits to oneself against benefits to the other, according to new research by Duke University researchers. TOTAL BS ASSUMPTION. There is no way to prove that monkeys formulate “social concepts” by which to make decisions about their behavior. This attributes a level and type of human cognition PROJECTED onto the monkey brain, and assumes that this “quality of concept formation” and evaluation can be detected by neuronal activity. TOTAL fantasy!

The team used sensitive electrodes to detect the activity of individual neurons as the animals weighed different scenarios, such as whether to reward themselves, the other monkey or nobody at all. So – the monkeys “told the researchers” that this is what they were doing? HOW? Three areas of the brain were seen to weigh the problem differently depending on the social context of the reward. The research appears Dec. 24 in the journal Nature Neuroscience. What this reveals is magical thinking on the part of the researchers!

Using a computer screen to allocate juice rewards, the monkeys preferred to reward themselves first and foremost. But they also chose to reward the other monkey when it was either that or nothing for either of them. They also were more likely to give the reward to a monkey they knew over one they didn’t, preferred to give to lower status than higher status monkeys, and had almost no interest in giving the juice to an inanimate object.

Calculating the social aspects of the reward system (OMG!) seems to be a combination of action by two centers involved in calculating all sorts of rewards and a third center that adds the social dimension, according to lead researcher Michael Platt, director of the Duke Institute for Brain Sciences and the Center for Cognitive Neuroscience.

Comment: If one wanted to “make up” a study that exemplified the crisis of pseudoscience in American research – this one would serve as the perfect template.

The orbital frontal cortex, right above the eyes, was activated (would also be activated if you hit it with a baseball bat) when calculating rewards to the self. The anterior cingulate sulcus in the middle of the top of the brain seemed to calculate giving up a reward. But both centers appear “divorced from social context,” Platt said. A third area, the anterior cingulate gyrus (ACCg), seemed to “care a lot about what happened to the other monkey,” Platt said.

Comment: So, “brain parts,” or even a few neurons, have the capacity “to care” (have empathy, compassion and awareness of “social concepts”)? You’ve got to be kidding!

Based on results of various combinations of the reward-giving scenario the monkeys were put through, it would appear that neurons in the ACCg encode both the giving and receiving of rewards, and do so in a remarkably similar way.

The use of single-neuron electrodes to measure the activity of brain areas gives a much more precise picture than brain imaging, Platt said. Even the best imaging available now is “a six-second snapshot of tens of thousands of neurons,” which are typically operating in milliseconds.

Comment: Technological “advance” in detection DOES NOT magically “cure” or improve the idiotic thinking of researchers.

What the team has seen happening is consistent with other studies of damaged ACCg regions in which animals lost their typical hesitation about retrieving food when facing social choices. This same region of the brain is active in people when they empathize with someone else.

OMG! Will someone please send DUKE “BS” detectors to install in their labs???

“Many neurons in the anterior cingulate gyrus (ACCg) respond both when monkeys choose a drink for themselves and when they choose to give a drink to another monkey,” Platt said. (Then how do you differentiate these two responses?) “One might view these as sort of mirror neurons for the reward system.” The region is active as an animal merely watches another animal receiving a reward without having one themselves.

The research is another piece of the puzzle as neuroscientists search for the roots of charity and social behavior in our species and others. (This is not science) There have been two schools of thought about how the social reward system is set up, Platt said. One holds that there is generic circuitry for rewards that has been adapted to our social behavior because it helped humans and other social animals like monkeys thrive. Another school holds that social behavior is so important to humans and other highly social animals like monkeys that there may be some special circuits for it, Platt said.

This finding, in macaques that have only a very distant common ancestor with us and are “not a particularly prosocial animal,” suggests that “this specialized social circuitry evolved a long time ago presumably to support cooperative behavior,” Platt said.

The research was supported by grants from the Ruth K. Broad Biomedical Foundation, Canadian Institutes of Health Research, National Institute of Mental Health (MH095894), and Department of Defense (W81XWH-11-1-0584). There it is!

And still we must ask: what does this have to do with Autism?

CITATION: “Neuronal reference frames for social decisions in primate frontal cortex,” Steve W.C. Chang, Jean-François Gariépy, Michael L. Platt. Nature Neuroscience, Dec. 24, 2012. doi: 10.1038/nn.3287

Do Statistics Lie? Yes They Do / 3 Articles Explain HOW AND WHO

This scandalous practice of deceit-for-funding-and-profit is why I persist in slamming psychology as “not science.”

It’s not only that these are research scams that waste funding and devalue science; human beings are harmed as a result of this abuse of statistics. Asperger and neurodiverse types are being “defined” as “defective” human beings: there is no scientific basis for this “socially-motivated” construct. The current Autism-ASD-Asperger Industry is a FOR PROFIT INDUSTRY that exploits individuals, their families, schools, communities, taxpayers and funding for research. It also serves to enforce “the social order” dictated by elites.

The Mind-Reading Salmon: The True Meaning of Statistical Significance

By Charles Seife on August 1, 2011

If you want to convince the world that a fish can sense your emotions, only one statistical measure will suffice: the p-value.

The p-value is an all-purpose measure that scientists often use to determine whether or not an experimental result is “statistically significant.” Unfortunately, sometimes the test does not work as advertised, and researchers imbue an observation with great significance when in fact it might be a worthless fluke.

Say you’ve performed a scientific experiment testing a new heart attack drug against a placebo. At the end of the trial, you compare the two groups. Lo and behold, the patients who took the drug had fewer heart attacks than those who took the placebo. Success! The drug works!

Well, maybe not. There is a 50 percent chance that even if the drug is completely ineffective, patients taking it will do better than those taking the placebo. (After all, one group has to do better than the other; it’s a toss-up whether the drug group or placebo group will come up on top.)

The p-value puts a number on the effects of randomness. It is the probability of seeing a positive experimental outcome even if your hypothesis is wrong. A long-standing convention in many scientific fields is that any result with a p-value below 0.05 is deemed statistically significant. An arbitrary convention, it is often the wrong one. When you make a comparison of an ineffective drug to a placebo, you will typically get a statistically significant result one time out of 20. And if you make 20 such comparisons in a scientific paper, on average, you will get one significant result with a p-value less than 0.05—even when the drug does not work.

Many scientific papers make 20 or 40 or even hundreds of comparisons. In such cases, researchers who do not adjust the standard p-value threshold of 0.05 are virtually guaranteed to find statistical significance in results that are meaningless statistical flukes. A study that ran in the February issue of the American Journal of Clinical Nutrition tested dozens of compounds and concluded that those found in blueberries lower the risk of high blood pressure, with a p-value of 0.03. But the researchers looked at so many compounds and made so many comparisons (more than 50) that it was almost a sure thing that some of the p-values in the paper would be less than 0.05 just by chance.
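A minimal simulation sketch (mine, not Seife’s) of the arithmetic above: run a batch of comparisons of a completely ineffective treatment against placebo and count how many come out “statistically significant” purely by chance. The sample size and number of comparisons are arbitrary choices for illustration.

```python
# Sketch: many null comparisons, counting spurious p < 0.05 results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_comparisons, n_subjects = 50, 30
false_positives = 0
for _ in range(n_comparisons):
    drug = rng.normal(0, 1, n_subjects)      # no real effect in this group...
    placebo = rng.normal(0, 1, n_subjects)   # ...or in this one
    _, p = stats.ttest_ind(drug, placebo)
    if p < 0.05:
        false_positives += 1
print(f"'Significant' results out of {n_comparisons} null comparisons: {false_positives}")
# Expectation is about 50 * 0.05 = 2.5 flukes, even though nothing works.
```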

The same applies to a well-publicized study that a team of neuroscientists once conducted on a salmon. When they presented the fish with pictures of people expressing emotions, regions of the salmon’s brain lit up. The result was statistically significant with a p-value of less than 0.001; however, as the researchers argued, there are so many possible patterns that a statistically significant result was virtually guaranteed, so the result was totally worthless. The p-value notwithstanding, there was no way that the fish could have reacted to human emotions. The salmon in the fMRI happened to be dead.

________________________________

Statistical Significance Abuse

A lot of research makes scientific evidence seem more “significant” than it is

updated Sep 15, 2016 (first published 2011) by Paul Ingraham, Vancouver, Canada 

I am a science writer and a former Registered Massage Therapist with a decade of experience treating tough pain cases. I was the Assistant Editor of ScienceBasedMedicine.org for several years.

SUMMARY

Many study results are called “statistically significant,” giving unwary readers the impression of good news. But it’s misleading: statistical significance means only that the measured effect of a treatment is probably real (not a fluke). It says nothing about how large the effect is. Many small effect sizes are reported only as “statistically significant” — it’s a nearly standard way for biased researchers to make it sound like they found something more important than they did.

This article is about two common problems with “statistical significance” in medical research. Both problems are particularly rampant in the study of massage therapy, chiropractic, and alternative medicine in general, and are wonderful examples of why science is hard, why “most published research findings are false,” and why genuine, robust treatment effects are rare:

  1. mixing up statistical and clinical significance and the probability of being “right”
  2. reporting statistical significance of the wrong dang thing

Significance Problem #1: Two flavours of “significant” (statistical versus clinical)

Research can be statistically significant, but otherwise unimportant. Statistical significance means that data signifies something… not that it actually matters.

Statistical significance on its own is the sound of one hand clapping. But researchers often focus on the positive: “Hey, we’ve got statistical significance! Maybe!” So they summarize their findings as “significant” without telling us the size of the effect they observed, which is a little devious or sloppy. Almost everyone is fooled by this — except 98% of statisticians — because the word “significant” carries so much weight. It really sounds like a big deal, like good news. But it’s like bragging about winning a lottery without mentioning that you only won $25.

Statistical significance without other information really doesn’t mean all that much. It is not only possible but common to have clinically trivial results that are nonetheless statistically significant. How much is that statistical significance worth? It depends … on details that are routinely omitted, which is convenient if you’re pushing a pet theory, isn’t it?

Imagine a study of a treatment for pain, which has a statistically significant effect, but it’s a tiny effect: that is, it only reduces pain slightly. You can take that result to the bank (supposedly) — it’s real! It’s statistically significant! But … no more so than a series of coin flips that yields enough heads in a row to raise your eyebrows. And the effect was still tiny. So calling these results “significant” is using math to put lipstick on a pig.

There are a lot of decorated pigs in research: “significant” results that are possibly not even that, and clinically boring in any case.

Just because a published paper presents a statistically significant result does not mean it necessarily has a biologically meaningful effect.
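To make the point concrete, here is a minimal sketch (the numbers are invented, not taken from any study): a pain treatment that truly works, but only by 0.1 points on a 10-point scale, becomes comfortably “statistically significant” once the trial is large enough.

```python
# Sketch: a real but clinically trivial effect reaching "significance".
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 10_000                                    # patients per arm
control = rng.normal(5.0, 2.0, n)             # mean pain 5.0, SD 2.0
treated = rng.normal(4.9, 2.0, n)             # truly better, but only just
_, p = stats.ttest_ind(treated, control)
print(f"p-value: {p:.4f}")                    # very likely well below 0.05
print(f"Effect: {control.mean() - treated.mean():.2f} points out of 10")
# "Statistically significant" and clinically boring at the same time.
```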

++++++++++++++++++++++++++++++++

Science Left Behind: Feel-Good Fallacies and the Rise of the Anti-Scientific Left, Alex Berezow & Hank Campbell

If you torture data for long enough, it will confess to anything.

P-values, where P stands for “please stop the madness”

Small study proves showers work

Too often people smugly dismiss a study just because of small sample size, ignoring all other considerations, like effect size … a rookie move. For instance, you really do not need to test lots of showers to prove that they are an effective moistening procedure. The power of a study is a product of both sample and effect size (and more).
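A minimal simulation sketch of that point (sample size and effect sizes are my own arbitrary choices): a huge “showers make you wet”-sized effect is detected almost every time with only ten subjects per group, while a small effect with the same sample is usually missed.

```python
# Sketch: statistical power depends on effect size as much as sample size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def power(effect_size, n_per_group, sims=2000, alpha=0.05):
    """Fraction of simulated two-group trials that reach p < alpha."""
    hits = 0
    for _ in range(sims):
        a = rng.normal(0, 1, n_per_group)
        b = rng.normal(effect_size, 1, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            hits += 1
    return hits / sims

print(f"Small effect (d = 0.2), n = 10 per group: power ~ {power(0.2, 10):.0%}")
print(f"Huge effect  (d = 2.0), n = 10 per group: power ~ {power(2.0, 10):.0%}")
```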

Statistical significance is boiled down to one convenient number: the infamous, cryptic, bizarro and highly over-rated P-value. Cue Darth Vader theme. This number is “diabolically difficult” to understand and explain, and so p-value illiteracy and bloopers are epidemic (Goodman identifies “A dirty dozen: twelve p-value misconceptions”). It seems to be hated by almost everyone who actually understands it, because almost no one else does. Many believe it to be a blight on modern science, including the American Statistical Association — and if they don’t like it, should you?

The mathematical soul of the p-value is, frankly, not really worth knowing. It’s just not that fantastic an idea. The importance of scientific research results cannot be jammed into a single number (nor was that ever the intent). And so really wrapping your head around it is no more important than learning the gritty details of the Rotten Tomatoes algorithm when you’re trying to decide whether to see that new Godzilla (2014) movie.

What you do need to know is the role that p-values play in research today. You need to know that “it depends” is a massive understatement, and that there are “several reasons why the p-value is an unobjective and inadequate measure of evidence.” Because it is so often abused, it’s way more important to know what the p-value is NOT than what it IS. For instance, it’s particularly useless when applied to studies of really outlandish ideas. And yet it’s one of the staples of pseudoscience, because it is such an easy way to make research look better than it is.

Above all, a good p-value is not a low chance that the results were a fluke or false alarm — which is by far the most common misinterpretation (and the first of Goodman’s Dirty Dozen). The real definition is a kind of mirror image of that: it’s not a low chance of a false alarm, but a low chance of an effect that actually is a false alarm. The false alarm is a given! That part of the equation is already filled in, the premise of every p-value. For better or worse, the p-value is the answer to this question: if there really is nothing going on here, what are the odds of getting these results? A low number is encouraging, but it doesn’t say the results aren’t a fluke, because it can’t — it was calculated by assuming they are.

The only way to actually find out if the effect is real or a fluke is to do more experiments. If they all produce results that would be unlikely if there was no real effect, then you can say the results are probably real. The p-value alone can only be a reason to check again — not statistical congratulations on a job well done. And yet that’s exactly how most researchers use it. And most science journalists.

The problem with p-values

Academic psychology and medical testing are both dogged by unreliability. The reason is clear: we got probability wrong

The aim of science is to establish facts, as accurately as possible. It is therefore crucially important to determine whether an observed phenomenon is real, or whether it’s the result of pure chance. If you declare that you’ve discovered something when in fact it’s just random, that’s called a false discovery or a false positive. And false positives are alarmingly common in some areas of medical science. 

In 2005, the epidemiologist John Ioannidis at Stanford caused a storm when he wrote the paper ‘Why Most Published Research Findings Are False’, focusing on results in certain areas of biomedicine. He’s been vindicated by subsequent investigations.

For example, a recent article found that repeating 100 different results in experimental psychology confirmed the original conclusions in only 38 per cent of cases. It’s probably at least as bad for brain-imaging studies and cognitive neuroscience. How can this happen?

The problem of how to distinguish a genuine observation from random chance is a very old one. It’s been debated for centuries by philosophers and, more fruitfully, by statisticians. It turns on the distinction between induction and deduction. Science is an exercise in inductive reasoning: we are making observations and trying to infer general rules from them. Induction can never be certain. In contrast, deductive reasoning is easier: you deduce what you would expect to observe if some general rule were true and then compare it with what you actually see. The problem is that, for a scientist, deductive arguments don’t directly answer the question that you want to ask. 

What matters to a scientific observer is how often you’ll be wrong if you claim that an effect is real, rather than being merely random. That’s a question of induction, so it’s hard. In the early 20th century, it became the custom to avoid induction, by changing the question into one that used only deductive reasoning. In the 1920s, the statistician Ronald Fisher did this by advocating tests of statistical significance. These are wholly deductive and so sidestep the philosophical problems of induction.

Tests of statistical significance proceed by calculating the probability of making our observations (or the more extreme ones) if there were no real effect. This isn’t an assertion that there is no real effect, but rather a calculation of what would be expected if there were no real effect. The postulate that there is no real effect is called the null hypothesis, and the probability is called the p-value. Clearly the smaller the p-value, the less plausible the null hypothesis, so the more likely it is that there is, in fact, a real effect. All you have to do is to decide how small the p-value must be before you declare that you’ve made a discovery. But that turns out to be very difficult.

The problem is that the p-value gives the right answer to the wrong question. What we really want to know is not the probability of the observations given a hypothesis about the existence of a real effect, but rather the probability that there is a real effect – that the hypothesis is true – given the observations. And that is a problem of induction.

Confusion between these two quite different probabilities lies at the heart of why p-values are so often misinterpreted. It’s called the error of the transposed conditional. Even quite respectable sources will tell you that the p-value is the probability that your observations occurred by chance. And that is plain wrong.

Suppose, for example, that you give a pill to each of 10 people. You measure some response (such as their blood pressure). Each person will give a different response. And you give a different pill to 10 other people, and again get 10 different responses. How do you tell whether the two pills are really different?

The conventional procedure would be to follow Fisher and calculate the probability of making the observations (or the more extreme ones) if there were no true difference between the two pills. That’s the p-value, based on deductive reasoning. P-values of less than 5 per cent have come to be called ‘statistically significant’, a term that’s ubiquitous in the biomedical literature, and is now used to suggest that an effect is real, not just chance.
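As a rough illustration of the procedure just described (the blood-pressure changes below are invented for the sketch, in mmHg, ten people per pill):

```python
# Sketch of Fisher's recipe: how surprising are these data if the two pills
# are really identical?
from scipy import stats

pill_a = [-4, -2, -7, 0, -5, -3, -6, -1, -2, -4]   # hypothetical responses
pill_b = [-1,  1, -3, 2, -2,  0, -4,  1, -1, -2]
_, p = stats.ttest_ind(pill_a, pill_b)
print(f"p = {p:.3f}")
# The p-value answers only that deductive question; it says nothing directly
# about the probability that the pills really differ.
```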

But the dichotomy between ‘significant’ and ‘not significant’ is absurd. There’s obviously very little difference between the implication of a p-value of 4.7 per cent and of 5.3 per cent, yet the former has come to be regarded as success and the latter as failure. And ‘success’ will get your work published, even in the most prestigious journals. That’s bad enough, but the real killer is that, if you observe a ‘just significant’ result, say P = 0.047 (4.7 per cent) in a single test, and claim to have made a discovery, the chance that you are wrong is at least 26 per cent, and could easily be more than 80 per cent. How can this be so?

For one, it’s of little use to say that your observations would be rare if there were no real difference between the pills (which is what the p-value tells you), unless you can say whether or not the observations would also be rare when there is a true difference between the pills. Which brings us back to induction.

The problem of induction was solved, in principle, by the Reverend Thomas Bayes in the middle of the 18th century. He showed how to convert the probability of the observations given a hypothesis (the deductive problem) to what we actually want, the probability that the hypothesis is true given some observations (the inductive problem). But how to use his famous theorem in practice has been the subject of heated debate ever since.

Take the proposition that the Earth goes round the Sun. It either does or it doesn’t, so it’s hard to see how we could pick a probability for this statement. Furthermore, the Bayesian conversion involves assigning a value to the probability that your hypothesis is right before any observations have been made (the ‘prior probability’). Bayes’s theorem allows that prior probability to be converted to what we want, the probability that the hypothesis is true given some relevant observations, which is known as the ‘posterior probability’.

These intangible probabilities persuaded Fisher that Bayes’s approach wasn’t feasible. Instead, he proposed the wholly deductive process of null hypothesis significance testing. The realisation that this method, as it is commonly used, gives alarmingly large numbers of false positive results has spurred several recent attempts to bridge the gap.  

There is one uncontroversial application of Bayes’s theorem: diagnostic screening, the tests that doctors give healthy people to detect warning signs of disease. They’re a good way to understand the perils of the deductive approach.

In theory, picking up on the early signs of illness is obviously good. But in practice there are usually so many false positive diagnoses that it just doesn’t work very well. Take dementia. Roughly 1 per cent of the population suffer from mild cognitive impairment, which might, but doesn’t always, lead to dementia. Suppose that the test is quite a good one, in the sense that 95 per cent of the time it gives the right (negative) answer for people who are free of the condition. That means that 5 per cent of the people who don’t have cognitive impairment will test, falsely, as positive. That doesn’t sound bad. It’s directly analogous to tests of significance which will give 5 per cent of false positives when there is no real effect, if we use a p-value of less than 5 per cent to mean ‘statistically significant’.

But in fact the screening test is not good – it’s actually appallingly bad, because 86 per cent, not 5 per cent, of all positive tests are false positives. So only 14 per cent of positive tests are correct. This happens because most people don’t have the condition, and so the false positives from these people (5 per cent of 99 per cent of the people), outweigh the number of true positives that arise from the much smaller number of people who have the condition (80 per cent of 1 per cent of the people, if we assume 80 per cent of people with the disease are detected successfully). There’s a YouTube video of my attempt to explain this principle, or you can read my recent paper on the subject.
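The screening arithmetic is short enough to check directly; this sketch simply restates the figures given above (1 per cent prevalence, 95 per cent specificity, 80 per cent sensitivity):

```python
# Sketch: why a "95% accurate" screening test yields mostly false positives
# when the condition is rare.
prevalence = 0.01     # 1% of people have mild cognitive impairment
sensitivity = 0.80    # 80% of true cases test positive
specificity = 0.95    # 95% of healthy people test negative

true_pos = prevalence * sensitivity                # 0.8% of those tested
false_pos = (1 - prevalence) * (1 - specificity)   # 4.95% of those tested
share_false = false_pos / (true_pos + false_pos)
print(f"Positive tests that are false: {share_false:.0%}")   # about 86%
```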

Notice, though, that it’s possible to calculate the disastrous false-positive rate for screening tests only because we have estimates for the prevalence of the condition in the whole population being tested. This is the prior probability that we need to use Bayes’s theorem. If we return to the problem of tests of significance, it’s not so easy. The analogue of the prevalence of disease in the population becomes, in the case of significance tests, the probability that there is a real difference between the pills before the experiment is done – the prior probability that there’s a real effect. And it’s usually impossible to make a good guess at the value of this figure.

An example should make the idea more concrete. Imagine testing 1,000 different drugs, one at a time, to sort out which works and which doesn’t. You’d be lucky if 10 per cent of them were effective, so let’s proceed by assuming a prevalence or prior probability of 10 per cent.  Say we observe a ‘just significant’ result, for example, a P = 0.047 in a single test, and declare that this is evidence that we have made a discovery. That claim will be wrong, not in 5 per cent of cases, as is commonly believed, but in 76 per cent of cases. That is disastrously high. Just as in screening tests, the reason for this large number of mistakes is that the number of false positives in the tests where there is no real effect outweighs the number of true positives that arise from the cases in which there is a real effect.
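A minimal simulation sketch of this argument. The details are my assumptions, not the author’s exact setup: two-group t-tests with 16 subjects per group and a one-standard-deviation true effect (roughly 80 per cent power), 10 per cent of drugs truly effective as in the text, and “just significant” taken to mean a p-value between 0.045 and 0.05.

```python
# Sketch: among "just significant" results, how many come from drugs that
# do nothing at all?
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, effect, prevalence = 16, 1.0, 0.10
just_sig_null = just_sig_real = 0
for _ in range(100_000):
    works = rng.random() < prevalence
    a = rng.normal(0, 1, n)
    b = rng.normal(effect if works else 0.0, 1, n)
    _, p = stats.ttest_ind(a, b)
    if 0.045 < p < 0.05:                      # "just significant"
        if works:
            just_sig_real += 1
        else:
            just_sig_null += 1
total = just_sig_null + just_sig_real
print(f"'Just significant' claims that are false: {just_sig_null / max(total, 1):.0%}")
# Under these assumptions the figure lands in the region of the ~76% quoted above.
```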

In general, though, we don’t know the real prevalence of true effects. So, although we can calculate the p-value, we can’t calculate the number of false positives. But what we can do is give a minimum value for the false positive rate. To do this, we need only assume that it’s not legitimate to say, before the observations are made, that the odds that an effect is real are any higher than 50:50. To do so would be to assume you’re more likely than not to be right before the experiment even begins.

If we repeat the drug calculations using a prevalence of 50 per cent rather than 10 per cent, we get a false positive rate of 26 per cent, still much bigger than 5 per cent. Any lower prevalence will result in an even higher false positive rate.

The upshot is that, if a scientist observes a ‘just significant’ result in a single test, say P = 0.047, and declares that she’s made a discovery, that claim will be wrong at least 26 per cent of the time, and probably more.

No wonder then that there are problems with reproducibility in areas of science that rely on tests of significance.

What is to be done? For a start, it’s high time that we abandoned the well-worn term ‘statistically significant’. The cut-off of P < 0.05 that’s almost universal in biomedical sciences is entirely arbitrary – and, as we’ve seen, it’s quite inadequate as evidence for a real effect. Although it’s common to blame Fisher for the magic value of 0.05, in fact Fisher said, in 1926, that P = 0.05 was a ‘low standard of significance’ and that a scientific fact should be regarded as experimentally established only if repeating the experiment ‘rarely fails to give this level of significance’.

The ‘rarely fails’ bit, emphasised by Fisher 90 years ago, has been forgotten. A single experiment that gives P = 0.045 will get a ‘discovery’ published in the most glamorous journals. So it’s not fair to blame Fisher, but nonetheless there’s an uncomfortable amount of truth in what the physicist Robert Matthews at Aston University in Birmingham had to say in 1998:

‘The plain fact is that 70 years ago Ronald Fisher gave scientists a mathematical machine for turning baloney into breakthroughs, and flukes into funding. It is time to pull the plug.’

The underlying problem is that universities around the world press their staff to write whether or not they have anything to say. This amounts to pressure to cut corners, to value quantity rather than quality, to exaggerate the consequences of their work and, occasionally, to cheat. People are under such pressure to produce papers that they have neither the time nor the motivation to learn about statistics, or to replicate experiments. Until something is done about these perverse incentives, biomedical science will be distrusted by the public, and rightly so. Senior scientists, vice-chancellors and politicians have set a very bad example to young researchers. As the zoologist Peter Lawrence at the University of Cambridge put it in 2007:

hype your work, slice the findings up as much as possible (four papers good, two papers bad), compress the results (most top journals have little space, a typical Nature letter now has the density of a black hole), simplify your conclusions but complexify the material (more difficult for reviewers to fault it!)

But there is good news too. Most of the problems occur only in certain areas of medicine and psychology. And despite the statistical mishaps, there have been enormous advances in biomedicine. The reproducibility crisis is being tackled. All we need to do now is to stop vice-chancellors and grant-giving agencies imposing incentives for researchers to behave badly.

This last paragraph is an egregious act of “FRAMING” – that is, diluting and denying what one just said by establishing a “positive” CONTEXT: “But there is good news too,” “advances in biomedicine,” “crisis being tackled,” “it’s vice-chancellors’ and grant-giving agencies’ fault” (not the poor beleaguered researchers who are “forced to” be dishonest!).

Recent History of Socio-Political Anthropology Battles / Important

From Natural History Magazine:

Remembering Stephen Jay Gould

http://www.naturalhistory.com/perspectives/3024131/remembering-stephen-jay-gould

Human evolution was not a special case of anything.

By Ian Tattersall

For long-time readers of Natural History, Stephen Jay Gould needs no introduction. His column, “This View of Life,” was a mainstay of the magazine, starting in January 1974 with “Size and Shape” and concluding with the 300th installment, “I Have Landed,” in the December 2000/January 2001 issue. What made his columns so popular was not just Gould’s range of chosen topics, but also the way he regularly allowed himself to be carried away on any tangent that he found interesting.

Gould died on May 20, 2002. Last spring, on the tenth anniversary of his death, I was invited to join other scholars at a commemorative meeting in Venice organized by the Istituto Veneto di Scienze, Lettere ed Arti in collaboration with the Università Ca’ Foscari. It fell to me, as an anthropologist, to talk about Gould’s intellectual legacy to anthropology. Gould was, of course, anything but a primate specialist. But as it happens, in 1974, the year Gould started writing “This View of Life,” he and I were both invited to attend a specialized meeting on “Phylogeny of the Primates: An Interdisciplinary Approach.” Even at that early stage in his career, I learned, the reach of his writings had broadened well beyond his realms of invertebrate paleontology (he was a fossil-snail expert) and evolutionary theory. He came to address the roles of ontogeny (development of the individual) and neoteny (the evolutionary retention of juvenile traits in adults) in human evolution. What I personally found most interesting, however, was his preprint for the conference, which contained, among much else, a virtuoso canter through the history of human evolutionary studies. He effortlessly displayed mastery of a huge literature on a scale that many professional paleoanthropologists fail to achieve in entire academic lifetimes.

Despite a paucity of strictly technical contributions, there can be no doubt that Gould’s influence on anthropology, and on paleoanthropology in particular, was truly seminal. Foremost among such influences was his 1972 collaboration with Niles Eldredge in developing and publicizing the notion of “punctuated equilibria,” the view that species typically remain little changed during most of their geological history, except for rapid events when they may split to give rise to new, distinct species. This breakthrough enabled paleoanthropologists, like other paleontologists, to treat the famous “gaps” in the fossil record as information, a reflection of how evolution actually proceeded.

Similarly, it was Gould who, in collaboration with Yale paleontologist Elisabeth S. Vrba (then at the Transvaal Museum in Pretoria, South Africa), emphasized that an anatomical or behavioral trait that evolved to serve one function could prove a handy adaptation for an entirely unanticipated one—and that the term exaptation was a better name for this phenomenon than preadaptation, which implied some kind of inherent tendency for a species to follow a certain evolutionary path. Anthropologists were forced to recognize exaptation as an essential theme in the history of innovation in the human family tree.

Speaking of trees, I am convinced that Gould’s most significant contribution to paleoanthropology was his insistence, from very early on, that the genealogy of human evolution took the form of a bush with many branches, rather than a ladder, or simple sequence of ancestors and descendants. As he wrote in his April 1976 column, “Ladders, Bushes, and Human Evolution”:

“I want to argue that the ‘sudden’ appearance of species in the fossil record and our failure to note subsequent evolutionary change within them is the proper prediction of evolutionary theory as we understand it. Evolution usually proceeds by “speciation”—the splitting of one lineage from a parental stock—not by the slow and steady transformation of these large parental stocks. Repeated episodes of speciation produce a bush.”

Before World War II, paleoanthropologists had overwhelmingly been human anatomists by background, with little interest in patterns of diversity in the wider living world. And having been trained largely in a theoretical vacuum, the postwar generation of paleoanthropologists was already exapted to capitulate when, at exact midcentury, the biologist Ernst Mayr told them to throw away nearly all the many names they had been using for fossil hominids. Mayr replaced this plethora, and the diversity it had suggested, with the idea that all fossil hominids known could be placed in a single sequence, from Homo transvaalensis to Homo erectus and culminating in Homo sapiens.

There was admittedly a certain elegance in this new linear formulation; but the problem was that, even in 1950, it was not actually supported by the material evidence. And new discoveries soon made not only most paleoanthropologists but even Mayr himself—grudgingly, in a footnote—concede that at least one small side branch, the so-called “robust” australopithecines, had indeed existed over the course of human evolution. But right up into the 1970s and beyond, the minimalist mindset lingered. Gould’s was among the first—and certainly the most widely influential—voices raised to make paleoanthropologists aware that there was an alternative.

In his “Ladders, Bushes, and Human Evolution” column, Gould declared that he wanted “to argue that Australopithecus, as we know it, is not the ancestor of Homo; and that, in any case, ladders do not represent the path of evolution.” At the time, both statements flatly contradicted received wisdom in paleoanthropology. And while in making the first of them I suspect that Gould was rejecting Australopithecus as ancestral to Homo as a matter of principle, his immediate rationale was based on the recent discovery, in eastern Africa, of specimens attributed to Homo habilis that were just as old as the South African australopithecines.

Later discoveries showed that Gould had been hugely prescient. To provide some perspective here: In 1950, Mayr had recognized a mere three hominid species. By 1993, I was able to publish a hominid genealogy containing twelve. And the latest iteration of that tree embraces twenty-five species, in numerous coexisting lineages. This was exactly what Gould had predicted. In his 1976 article he had written: “We [now] know about three coexisting branches of the human bush. I will be surprised if twice as many more are not discovered before the end of the century.”

Indeed, his impact on the paleoanthropological mindset went beyond even this, largely via his ceaseless insistence that human beings have not been an exception to general evolutionary rules. Before Gould’s remonstrations began, one frequently heard the term “hominization” bandied about, as if becoming human had involved some kind of special process that was unique to our kind. Gould hammered home the message that human evolutionary history was just like that of other mammals, and that we should not be looking at human evolution as a special case of anything.

Of course, Gould had ideas on particular issues in human paleontology as well, and he never shrank from using his Natural History bully pulpit to voice his opinions. Over the years he issued a succession of shrewd and often influential judgments on subjects as diverse as the importance of bipedality as the founding hominid adaptation; the newly advanced African “mitochondrial Eve”; hominid diversity and the ethical dilemmas that might be posed by discovering an Australopithecus alive today; sociobiology and evolutionary psychology (he didn’t like them); the relations between brain size and intelligence; neoteny and the retention of juvenile growth rates into later development as an explanation of the unusual human cranial form; and why human infants are so unusually helpless.

(Removed here: a narrative about the search for who had perpetrated the Piltdown Man hoax)

Gould’s devotion to the historically odd and curious, as well as his concern with the mainstream development of scientific ideas, is also well illustrated by his detailed account of the bizarre nineteenth-century story of Sarah “Saartjie” Baartman. Dubbed the “Hottentot Venus,” Baartman was a Khoisan woman from South Africa’s Western Cape region who was brought to Europe in 1810 and widely exhibited to the public before her death in 1815. Gould’s publicizing of the extraordinary events surrounding and following Baartman’s exhibition may or may not have contributed to the repatriation in 2002 of her remains from Paris to South Africa, where they now rest on a hilltop overlooking the valley in which she was born. But what is certain is that Gould’s interest in this sad case also reflected another of his long-term concerns, with what he called “scientific racism.”

Principally in the 1970s—when memories of the struggle for civil rights in the United States during the previous decade were still extremely raw—Gould devoted a long series of his columns to the subject of racism, as it presented itself in a whole host of different guises. In his very first year of writing for Natural History, he ruminated on the “race problem” both as a taxonomic issue, and in its more political expression in relation to intelligence. He even made the matter personal, with a lucid and deeply thoughtful demolition in Natural History of the purportedly scientific bases for discrimination against Jewish immigrants to America furnished by such savants as H. H. Goddard and Karl Pearson.

Gould also began his long-lasting and more specific campaign against genetic determinism, via a broadside against the conclusions of Arthur Jensen, the psychologist who had argued that education could not do much to level the allegedly different performances of various ethnic groups on IQ tests. And he began a vigorous and still somewhat controversial exploration of the historical roots of “scientific racism” in the work of nineteenth-century embryologists such as Ernst Haeckel and Louis Bolk.

But Gould’s most widely noticed contribution to the race issue began in 1978, with his attack in Science on the conclusions of the early-nineteenth century physician and craniologist Samuel George Morton, whom he characterized rather snarkily as a “self-styled objective empiricist.” In three voluminous works published in Philadelphia between 1839 and 1849—on Native American and ancient Egyptian skulls, and on his own collection of more than 600 skulls of all races—the widely admired Morton had presented the results of the most extensive study ever undertaken of human skulls. The main thrust of (Morton’s) study had been to investigate the then intensely debated question of whether the various races of humankind had a single origin or had been separately created. Morton opted for polygeny, or multiple origins, a conclusion hardly guaranteed to endear him to Gould. Along the way, Morton presented measurements that showed, in keeping with prevailing European and Euro-American beliefs on racial superiority, that Caucasians had larger brains than American “Indians,” who in turn had bigger brains than “Negroes” did. (Cranial-brain size DOES NOT correlate to intelligence)

After closely examining Morton’s data, Gould characterized the Philadelphia savant’s conclusions as “a patchwork of assumption and finagling, controlled, probably unconsciously, by his conventional a priori ranking (his folks on top, slaves on the bottom).” He excoriated Morton for a catalog of sins that included inconsistencies of criteria, omissions of both procedural and convenient kinds, slips and errors, and miscalculations. And although in the end he found “no indication of fraud or conscious manipulation,” he did see “Morton’s saga” as an “egregious example of a common problem in scientific work.” As scientists we are all, Gould asserted, unconscious victims of our preconceptions, and the “only palliations I know are vigilance and scrutiny.”

That blanket condemnation of past and current scientific practice was a theme Gould shortly returned to, with a vengeance, in his 1981 volume The Mismeasure of Man. Probably no book Gould ever wrote commanded wider attention than did this energetic critique of the statistical methods that had been used to substantiate one of his great bêtes noires, biological determinism. This was (is) the belief, as Gould put it, that “the social and economic differences between human groups—primarily races, classes, and sexes—arise from inherited, inborn distinctions and that society, in this sense, is an accurate reflection of biology.”

We are still plagued by this pseudo-scientific “justification” of poverty and inequality; of misogyny and abuse of “lesser humans” by the Human Behavior Industries. Remember, this is very recent history, and the forces of social “control and abuse” are very much still with us.  

It is alarming that the revolution in DNA / genetic research has shifted the “means” of this abuse of human beings into a radical effort to “prove” that socially-created and defined “human behavior pathologies” are due to genetic determinism. The race is on to “prove” that genetic defects, rather than hidden social engineering goals, underlie “defective behavior and thinking” as dictated by closet eugenicists. Racism and eugenics are being pursued in the guise of “caring, treating and fixing” socially “defective” peoples. Genetic engineering of embryos is already in progress.

SEE POST August 11, 2017: First Human Embryos ‘Edited’ in U.S. / 7 billion humans not consulted

In Mismeasure, Gould restated his case against Morton at length, adding to the mix a robust rebuttal of methods of psychological testing that aimed at quantifying “intelligence” as a unitary attribute. One of his prime targets was inevitably Arthur Jensen, the psychologist he had already excoriated in the pages of Natural History for Jensen’s famous conclusion that the Head Start program, designed to improve low-income children’s school performance by providing them with pre-school educational, social, and nutritional enrichment, was doomed to fail because the hereditary component of their performance—notably that of African American children—was hugely dominant over the environmental one. A predictable furor followed the publication of Mismeasure, paving the way for continuing controversy during the 1980s and 1990s on the question of the roles of nature versus nurture in the determination of intelligence.

This issue of nature versus nurture, a choice between polar opposites, was of course designed for polemic, and attempts to find a more nuanced middle ground have usually been drowned out by the extremes. So it was in Gould’s case. An unrepentant political liberal, he was firmly on the side of nurture. As a result of his uncompromising characterizations of his opponents’ viewpoints, Gould found himself frequently accused by Jensen and others of misrepresenting their positions and of erecting straw men to attack.

Yet even after Mismeasure first appeared, the climax of the debate was yet to come. In 1994, Richard Herrnstein and Charles Murray published their notorious volume, The Bell Curve: Intelligence and Class Structure in American Life. At positively Gouldian length, Herrnstein and Murray gave a new boost to the argument that intelligence is largely inherited, proclaiming that innate intelligence was a better predictor of such things as income, job performance, chances of unwanted pregnancy, and involvement in crime than are factors such as education level or parental socioeconomic status. They also asserted that, in America, a highly intelligent, “cognitive elite” was becoming separated from the less intelligent underperforming classes, and in consequence they recommended policies such as the elimination of what they saw as welfare incentives for poor women to have children.

Eugenics has never died in American Science; it remains an underestimated force in the shaping of “what to do about unacceptable humans.” It is neither a liberal nor a conservative impulse: it is a drive within elites to control human destiny.

To Gould such claims were like the proverbial red rag to a bull. He rapidly published a long review essay in The New Yorker attacking the four assertions on which he claimed Herrnstein and Murray’s argument depended. In order to be true, Gould said, Herrnstein and Murray’s claims required that what they were measuring as intelligence must be: (1) representable as a single number; (2) capable of ranking people in linear order; (3) primarily heritable; and (4) essentially immutable. None of those assumptions, he declared, was tenable. And soon afterward he returned to the attack with a revised and expanded edition of Mismeasure that took direct aim at Herrnstein and Murray’s long book.

There can be little doubt that, as articulated in both editions of Mismeasure, Gould’s conclusions found wide acceptance not only among anthropologists but in the broader social arena as well. But doubts have lingered about Gould’s broad-brush approach to the issues involved, and particularly about a penchant he had to neglect any nuance there might have been in his opponents’ positions. Indeed, he was capable of committing in his own writings exactly the kinds of error of which he had accused Samuel Morton—ironically, even in the very case of Morton himself.

In June 2011, a group of physical anthropologists led by Jason Lewis published a critical analysis of Gould’s attacks on Morton’s craniology. By remeasuring the cranial capacities of about half of Morton’s extensive sample of human skulls, Lewis and colleagues discovered that the data reported by Morton had on the whole been pretty accurate. They could find no basis in the actual specimens themselves for Gould’s suggestion that Morton had (albeit unconsciously) overmeasured European crania, and under-measured African or Native American ones. What’s more, they could find no evidence that, as alleged by Gould, Morton had selectively skewed the results in various other ways.

The anthropologists did concede that Morton had attributed certain psychological characteristics to particular racial groups. But they pointed out that, while Morton was inevitably a creature of his own times, he (Morton) had done nothing to disguise his racial prejudices or his polygenist sympathies. And they concluded that, certainly by prevailing standards, Morton’s presentation of his basic data had been pretty unbiased. (WOW! What an indictment of current Anthropology) What is more, while they were able to substantiate Gould’s claim that Morton’s final summary table of his results contained a long list of errors, Lewis and colleagues also found that correcting those errors would actually have served to reinforce Morton’s own declared biases. And they even discovered that Gould had reported erroneous figures of his own.

These multiple “errors” DO NOT cancel each other out: this is a favorite social typical strategy and magical belief – present the contradictions from “each side” and reach a “socially acceptable” deadlock. No discussion is possible past this point. The American intellectual-cultural-political environment is trapped in this devastating “black and white, either-or” false concept of “problem-solving.” Nothing can be examined; facts are removed to the “supernatural, word-concept domain” and become “politicized” – weapons of distortion in a socio-cultural landscape of perpetual warfare. In the meantime, the population is pushed to either extreme. This is where we are TODAY, and this “warfare” will destroy us from within, because the hard work of running a nation is not being done.

It is hard to refute the authors’ conclusion that Gould’s own unconscious preconceptions colored his judgment. Morton, naturally enough, carried all of the cultural baggage of his time, ethnicity, and class. But so, it seems, did Gould. And in a paradoxical way, Gould had proved his own point. Scientists are human beings, and when analyzing evidence they always have to be on guard against the effects of their own personal predilections.

And against the domination and control of their professions by the “elite and powerful,” who promote a racist-eugenic social order and control how their work is “messaged” and used to achieve socioeconomic and biological engineering goals – worldwide.