OMG! I’m Back

New PC delivered – 

Now to get everything reset in the way that I like it to appear…

 


The Odyssey, Irma and related thoughts

Still using library internet access…

Ordered new computer, but waiting for delivery – could be 10 more days. I’m beginning to FREAK OUT! WHY? Not because I have some “pathological” Asperger attachment to habit or objects – the computer is the tool I need to communicate “what’s going on” in my “unconscious visual processing” in the primary language of “social reality” – words.

I’m lucky to live in a time and place where this arrangement is possible: a reclusive existence in wild Wyoming, but with the ability to express my thoughts to a mysterious “global” world – unknown people from every part of the planet continue to “tune in” (maybe by accident?). It is “mind-boggling” from my point of view on the “Frontier”, which lacks modern social development and material abundance.

I’m momentarily fed up with rereading JUNG: do psychologists actually “like” or approve of any human beings (even themselves?) It is quite revealing how, with time and experience, one’s view of “standard ideas” is changed and revised.

I try to reread the Iliad and the Odyssey on alternate years, so have taken the opportunity to read the Odyssey – coincidentally, while half-listening to coverage of hurricane Irma… (many reactions and thoughts, which will have to wait) but having to do with how modern people see Nature, and how cultural values are shaped as a consequence; very “odd” feelings and ideas which in turn shape our behavior! 

My fascination with both books goes deep: the two are foundations for much of my “introverted” thinking about culture, history and admirable human codes of behavior and interaction that have fallen into forgetfulness: PLUS these are highly dense visual presentations that “speak to me” like few others. At times, the “visual” descriptions come so fast and furious, that I can’t keep up my brain processing speed to match, and I must linger over those descriptions, which “tell me” so much about the people of that time. And which, in a way, make me “homesick”.

AND – Once again (Irma event) I am utterly appalled by the ignorance (as in ignoring the entire subject) of Americans concerning the processes and reality of “geology” in its true scope – a study which reveals How the earth, oceans, atmosphere and “cosmic” location WORK!

American “education” is the “manmade” disaster that cripples reasonable and effective behavior!

Hmmm. Someone has brought a screaming toddler, possibly named Irma, into the library… time to “evacuate”.


Asperger Endurance Test / Public Internet Access Day 12

My computer crisis has offered insight into my own reaction to “change” – something that is supposed to produce “trauma” (or at least a negative reaction) in Asperger’s.

The good news is that I’ve just ordered a replacement! I’m not going to say “case solved” until I’m in my own comfy computer room (refrigerator and coffee pot close by) as well as having access to the distractions that help me to think!

But the extended process of deciding which technological device to buy (12 days) likely seems outrageous, given the “modern” expectation that every need be met at once. The “delay” is the result of both my location (middle of nowhere), where there is ZERO help and not even a place to purchase a “computer” except at Walmart in the next town. (Nightmare scenario – 40,000 choices in the deodorant aisle, but only two computers, both junked up for playing games.)

And – my specific needs – in short, my reaction “looks” typically Asperger:

1. Initial panic.
2. Confrontation with the “absurd” gauntlet to be run (no help; no ability to view in person what’s available “out there” and to interact with a knowledgeable human being).
3. The non-help offered by neurotypicals: “Get a smart phone. That’s all you need.”
4. The shock that this statement is true for neurotypicals, who don’t need a keyboard, word processing software, multiple types of connectors and adapters, etc., or “real” photo-processing.

“Look! I can watch movies on my 3″ x 4″ phone screen.” Total blank when one asks, “Why on earth would I watch a movie on a screen that size? You can’t see a damn thing!” Hint – neurotypicals don’t actually “see” things due to their “inattentional blindness”. And then the reaction of utter bafflement when I reveal that I actually WRITE a blog and need a full keyboard and word processing capability.

Being Asperger does make “communication” more involved, because we actually wish to communicate about “real things” and often have something to say. Computers make that possible for me – considering my type of hyposocial brain, I could never accomplish any “reach out and talk to someone activity” without the Internet, nor accomplish the basic mechanics of writing without word processing. My handwriting is abominable; I think visually and non-linearly, and could not “translate” my thinking into words without the ease of copy-pasting text fragments into somewhat “conventional order” or without the speed of instant editing and rewriting.

It’s probably obvious that I went “old school” with my choice of “device” – I hate talking on the phone to start with.


Asperger’s may be old-fashioned, but we know what we like and want and “hold out” for what suits us!

Do Statistics Lie? Yes They Do / 3 Articles Explain HOW AND WHO

This scandalous practice of deceit-for-funding-and-profit is why I persist in slamming psychology as “not science”.

It’s not only that these are research scams that waste funding and devalue science; human beings are harmed as a result of this abuse of statistics. Asperger and neurodiverse types are being “defined” as “defective” human beings: there is no scientific basis for this “socially-motivated” construct. The current Autism-ASD-Asperger Industry is a FOR PROFIT INDUSTRY that exploits individuals, their families, schools, communities, taxpayers and funding for research. It also serves to enforce “the social order” dictated by elites.

The Mind-Reading Salmon: The True Meaning of Statistical Significance

By Charles Seife on August 1, 2011

If you want to convince the world that a fish can sense your emotions, only one statistical measure will suffice: the p-value.

The p-value is an all-purpose measure that scientists often use to determine whether or not an experimental result is “statistically significant.” Unfortunately, sometimes the test does not work as advertised, and researchers imbue an observation with great significance when in fact it might be a worthless fluke.

Say you’ve performed a scientific experiment testing a new heart attack drug against a placebo. At the end of the trial, you compare the two groups. Lo and behold, the patients who took the drug had fewer heart attacks than those who took the placebo. Success! The drug works!

Well, maybe not. There is a 50 percent chance that even if the drug is completely ineffective, patients taking it will do better than those taking the placebo. (After all, one group has to do better than the other; it’s a toss-up whether the drug group or placebo group will come up on top.)

The p-value puts a number on the effects of randomness. It is the probability of seeing a positive experimental outcome even if your hypothesis is wrong. A long-standing convention in many scientific fields is that any result with a p-value below 0.05 is deemed statistically significant. An arbitrary convention, it is often the wrong one. When you make a comparison of an ineffective drug to a placebo, you will typically get a statistically significant result one time out of 20. And if you make 20 such comparisons in a scientific paper, on average, you will get one significant result with a p-value less than 0.05—even when the drug does not work.

Many scientific papers make 20 or 40 or even hundreds of comparisons. In such cases, researchers who do not adjust the standard p-value threshold of 0.05 are virtually guaranteed to find statistical significance in results that are meaningless statistical flukes. A study that ran in the February issue of the American Journal of Clinical Nutrition tested dozens of compounds and concluded that those found in blueberries lower the risk of high blood pressure, with a p-value of 0.03. But the researchers looked at so many compounds and made so many comparisons (more than 50), that it was almost a sure thing that some of the p-values in the paper would be less than 0.05 just by chance.
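The multiple-comparisons trap is easy to check for yourself. Here is a minimal simulation sketch (my own illustration, not from the article): run 20 comparisons of a completely ineffective drug against a placebo, many times over, and count how often chance alone delivers p < 0.05.

```python
# Monte Carlo sketch of the multiple-comparisons trap.
# Assumption (for illustration): 30 patients per group, and the "drug"
# does nothing -- both groups are drawn from the same distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_papers, n_comparisons, n_patients = 10_000, 20, 30

drug = rng.normal(0, 1, size=(n_papers, n_comparisons, n_patients))
placebo = rng.normal(0, 1, size=(n_papers, n_comparisons, n_patients))
p = stats.ttest_ind(drug, placebo, axis=2).pvalue

hits_per_paper = (p < 0.05).sum(axis=1)
print(f"Average 'significant' results per 20 null comparisons: {hits_per_paper.mean():.2f}")
print(f"Papers with at least one false 'discovery': {(hits_per_paper >= 1).mean():.0%}")
```

On average each simulated “paper” finds about one spurious significant result, and roughly two-thirds find at least one – exactly the fluke machine described above.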

The same applies to a well-publicized study that a team of neuroscientists once conducted on a salmon. When they presented the fish with pictures of people expressing emotions, regions of the salmon’s brain lit up. The result was statistically significant with a p-value of less than 0.001; however, as the researchers argued, there are so many possible patterns that a statistically significant result was virtually guaranteed, so the result was totally worthless. p-value notwithstanding, there was no way that the fish could have reacted to human emotions. The salmon in the fMRI happened to be dead.

________________________________

Statistical Significance Abuse

A lot of research makes scientific evidence seem more “significant” than it is

updated Sep 15, 2016 (first published 2011) by Paul Ingraham, Vancouver, Canada 

I am a science writer and a former Registered Massage Therapist with a decade of experience treating tough pain cases. I was the Assistant Editor of ScienceBasedMedicine.org for several years.

SUMMARY

Many study results are called “statistically significant,” giving unwary readers the impression of good news. But it’s misleading: statistical significance means only that the measured effect of a treatment is probably real (not a fluke). It says nothing about how large the effect is. Many small effect sizes are reported only as “statistically significant” — it’s a nearly standard way for biased researchers to make it sound like they found something more important than they did.

This article is about two common problems with “statistical significance” in medical research. Both problems are particularly rampant in the study of massage therapy, chiropractic, and alternative medicine in general, and are wonderful examples of why science is hard, “why most published research findings are false” and genuine robust treatment effects are rare:

  1. mixing up statistical and clinical significance and the probability of being “right”
  2. reporting statistical significance of the wrong dang thing

Significance Problem #1: Two flavours of “significant” – statistical versus clinical

Research can be statistically significant, but otherwise unimportant. Statistical significance means that data signifies something… not that it actually matters.

Statistical significance on its own is the sound of one hand clapping. But researchers often focus on the positive: “Hey, we’ve got statistical significance! Maybe!” So they summarize their findings as “significant” without telling us the size of the effect they observed, which is a little devious or sloppy. Almost everyone is fooled by this — except 98% of statisticians — because the word “significant” carries so much weight. It really sounds like a big deal, like good news. But it’s like bragging about winning a lottery without mentioning that you only won $25.

Statistical significance without other information really doesn’t mean all that much. It is not only possible but common to have clinically trivial results that are nonetheless statistically significant. How much is that statistical significance worth? It depends … on details that are routinely omitted; which is convenient if you’re pushing a pet theory, isn’t it?

Imagine a study of a treatment for pain, which has a statistically significant effect, but it’s a tiny effect: that is, it only reduces pain slightly. You can take that result to the bank (supposedly) — it’s real! It’s statistically significant! But … no more so than a series of coin flips that yields enough heads in a row to raise your eyebrows. And the effect was still tiny. So calling these results “significant” is using math to put lipstick on a pig.

There are a lot of decorated pigs in research: “significant” results that are possibly not even that, and clinically boring in any case.

Just because a published paper presents a statistically significant result does not mean it necessarily has a biologically meaningful effect.
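The “statistically significant but clinically trivial” pattern is easy to reproduce. A small sketch (my own, with made-up numbers): a treatment that shaves a meaningless 2 points off a 100-point pain scale still earns an impressive p-value if the trial is large enough.

```python
# Sketch: statistical vs clinical significance (made-up numbers).
# The treatment truly works -- but only by 2 points on a 0-100 pain scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 2000                             # a large trial, per group
control = rng.normal(50, 20, n)      # mean pain 50, sd 20
treated = rng.normal(48, 20, n)      # true effect: a trivial -2 points

t, p = stats.ttest_ind(treated, control)
print(f"p = {p:.4f}")                # usually far below 0.05: "significant!"
print(f"observed effect: {treated.mean() - control.mean():+.1f} points of 100")
```

Real, yes; significant by convention, yes; worth anyone’s money, hardly – a decorated pig.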

++++++++++++++++++++++++++++++++

Science Left Behind: Feel-Good Fallacies and the Rise of the Anti-Scientific Left, Alex Berezow & Hank Campbell

If you torture data for long enough, it will confess to anything.

P-values, where P stands for “please stop the madness”

Small study proves showers work

Too often people smugly dismiss a study just because of small sample size, ignoring all other considerations, like effect size … a rookie move. For instance, you really do not need to test lots of showers to prove that they are an effective moistening procedure. The power of a study is a product of both sample and effect size (and more).

Statistical significance is boiled down to one convenient number: the infamous, cryptic, bizarro and highly over-rated P-value. Cue Darth Vader theme. This number is “diabolically difficult” to understand and explain, and so p-value illiteracy and bloopers are epidemic (Goodman identifies “A dirty dozen: twelve p-value misconceptions” [4]). It seems to be hated by almost everyone who actually understands it, because almost no one else does. Many believe it to be a blight on modern science [5]. Including the American Statistical Association — and if they don’t like it, should you?

The mathematical soul of the p-value is, frankly, not really worth knowing. It’s just not that fantastic an idea. The importance of scientific research results cannot be jammed into a single number (nor was that ever the intent). And so really wrapping your head around it is no more important than learning the gritty details of the Rotten Tomatoes algorithm when you’re trying to decide whether to see that new Godzilla (2014) movie [7].

What you do need to know is the role that p-values play in research today. You need to know that “it depends” is a massive understatement, and that there are “several reasons why the p-value is an unobjective and inadequate measure of evidence” [8]. Because it is so often abused, it’s way more important to know what the p-value is NOT than what it IS. For instance, it’s particularly useless when applied to studies of really outlandish ideas. And yet it’s one of the staples of pseudoscience, because it is such an easy way to make research look better than it is.

Above all, a good p-value is not a low chance that the results were a fluke or false alarm — which is by far the most common misinterpretation (and the first of Goodman’s Dirty Dozen). The real definition is a kind of mirror image of that [11]: it’s not a low chance of a false alarm, but a low chance of an effect that actually is a false alarm. The false alarm is a given! That part of the equation is already filled in, the premise of every p-value. For better or worse, the p-value is the answer to this question: if there really is nothing going on here, what are the odds of getting these results? A low number is encouraging, but it doesn’t say the results aren’t a fluke, because it can’t — it was calculated by assuming they are.
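That question can be answered by brute force, which makes the definition concrete. A minimal sketch (my own toy example, not from the article): suppose a coin gives 60 heads in 100 flips, and we ask the p-value question directly.

```python
# The p-value question, computed literally:
# "If there really is nothing going on, what are the odds of these results?"
# Toy example (mine): 60 heads in 100 flips of a suspect coin.
import numpy as np

rng = np.random.default_rng(2)
observed = 60

# Simulate the null hypothesis: a perfectly fair coin, a million times over.
heads = rng.binomial(n=100, p=0.5, size=1_000_000)
p_value = np.mean(np.abs(heads - 50) >= abs(observed - 50))   # two-sided

print(f"p = {p_value:.3f}")   # about 0.057 -- computed by ASSUMING a fair coin,
# so it cannot also tell you the probability that the coin IS fair.
```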

The only way to actually find out if the effect is real or a fluke is to do more experiments. If they all produce results that would be unlikely if there was no real effect, then you can say the results are probably real. The p-value alone can only be a reason to check again — not statistical congratulations on a job well done. And yet that’s exactly how most researchers use it. And most science journalists.

The problem with p-values

Academic psychology and medical testing are both dogged by unreliability. The reason is clear: we got probability wrong

The aim of science is to establish facts, as accurately as possible. It is therefore crucially important to determine whether an observed phenomenon is real, or whether it’s the result of pure chance. If you declare that you’ve discovered something when in fact it’s just random, that’s called a false discovery or a false positive. And false positives are alarmingly common in some areas of medical science. 

In 2005, the epidemiologist John Ioannidis at Stanford caused a storm when he wrote the paper ‘Why Most Published Research Findings Are False’, focusing on results in certain areas of biomedicine. He’s been vindicated by subsequent investigations.

For example, a recent article found that repeating 100 different results in experimental psychology confirmed the original conclusions in only 38 per cent of cases. It’s probably at least as bad for brain-imaging studies and cognitive neuroscience. How can this happen?

The problem of how to distinguish a genuine observation from random chance is a very old one. It’s been debated for centuries by philosophers and, more fruitfully, by statisticians. It turns on the distinction between induction and deduction. Science is an exercise in inductive reasoning: we are making observations and trying to infer general rules from them. Induction can never be certain. In contrast, deductive reasoning is easier: you deduce what you would expect to observe if some general rule were true and then compare it with what you actually see. The problem is that, for a scientist, deductive arguments don’t directly answer the question that you want to ask. 

What matters to a scientific observer is how often you’ll be wrong if you claim that an effect is real, rather than being merely random. That’s a question of induction, so it’s hard. In the early 20th century, it became the custom to avoid induction, by changing the question into one that used only deductive reasoning. In the 1920s, the statistician Ronald Fisher did this by advocating tests of statistical significance. These are wholly deductive and so sidestep the philosophical problems of induction.

Tests of statistical significance proceed by calculating the probability of making our observations (or the more extreme ones) if there were no real effect. This isn’t an assertion that there is no real effect, but rather a calculation of what would be expected if there were no real effect. The postulate that there is no real effect is called the null hypothesis, and the probability is called the p-value. Clearly the smaller the p-value, the less plausible the null hypothesis, so the more likely it is that there is, in fact, a real effect. All you have to do is to decide how small the p-value must be before you declare that you’ve made a discovery. But that turns out to be very difficult.

The problem is that the p-value gives the right answer to the wrong question. What we really want to know is not the probability of the observations given a hypothesis about the existence of a real effect, but rather the probability that there is a real effect – that the hypothesis is true – given the observations. And that is a problem of induction.

Confusion between these two quite different probabilities lies at the heart of why p-values are so often misinterpreted. It’s called the error of the transposed conditional. Even quite respectable sources will tell you that the p-value is the probability that your observations occurred by chance. And that is plain wrong.

Suppose, for example, that you give a pill to each of 10 people. You measure some response (such as their blood pressure). Each person will give a different response. And you give a different pill to 10 other people, and again get 10 different responses. How do you tell whether the two pills are really different?

The conventional procedure would be to follow Fisher and calculate the probability of making the observations (or the more extreme ones) if there were no true difference between the two pills. That’s the p-value, based on deductive reasoning. P-values of less than 5 per cent have come to be called ‘statistically significant’, a term that’s ubiquitous in the biomedical literature, and is now used to suggest that an effect is real, not just chance.
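As a concrete sketch of that conventional recipe (with invented blood-pressure responses, ten people per pill):

```python
# Fisher's conventional recipe on invented data: 10 people per pill.
# Null hypothesis: the two pills do not really differ.
from scipy import stats

pill_a = [8, 5, 12, 7, 9, 6, 11, 4, 10, 8]   # made-up responses (mmHg drop)
pill_b = [5, 3, 9, 4, 7, 2, 8, 6, 5, 4]

t, p = stats.ttest_ind(pill_a, pill_b)
print(f"p = {p:.3f}")   # ~0.02: "statistically significant" by the 5% convention
```

Note what that number means: data this extreme would be unusual if the pills were identical – nothing more.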

But the dichotomy between ‘significant’ and ‘not significant’ is absurd. There’s obviously very little difference between the implication of a p-value of 4.7 per cent and of 5.3 per cent, yet the former has come to be regarded as success and the latter as failure. And ‘success’ will get your work published, even in the most prestigious journals. That’s bad enough, but the real killer is that, if you observe a ‘just significant’ result, say P = 0.047 (4.7 per cent) in a single test, and claim to have made a discovery, the chance that you are wrong is at least 26 per cent, and could easily be more than 80 per cent. How can this be so?

For one, it’s of little use to say that your observations would be rare if there were no real difference between the pills (which is what the p-value tells you), unless you can say whether or not the observations would also be rare when there is a true difference between the pills. Which brings us back to induction.

The problem of induction was solved, in principle, by the Reverend Thomas Bayes in the middle of the 18th century. He showed how to convert the probability of the observations given a hypothesis (the deductive problem) to what we actually want, the probability that the hypothesis is true given some observations (the inductive problem). But how to use his famous theorem in practice has been the subject of heated debate ever since.

Take the proposition that the Earth goes round the Sun. It either does or it doesn’t, so it’s hard to see how we could pick a probability for this statement. Furthermore, the Bayesian conversion involves assigning a value to the probability that your hypothesis is right before any observations have been made (the ‘prior probability’). Bayes’s theorem allows that prior probability to be converted to what we want, the probability that the hypothesis is true given some relevant observations, which is known as the ‘posterior probability’.

These intangible probabilities persuaded Fisher that Bayes’s approach wasn’t feasible. Instead, he proposed the wholly deductive process of null hypothesis significance testing. The realisation that this method, as it is commonly used, gives alarmingly large numbers of false positive results has spurred several recent attempts to bridge the gap.  

There is one uncontroversial application of Bayes’s theorem: diagnostic screening, the tests that doctors give healthy people to detect warning signs of disease. They’re a good way to understand the perils of the deductive approach.

In theory, picking up on the early signs of illness is obviously good. But in practice there are usually so many false positive diagnoses that it just doesn’t work very well. Take dementia. Roughly 1 per cent of the population suffer from mild cognitive impairment, which might, but doesn’t always, lead to dementia. Suppose that the test is quite a good one, in the sense that 95 per cent of the time it gives the right (negative) answer for people who are free of the condition. That means that 5 per cent of the people who don’t have cognitive impairment will test, falsely, as positive. That doesn’t sound bad. It’s directly analogous to tests of significance which will give 5 per cent of false positives when there is no real effect, if we use a p-value of less than 5 per cent to mean ‘statistically significant’.

But in fact the screening test is not good – it’s actually appallingly bad, because 86 per cent, not 5 per cent, of all positive tests are false positives. So only 14 per cent of positive tests are correct. This happens because most people don’t have the condition, and so the false positives from these people (5 per cent of 99 per cent of the people), outweigh the number of true positives that arise from the much smaller number of people who have the condition (80 per cent of 1 per cent of the people, if we assume 80 per cent of people with the disease are detected successfully). There’s a YouTube video of my attempt to explain this principle, or you can read my recent paper on the subject.
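The arithmetic behind that 86 per cent is short enough to spell out; here is the calculation using the article’s numbers (1 per cent prevalence, 95 per cent specificity, 80 per cent sensitivity):

```python
# Why a "95% accurate" screening test is mostly wrong when it says "positive".
# Numbers from the article: 1% prevalence, 95% specificity, 80% sensitivity.
prevalence = 0.01
specificity = 0.95          # right answer for healthy people
sensitivity = 0.80          # right answer for affected people

false_positives = (1 - specificity) * (1 - prevalence)   # 5% of the 99% healthy
true_positives = sensitivity * prevalence                # 80% of the 1% affected

print(f"Share of positive tests that are false: "
      f"{false_positives / (false_positives + true_positives):.0%}")   # ~86%
```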

Notice, though, that it’s possible to calculate the disastrous false-positive rate for screening tests only because we have estimates for the prevalence of the condition in the whole population being tested. This is the prior probability that we need to use Bayes’s theorem. If we return to the problem of tests of significance, it’s not so easy. The analogue of the prevalence of disease in the population becomes, in the case of significance tests, the probability that there is a real difference between the pills before the experiment is done – the prior probability that there’s a real effect. And it’s usually impossible to make a good guess at the value of this figure.

An example should make the idea more concrete. Imagine testing 1,000 different drugs, one at a time, to sort out which works and which doesn’t. You’d be lucky if 10 per cent of them were effective, so let’s proceed by assuming a prevalence or prior probability of 10 per cent.  Say we observe a ‘just significant’ result, for example, a P = 0.047 in a single test, and declare that this is evidence that we have made a discovery. That claim will be wrong, not in 5 per cent of cases, as is commonly believed, but in 76 per cent of cases. That is disastrously high. Just as in screening tests, the reason for this large number of mistakes is that the number of false positives in the tests where there is no real effect outweighs the number of true positives that arise from the cases in which there is a real effect.

In general, though, we don’t know the real prevalence of true effects. So, although we can calculate the p-value, we can’t calculate the number of false positives. But what we can do is give a minimum value for the false positive rate. To do this, we need only assume that it’s not legitimate to say, before the observations are made, that the odds that an effect is real are any higher than 50:50. To do so would be to assume you’re more likely than not to be right before the experiment even begins.

If we repeat the drug calculations using a prevalence of 50 per cent rather than 10 per cent, we get a false positive rate of 26 per cent, still much bigger than 5 per cent. Any lower prevalence will result in an even higher false positive rate.

The upshot is that, if a scientist observes a ‘just significant’ result in a single test, say P = 0.047, and declares that she’s made a discovery, that claim will be wrong at least 26 per cent of the time, and probably more.
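That 26 per cent can be checked by simulation. A rough sketch (my construction, not the author’s code; I assume two groups of 16 and a true effect of one standard deviation when an effect is real, which gives roughly 80 per cent power, with real effects in half the experiments):

```python
# Rough Monte Carlo check of the "just significant" false positive rate.
# Assumptions (mine): n=16 per group; real effect = 1 SD (power ~0.8);
# half of all experiments test a genuinely effective treatment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_exp = 200_000
real = rng.random(n_exp) < 0.5                 # 50:50 prior on a real effect

a = rng.normal(0, 1, size=(n_exp, 16))
b = rng.normal(0, 1, size=(n_exp, 16)) + real[:, None] * 1.0
p = stats.ttest_ind(a, b, axis=1).pvalue

just_significant = (p > 0.045) & (p < 0.05)    # "just significant", near 0.047
print(f"'Just significant' results that are false positives: "
      f"{(~real[just_significant]).mean():.0%}")   # ~26%
```

Dropping the share of real effects from half to one in ten pushes the same figure up toward the 76 per cent quoted above.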

No wonder then that there are problems with reproducibility in areas of science that rely on tests of significance.

What is to be done? For a start, it’s high time that we abandoned the well-worn term ‘statistically significant’. The cut-off of P < 0.05 that’s almost universal in biomedical sciences is entirely arbitrary – and, as we’ve seen, it’s quite inadequate as evidence for a real effect. Although it’s common to blame Fisher for the magic value of 0.05, in fact Fisher said, in 1926, that P = 0.05 was a ‘low standard of significance’ and that a scientific fact should be regarded as experimentally established only if repeating the experiment ‘rarely fails to give this level of significance’.

The ‘rarely fails’ bit, emphasised by Fisher 90 years ago, has been forgotten. A single experiment that gives P = 0.045 will get a ‘discovery’ published in the most glamorous journals. So it’s not fair to blame Fisher, but nonetheless there’s an uncomfortable amount of truth in what the physicist Robert Matthews at Aston University in Birmingham had to say in 1998:

‘The plain fact is that 70 years ago Ronald Fisher gave scientists a mathematical machine for turning baloney into breakthroughs, and flukes into funding. It is time to pull the plug.’

The underlying problem is that universities around the world press their staff to write whether or not they have anything to say. This amounts to pressure to cut corners, to value quantity rather than quality, to exaggerate the consequences of their work and, occasionally, to cheat. People are under such pressure to produce papers that they have neither the time nor the motivation to learn about statistics, or to replicate experiments. Until something is done about these perverse incentives, biomedical science will be distrusted by the public, and rightly so. Senior scientists, vice-chancellors and politicians have set a very bad example to young researchers. As the zoologist Peter Lawrence at the University of Cambridge put it in 2007:

hype your work, slice the findings up as much as possible (four papers good, two papers bad), compress the results (most top journals have little space, a typical Nature letter now has the density of a black hole), simplify your conclusions but complexify the material (more difficult for reviewers to fault it!)

But there is good news too. Most of the problems occur only in certain areas of medicine and psychology. And despite the statistical mishaps, there have been enormous advances in biomedicine. The reproducibility crisis is being tackled. All we need to do now is to stop vice-chancellors and grant-giving agencies imposing incentives for researchers to behave badly.

This last paragraph is an egregious act of “FRAMING” – that is, diluting and denying what one just said by establishing a “positive” CONTEXT: “But there is good news too”, “advances in biomedicine”, “crisis being tackled”, “it’s vice-chancellors and grant-giving agencies’ fault” (not the poor beleaguered researchers who are “forced to” be dishonest!).

War is a MALE Social Activity / Nukes

Who will be “King” of a dead planet?

My childhood story wrote itself, directed by an impulse to challenge The Official Story, which never did make sense to me. First, there was the story my parents told about their marriage. I would listen to their private histories, both sad and tragic, and wonder why these obvious strangers insisted that finding each other and committing to an unworkable lifelong union was the best of all possible outcomes. Each parent had chosen to add to each other’s suffering by making a brief courtship legal, when apart, each could have pursued happiness. Why would any person do this?

It’s a simple question, but thousands of years of myth, religion, rules and laws, social convention, government institutions, and even reform and innovation in these areas, promote suffering, which has been elevated to the unshakeable position of human destiny. It wasn’t that I imagined a perfect world; I could not imagine why, when suffering exists as an inescapable consequence of being physical creatures, one would choose to voluntarily increase that suffering, and yet, it seemed to me that human beings put great effort into that outcome.

The consequences of choice preoccupied my mind. It took a long time for the reality to sink in: many people don’t recognize that they can make independent choices; their “choices” have been predestined by a belief system that is so powerful that everything they do is shadowed by the question, “What am I supposed to do?” It was shocking to me that people suffered unnecessarily by sticking to roles that had been proven over and over again to result in physical and mental harm to both individuals and groups, and which brought humankind to a state of nearly universal and chronic suffering.

Technology and science appeared as bright spots in the dead gray fog of human behavior that plagued mankind. Radio, television, household appliances, bicycles, automobiles, photography, hot running water, antibiotics, aspirin, eyeglasses – all were advances in comfort, health and pleasure. But! On the new and mysterious TV in our living room, movies were shown that dramatized war and the “wonderful machines of war” that man had created. Soldiers were happy to be able to help out, as if they were at a communal barn-raising. They looked forward to killing strangers, whether men, women, children or animals, known as The Bad Guys, using guns, knives, grenades and flamethrowers to mangle, maim, and roast people alive. They did this, and then smoked cigarettes. War was fun: a joyful guy thing. The actual horror was ignored, except for an occasional hospital scene where doctors and nurses fixed wounded men so that they could go back and kill more people, or inevitably for some, to be killed. The reward for death and suffering was a cigarette if you lived and a flag and a speech about patriotism if you died.

I couldn’t imagine participating in a war, inflicting pain and death in horrific ways, and also risking my own life – for what? My life was given to me and was sacred. It didn’t belong to anyone else, especially to Big Men who were so careless as to throw lives away so easily.

The usual answer given to children was that there are The Bad Guys, and you have to kill The Bad Guys.

This wasn’t an answer simplified for a child; this was The Answer. It still is.

Many soldiers realize, once they are at war, that they are being used by the Big Men (human predators) to do their killing.

The Korean War began in 1950: we rushed in to “save” Korea from the communists: the country ended up being divided, and 28,000 U.S. troops are still deployed in S. Korea 64 years later.

Few American young people have any idea that the U.S. invaded Viet Nam and lost the war, with 58,000 dead American soldiers, and lost the country to the communist Viet Cong.

Better not ask the question, “How can God be on our side and theirs, too? Everyone says God is always on our side, therefore we are The Good Guys, but The Bad Guys say the same thing. It’s this loopy thinking that keeps people stuck. Why can’t people exit the loop?”

If one pressed the question of war, supplementary answers appeared: the technology developed in war time benefits civilians later. Improved emergency medical techniques, antibiotics, more accurate clocks, fast computers, and many other gadgets were developed to better prosecute war. I found it absurd and shocking that we must have wars in which millions suffer and die so that Mom can cook in a microwave oven and I can take penicillin for a strep throat. Isn’t the suffering brought by disease or accident sufficient motivation to develop medical treatments? The Bull Shit kept getting deeper.

I lived with a distinct biting anxiety over my obvious lack of sympathy for traditional ideas, which were presented as demands by those who had secured a rung of authority on The Pyramid. Lies were everywhere: in school, at church, at home, on television and in newspapers. I devoured history books, and biographies of artists, scientists and adventurers – many of whom were people who defied The Official Story, not as bad guys or crusaders or reformers, but because alternative explanations made more sense. They often had to hide their work and lived precarious lives, only to have their ideas rediscovered much later, when people discovered profit in them. A happy few gained protection from a powerful patron, and saw their ideas exploited to perpetuate The Official Story that war is necessary, and isn’t it great to have bigger and better weapons, so that our side can kill more and more of The Bad Guys, and whole swathes of innocent bystanders who somehow get in the way.

I listened to educated people make abundant excuses as to why any improvement is impossible, or must be carried out in the way it has always been done, despite acknowledged failures, as if they were driving forward, but with the parking brake set. “Let’s just throw some platitudes and money at the problem. Maybe it will stick,” is proof that humans are not very smart. Social humans claim to possess all sorts of intelligence and problem-solving skills, and then fall flat on their faces in the same old ruts.

After a lifetime of wondering why humans make life intolerable, I was informed that I am Asperger, which means that I’m not a Social human, but I still have to wait for the nukes to fall, just like everyone else…

 

The Mismeasure of Man / Math and Medicine / Stanford Video

Let’s face it: Human interpretations of “technology-data” suck.

Perception does not equal reality. Observer variation messes up medical decisions.

Making Stone Tools / A Non-verbal Process

A super group of videos…

Not a single word is needed to do this or to teach someone else to do this. The “tips” at the end would be demonstrated during the process. Children would see tools and other objects made day in and day out and would naturally copy their elders.

Archaeologists go on and on about how it takes “advanced cognitive skills” (like those needed to push around a shopping cart and swipe a credit card) to create stone tools. I have yet to hear a single researcher mention visual thinking. You can babble at a pile of stones, or another human, all day long, but all that yack-yacking will not produce one stone tool. The earliest stone tools are millions of years old; sophisticated flaked tools (Acheulean) were invented by Homo erectus, not Homo sapiens. Some research indicates that ‘language’ structure had its beginnings in sign language and not in vocalization. Pre- and early humans were visual observers, inventors and communicators – and not at all like modern social humans, who are a very recent “neotenic” variation of Homo sapiens.

All it takes is A FEW adept individuals to preserve techniques and to pass on skills. If a group were lucky, one “genius” might come up with improvements and refinements so that technical advancement could occur – which would probably be forgotten and reinvented many times. And critically, resources in one’s environment dictated solutions: nomadism provided exposure to new raw materials and new people, so “itchy feet” were likely more advantageous than staying in one place too long.


Debunking Left Brain, Right Brain Myth / Paper – U. Utah Neuroscience

An Evaluation of the Left-Brain vs. Right-Brain Hypothesis with Resting State Functional Connectivity Magnetic Resonance Imaging

Jared A. Nielsen, et al., Interdepartmental Program in Neuroscience, University of Utah, Salt Lake City, Utah, United States of America (See original for authors and affiliations)

Published: August 14, 2013

https://doi.org/10.1371/journal.pone.0071275 (Extensive paper with loads of supporting graphics, etc.) (Heavy going technical paper)

Abstract

Lateralized brain regions subserve functions such as language and visuospatial processing. It has been conjectured that individuals may be left-brain dominant or right-brain dominant based on personality and cognitive style, but neuroimaging data has not provided clear evidence whether such phenotypic differences in the strength of left-dominant or right-dominant networks exist. We evaluated whether strongly lateralized connections covaried within the same individuals. Data were analyzed from publicly available resting state scans for 1011 individuals between the ages of 7 and 29. For each subject, functional lateralization was measured for each pair of 7266 regions covering the gray matter at 5-mm resolution as a difference in correlation before and after inverting images across the midsagittal plane. The difference in gray matter density between homotopic coordinates was used as a regressor to reduce the effect of structural asymmetries on functional lateralization. Nine left- and 11 right-lateralized hubs were identified as peaks in the degree map from the graph of significantly lateralized connections. The left-lateralized hubs included regions from the default mode network (medial prefrontal cortex, posterior cingulate cortex, and temporoparietal junction) and language regions (e.g., Broca Area and Wernicke Area), whereas the right-lateralized hubs included regions from the attention control network (e.g., lateral intraparietal sulcus, anterior insula, area MT, and frontal eye fields). Left- and right-lateralized hubs formed two separable networks of mutually lateralized regions. Connections involving only left- or only right-lateralized hubs showed positive correlation across subjects, but only for connections sharing a node. Lateralization of brain connections appears to be a local rather than global property of brain networks, and our data are not consistent with a whole-brain phenotype of greater “left-brained” or greater “right-brained” network strength across individuals. Small increases in lateralization with age were seen, but no differences in gender were observed.

From Discussion

In popular reports, “left-brained” and “right-brained” have become terms associated with both personality traits and cognitive strategies, with a “left-brained” individual or cognitive style typically associated with a logical, methodical approach and “right-brained” with a more creative, fluid, and intuitive approach. Based on the brain regions we identified as hubs in the broader left-dominant and right-dominant connectivity networks, a more consistent schema might include left-dominant connections associated with language and perception of internal stimuli, and right-dominant connections associated with attention to external stimuli.

Yet our analyses suggest that an individual brain is not “left-brained” or “right-brained” as a global property, but that asymmetric lateralization is a property of individual nodes or local subnetworks, and that different aspects of the left-dominant network and right-dominant network may show relatively greater or lesser lateralization within an individual. If a connection involving one of the left hubs is strongly left-lateralized in an individual, then other connections in the left-dominant network also involving this hub may also be more strongly left lateralized, but this did not translate to a significantly generalized lateralization of the left-dominant network or right-dominant network. Similarly, if a left-dominant network connection was strongly left lateralized, this had no significant effect on the degree of lateralization within connections in the right-dominant network, except for those connections where a left-lateralized connection included a hub that was overlapping or close to a homotopic right-lateralized hub.

It is also possible that the relationship between structural lateralization and functional lateralization is more than an artifact. Brain regions with more gray matter in one hemisphere may develop lateralization of brain functions ascribed to those regions. Alternately, if a functional asymmetry develops in a brain region, it is possible that there may be hypertrophy of gray matter in that region. The extent to which structural and functional asymmetries co-evolve in development will require further study, including imaging at earlier points in development and with longitudinal imaging metrics, and whether asymmetric white matter projections [52], [53] contribute to lateralization of functional connectivity.

We observed a weak generalized trend toward greater lateralization of connectivity with age between the 20 hubs included in the analysis, but most individual connections did not show significant age-related changes in lateralization. The weak changes in lateralization with age should be interpreted with caution because the correlations included >1000 data points, so very subtle differences may be observed that are not associated with behavioral or cognitive differences. Prior reports with smaller sample sizes have reported differences in lateralization during adolescence in prefrontal cortex [54] as well as decreased structural asymmetry with age over a similar age range [55].

Similarly, we saw no differences in functional lateralization with gender. These results differ from prior studies in which significant gender differences in functional connectivity lateralization were reported [16], [17]. This may be due to differing methods between the two studies, including the use of short-range connectivity in one of the former reports and correction for structural asymmetries in this report. A prior study performing graph-theoretical analysis of resting state functional connectivity data using a predefined parcellation of the brain also found no significant effects of hemispheric asymmetry with gender, but reported that males tended to be more locally efficient in their right hemispheres and females tended to be more locally efficient in their left hemispheres [56].

It is intriguing that two hubs of both the left-lateralized and right-lateralized network are nearly homotopic. Maximal left-lateralization in Broca Area corresponds to a similar right-lateralized homotopic cluster extending to include the anterior insula in the salience network. Although both networks have bilateral homologues in the inferior frontal gyrus/anterior insular region, it is possible that the relative boundaries of Broca Homologue on the right and the frontoinsular salience region may “compete” for adjacent brain cortical function. Future studies in populations characterized for personality traits [57] or language function may be informative as to whether local connectivity differences in these regions are reflected in behavioral traits or abilities. The study is limited by the lack of behavioral data and subject ascertainment available in the subject sample. In particular, source data regarding handedness is lacking. However, none of the hubs in our left- and right- lateralized networks involve primary motor or sensory cortices and none of the lateralized connections showed significant correlation with metrics of handedness in subjects for whom data was available.

Despite the need for further study of the relationship between behavior and lateralized connectivity, we demonstrate that left- and right-lateralized networks are homogeneously stronger among a constellation of hubs in the left and right hemispheres, but that such connections do not result in a subject-specific global brain lateralization difference that favors one network over the other (i.e. left-brained or right-brained). Rather, lateralized brain networks appear to show local correlation across subjects with only weak changes from childhood into early adulthood and very small if any differences with gender.