Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso


Jul 1st, Mon

Why It’s Good To Be Wrong. David Deutsch on Fallibilism


"That human beings can be mistaken in anything they think or do is a proposition known as fallibilism. (…)

The trouble is that error is a subject where issues such as logical paradox, self-reference, and the inherent limits of reason rear their ugly heads in practical situations, and bite.

Paradoxes seem to appear when one considers the implications of one’s own fallibility: A fallibilist cannot claim to be infallible even about fallibilism itself. And so, one is forced to doubt that fallibilism is universally true. Which is the same as wondering whether one might be somehow infallible—at least about some things. For instance, can it be true that absolutely anything that you think is true, no matter how certain you are, might be false?

What? How might we be mistaken that two plus two is four? Or about other matters of pure logic? That stubbing one’s toe hurts? That there is a force of gravity pulling us to earth? Or that, as the philosopher René Descartes argued, “I think, therefore I am”?

When fallibilism starts to seem paradoxical, the mistakes begin. We are inclined to seek foundations—solid ground in the vast quicksand of human opinion—on which one can try to base everything else. Throughout the ages, the false authority of experience and the false reassurance of probability have been mistaken for such foundations: “No, we’re not always right,” your parents tell you, “just usually.” They have been on earth longer and think they have seen this situation before. But since that is an argument for “therefore you should always do as we say,” it is functionally a claim of infallibility after all. Moreover, look more closely: It claims literal infallibility too. Can anyone be infallibly right about the probability that they are right? (…)

The fact is, there’s nothing infallible about “direct experience” (…). Indeed, experience is never direct. It is a sort of virtual reality, created by our brains using sketchy and flawed sensory clues, given substance only by fallible expectations, explanations, and interpretations. Those can easily be more mistaken than the testimony of the passing hobo. If you doubt this, look at the work of psychologists Christopher Chabris and Daniel Simons, and verify by direct experience the fallibility of your own direct experience. Furthermore, the idea that your reminiscences are infallible is also heresy by the very doctrine that you are faithful to. (…)

I’ll tell you what really happened. You witnessed a dress rehearsal. The real ex cathedra ceremony was on the following day. In order not to make the declaration a day early, they substituted for the real text (which was about some arcane theological issue, not gravity) a lorem-ipsum-type placeholder that they deemed so absurd that any serious listener would immediately realize that that’s what it was. 

And indeed, you did realize this; and as a result, you reinterpreted your “direct experience,” which was identical to that of witnessing an ex cathedra declaration, as not being one. Precisely by reasoning that the content of the declaration was absurd, you concluded that you didn’t have to believe it. Which is also what you would have done if you hadn’t believed the infallibility doctrine.

You remain a believer, serious about giving your faith absolute priority over your own “unaided” reason (as reason is called in these contexts). But that very seriousness has forced you to decide first on the substance of the issue, using reason, and only then whether to defer to the infallible authority. This is neither fluke nor paradox. It is simply that if you take ideas seriously, there is no escape, even in dogma and faith, from the obligation to use reason and to give it priority over dogma, faith, and obedience. (…)

It is hard to contain reason within bounds. If you take your faith sufficiently seriously you may realize that it is not only the printers who are fallible in stating the rules for ex cathedra, but also the committee that wrote down those rules. And then that nothing can infallibly tell you what is infallible, nor what is probable. It is precisely because you, being fallible and having no infallible access to the infallible authority, no infallible way of interpreting what the authority means, and no infallible means of identifying an infallible authority in the first place, that infallibility cannot help you before reason has had its say. 

A related useful thing that faith tells you, if you take it seriously enough, is that the great majority of people who believe something on faith, in fact believe falsehoods. Hence, faith is insufficient for true belief. As the Nobel-Prize-winning biologist Peter Medawar said: “the intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not.”

You know that Medawar’s advice holds for all ideas, not just scientific ones, and applies, by the same argument, to all the other diverse things that are held up as infallible (or probable) touchstones of truth: holy books; the evidence of the senses; statements about who is probably right; even true love. (…)

This logic of fallibility, discovered and rediscovered from time to time, has had profound salutary effects in the history of ideas. Whenever anything demands blind obedience, its ideology contains a claim of infallibility somewhere; but wherever someone believes seriously enough in that infallibility, they rediscover the need for reason to identify and correctly interpret the infallible source. Thus the sages of ancient Judaism were led, by the assumption of the Bible’s infallibility, to develop their tradition of critical discussion. And in an apparently remote application of the same logic, the British constitutional doctrine of “parliamentary sovereignty” was used by 20th-century judges such as Lord Denning to develop an institution of judicial review similar to that which, in the United States, had grown out of the opposite doctrine of “separation of powers.”

Fallibilism has practical consequences for the methodology and administration of science, and in government, law, education, and every aspect of public life. The philosopher Karl Popper elaborated on many of these. He wrote:

The question about the sources of our knowledge … has always been asked in the spirit of: ‘What are the best sources of our knowledge—the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist—no more than ideal rulers—and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’

It’s all about error. We used to think that there was a way to organize ourselves that would minimize errors. This is an infallibilist chimera that has been part of every tyranny since time immemorial, from the “divine right of kings” to centralized economic planning. And it is implemented by many patterns of thought that protect misconceptions in individual minds, making someone blind to evidence that he isn’t Napoleon, or making the scientific crank reinterpret peer review as a conspiracy to keep falsehoods in place. (…)

Popper’s answer is: We can hope to detect and eliminate error if we set up traditions of criticism—substantive criticism, directed at the content of ideas, not their sources, and directed at whether they solve the problems that they purport to solve. Here is another apparent paradox, for a tradition is a set of ideas that stay the same, while criticism is an attempt to change ideas. But there is no contradiction. Our systems of checks and balances are steeped in traditions—such as freedom of speech and of the press, elections, and parliamentary procedures, the values behind concepts of contract and of tort—that survive not because they are deferred to but precisely because they are not: They themselves are continually criticized, and either survive criticism (which allows them to be adopted without deference) or are improved (for example, when the franchise is extended, or slavery abolished). Democracy, in this conception, is not a system for enforcing obedience to the authority of the majority. In the bigger picture, it is a mechanism for promoting the creation of consent, by creating objectively better ideas, by eliminating errors from existing ones.

“Our whole problem,” said the physicist John Wheeler, “is to make the mistakes as fast as possible.” This liberating thought is more obviously true in theoretical physics than in situations where mistakes hurt. A mistake in a military operation, or a surgical operation, can kill. But that only means that whenever possible we should make the mistakes in theory, or in the laboratory; we should “let our theories die in our place,” as Popper put it. But when the enemy is at the gates, or the patient is dying, one cannot confine oneself to theory. We should abjure the traditional totalitarian assumption, still lurking in almost every educational system, that every mistake is the result of wrongdoing or stupidity. For that implies that everyone other than the stupid and the wrongdoers is infallible. Headline writers should not call every failed military strike “botched;” courts should not call every medical tragedy malpractice, even if it’s true that they “shouldn’t have happened” in the sense that lessons can be learned to prevent them from happening again. “We are all alike,” as Popper remarked, “in our infinite ignorance.” And this is a good and hopeful thing, for it allows for a future of unbounded improvement.

Fallibilism, correctly understood, implies the possibility, not the impossibility, of knowledge, because the very concept of error, if taken seriously, implies that truth exists and can be found. The inherent limitation on human reason, that it can never find solid foundations for ideas, does not constitute any sort of limit on the creation of objective knowledge nor, therefore, on progress. The absence of foundation, whether infallible or probable, is no loss to anyone except tyrants and charlatans, because what the rest of us want from ideas is their content, not their provenance: If your disease has been cured by medical science, and you then become aware that science never proves anything but only disproves theories (and then only tentatively), you do not respond “oh dear, I’ll just have to die, then.” (…)

The theory of knowledge is a tightrope that is the only path from A to B, with a long, hard drop for anyone who steps off on one side into “knowledge is impossible, progress is an illusion” or on the other side into “I must be right, or at least probably right.” Indeed, infallibilism and nihilism are twins. Both fail to understand that mistakes are not only inevitable, they are correctable (fallibly). Which is why they both abhor institutions of substantive criticism and error correction, and denigrate rational thought as useless or fraudulent. They both justify the same tyrannies. They both justify each other.

I must now apologize for trying to trick you earlier: All the ideas that I suggested we might know infallibly are in fact falsehoods. “Two plus two” of course isn’t “four” as you’d discover if you wrote “2+2” in an arithmetic test when asked to add two and two. If we were infallible about matters of pure logic, no one would ever fail a logic test either. Stubbing your toe does not always hurt if you are focused on some overriding priority like rescuing a comrade in battle. And as for knowing that “I” exist because I think—note that your knowledge that you think is only a memory of what you did think, a second or so ago, and that can easily be a false memory. (For discussions of some fascinating experiments demonstrating this, see Daniel Dennett’s book Brainstorms.) Moreover, if you think you are Napoleon, the person you think must exist because you think, doesn’t exist.

And the general theory of relativity denies that gravity exerts a force on falling objects. The pope would actually be on firm ground if he were to concur with that ex cathedra. Now, are you going to defer to my authority as a physicist about that? Or decide that modern physics is a sham? Or are you going to decide according to whether that claim really has survived all rational attempts to refute it?”

David Deutsch, a British physicist at the University of Oxford and a non-stipendiary Visiting Professor in the Department of Atomic and Laser Physics at the Centre for Quantum Computation (CQC) in the Clarendon Laboratory, Why It’s Good To Be Wrong, Nautilus, 2013. (Illustration by Gérard DuBois)

See also:

David Deutsch: A new way to explain explanation, TED, 2009
David Deutsch on knowledge as crafted self-similarity
David Deutsch on Artificial Intelligence

Dec 10th, Mon

Cargo cult science by Richard Feynman

Adapted from the Caltech commencement address given in 1974.

"During the Middle Ages there were all kinds of crazy ideas, such as that a piece of rhinoceros horn would increase potency. Then a method was discovered for separating the ideas—which was to try one to see if it worked, and if it didn’t work, to eliminate it. This method became organized, of course, into science. And it developed very well, so that we are now in the scientific age. It is such a scientific age, in fact that we have difficulty in understanding how witch doctors could ever have existed, when nothing that they proposed ever really worked—or very little of it did.
 
But even today I meet lots of people who sooner or later get me into a conversation about UFOs, or astrology, or some form of mysticism, expanded consciousness, new types of awareness, ESP, and so forth. And I’ve concluded that it’s not a scientific world.
 
Most people believe so many wonderful things that I decided to investigate why they did. And what has been referred to as my curiosity for investigation has landed me in a difficulty where I found so much junk that I’m overwhelmed. First I started out by investigating various ideas of mysticism, and mystic experiences. I went into isolation tanks and got many hours of hallucinations, so I know something about that. Then I went to Esalen, which is a hotbed of this kind of thought (it’s a wonderful place; you should go visit there). Then I became overwhelmed. I didn’t realize how much there was.
 
At Esalen there are some large baths fed by hot springs situated on a ledge about thirty feet above the ocean. One of my most pleasurable experiences has been to sit in one of those baths and watch the waves crashing onto the rocky shore below, to gaze into the clear blue sky above, and to study a beautiful nude as she quietly appears and settles into the bath with me.
 
One time I sat down in a bath where there was a beautiful girl sitting with a guy who didn’t seem to know her. Right away I began thinking, “Gee! How am I gonna get started talking to this beautiful nude babe?”
 
I’m trying to figure out what to say, when the guy says to her, “I’m, uh, studying massage. Could I practice on you?”
 
"Sure," she says. They get out of the bath and she lies down on a massage table nearby.
 
I think to myself, “What a nifty line! I can never think of anything like that!” He starts to rub her big toe. “I think I feel it,” he says. “I feel a kind of dent—is that the pituitary?”
 
I blurt out, “You’re a helluva long way from the pituitary, man!”
 
They looked at me, horrified—I had blown my cover—and said, “It’s reflexology!”
 
I quickly closed my eyes and appeared to be meditating.
 
That’s just an example of the kind of things that overwhelm me. I also looked into extrasensory perception and PSI phenomena, and the latest craze there was Uri Geller, a man who is supposed to be able to bend keys by rubbing them with his finger. So I went to his hotel room, on his invitation, to see a demonstration of both mindreading and bending keys. He didn’t do any mindreading that succeeded; nobody can read my mind, I guess. And my boy held a key and Geller rubbed it, and nothing happened. Then he told us it works better under water, and so you can picture all of us standing in the bathroom with the water turned on and the key under it, and him rubbing the key with his finger. Nothing happened. So I was unable to investigate that phenomenon.
 
But then I began to think, what else is there that we believe? (And I thought then about the witch doctors, and how easy it would have been to check on them by noticing that nothing really worked.) So I found things that even more people believe, such as that we have some knowledge of how to educate. There are big schools of reading methods and mathematics methods, and so forth, but if you notice, you’ll see the reading scores keep going down—or hardly going up—in spite of the fact that we continually use these same people to improve the methods. There’s a witch doctor remedy that doesn’t work. It ought to be looked into; how do they know that their method should work? Another example is how to treat criminals. We obviously have made no progress—lots of theory, but no progress—in decreasing the amount of crime by the method that we use to handle criminals.
 
Yet these things are said to be scientific. We study them. And I think ordinary people with commonsense ideas are intimidated by this pseudoscience. A teacher who has some good idea of how to teach her children to read is forced by the school system to do it some other way—or is even fooled by the school system into thinking that her method is not necessarily a good one. Or a parent of bad boys, after disciplining them in one way or another, feels guilty for the rest of her life because she didn’t do “the right thing,” according to the experts.
 
So we really ought to look into theories that don’t work, and science that isn’t science.
 
I think the educational and psychological studies I mentioned are examples of what I would like to call cargo cult science. In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to imitate things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he’s the controller—and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.
 
Now it behooves me, of course, to tell you what they’re missing. But it would be just about as difficult to explain to the South Sea Islanders how they have to arrange things so that they get some wealth in their system. It is not something simple like telling them how to improve the shapes of the earphones. But there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.
 
Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.
 
In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.
 
The easiest way to explain this idea is to contrast it, for example, with advertising. Last night I heard that Wesson oil doesn’t soak through food. Well, that’s true. It’s not dishonest; but the thing I’m talking about is not just a matter of not being dishonest, it’s a matter of scientific integrity, which is another level. The fact that should be added to that advertising statement is that no oils soak through food, if operated at a certain temperature. If operated at another temperature, they all will—including Wesson oil. So it’s the implication which has been conveyed, not the fact, which is true, and the difference is what we have to deal with.
 
We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work. And it’s this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.
 
A great deal of their difficulty is, of course, the difficulty of the subject and the inapplicability of the scientific method to the subject. Nevertheless it should be remarked that this is not the only difficulty. That’s why the planes didn’t land—but they don’t land.
 
We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
 
Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.
 
But this long history of learning how not to fool ourselves—of having utter scientific integrity—is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.
 
The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.
 
I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I am not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you are maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.
 
For example, I was a little surprised when I was talking to a friend who was going to go on the radio. He does work on cosmology and astronomy, and he wondered how he would explain what the applications of this work were. “Well,” I said, “there aren’t any.” He said, “Yes, but then we won’t get support for more research of this kind.” I think that’s kind of dishonest. If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing—and if they don’t want to support you under those circumstances, then that’s their decision.
 
One example of the principle is this: If you’ve made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish both kinds of results.
 
I say that’s also important in giving certain types of government advice. Supposing a senator asked you for advice about whether drilling a hole should be done in his state; and you decide it would be better in some other state. If you don’t publish such a result, it seems to me you’re not giving scientific advice. You’re being used. If your answer happens to come out in the direction the government or the politicians like, they can use it as an argument in their favor; if it comes out the other way, they don’t publish it at all. That’s not giving scientific advice.
 
Other kinds of errors are more characteristic of poor science. When I was at Cornell, I often talked to the people in the psychology department. One of the students told me she wanted to do an experiment that went something like this—it had been found by others that under certain circumstances, X, rats did something, A. She was curious as to whether, if she changed the circumstances to Y, they would still do A. So her proposal was to do the experiment under circumstances Y and see if they still did A.
 
I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person—to do it under condition X to see if she could also get result A, and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control.
 
She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1947 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happens.
 
Nowadays there’s a certain danger of the same thing happening, even in the famous (?) field of physics. I was shocked to hear of an experiment done at the big accelerator at the National Accelerator Laboratory, where a person used deuterium. In order to compare his heavy hydrogen results to what might happen with light hydrogen he had to use data from someone else’s experiment on light hydrogen, which was done on different apparatus. When asked why, he said it was because he couldn’t get time on the program (because there’s so little time and it’s such expensive apparatus) to do the experiment with light hydrogen on this apparatus because there wouldn’t be any new result. And so the men in charge of programs at NAL are so anxious for new results, in order to get more money to keep the thing going for public relations purposes, they are destroying—possibly—the value of the experiments themselves, which is the whole purpose of the thing. It is often hard for the experimenters there to complete their work as their scientific integrity demands.
 
All experiments in psychology are not of this type, however. For example, there have been many experiments running rats through all kinds of mazes, and so on—with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.
 
The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and still the rats could tell.
 
He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.
 
Now, from a scientific standpoint, that is an A-number-one experiment. That is the experiment that makes rat-running experiments sensible, because it uncovers the clues that the rat is really using—not what you think it’s using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat-running.
 
I looked into the subsequent history of this research. The next experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn’t discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats. But not paying attention to experiments like that is a characteristic of cargo cult science.
 
Another example is the ESP experiments of Mr. Rhine, and other people. As various people have made criticisms—and they themselves have made criticisms of their own experiments—they improve the techniques so that the effects are smaller, and smaller, and smaller until they gradually disappear. All the parapsychologists are looking for some experiment that can be repeated—that you can do again and get the same effect—statistically, even. They run a million rats—no, it’s people this time—they do a lot of things and get a certain statistical effect. Next time they try it they don’t get it any more. And now you find a man saying that it is an irrelevant demand to expect a repeatable experiment. This is science?
 
This man also speaks about a new institution, in a talk in which he was resigning as Director of the Institute of Parapsychology. And, in telling people what to do next, he says that one of the things they have to do is be sure they only train students who have shown their ability to get PSI results to an acceptable extent— not to waste their time on those ambitious and interested students who get only chance results. It is very dangerous to have such a policy in teaching—to teach students only how to get certain results, rather than how to do an experiment with scientific integrity.
 
So I have just one wish for you—the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom.”     


Richard Feynman, American theoretical physicist known for his work in the path integral formulation of quantum mechanics, the theory of quantum electrodynamics, and the physics of the superfluidity of supercooled liquid helium, as well as in particle physics (he proposed the parton model), Laureate of the Nobel Prize in Physics (1918-1988), Cargo cult science, Caltech commencement address given in 1974. (Pictures source: 1) Scientific American, 2) Richard Feynman at Caltech giving his famous lecture entitled "There’s Plenty of Room at the Bottom." (credit: California Institute of Technology))

See also:

Richard Feynman on how we would look for a new law (the key to science)
Richard Feynman on the way nature work: “You don’t like it? Go somewhere else!”
Richard Feynman on the likelihood of Flying Saucers
Richard Feynman tag on Lapidarium

Nov 7th, Mon

How Epicurus’ ideas survived through Lucretius’ poetry, and led to toleration

Illustration: Oxford: Anthony Stephens, 1683

Hunc igitur terrorem animi tenebrasque necessest
non radii solis neque lucida tela diei
discutiant, sed naturae species ratioque.

"Therefore it is necessary that neither the rays of the sun nor the shining spears of Day should shatter this terror and darkness of the mind, but the aspect and reason of nature."

— Lucretius, De Rerum Natura (On the Nature of Things), Book I, lines 90-93.

As Greenblatt describes it, Lucretius (borrowing from Democritus and others) says [more than 2,000 years ago] the universe is made of an infinite number of atoms:

"Moving randomly through space, like dust motes in a sunbeam, colliding, hooking together, forming complex structures, breaking apart again, in a ceaseless process of creation and destruction. There is no escape from this process. (…) There is no master plan, no divine architect, no intelligent design.

All things, including the species to which you belong, have evolved over vast stretches of time. The evolution is random, though in the case of living organisms, it involves a principle of natural selection. That is, species that are suited to survive and to reproduce successfully, endure, at least for a time; those that are not so well suited, die off quickly. But nothing — from our own species, to the planet on which we live, to the sun that lights our day — lasts forever. Only the atoms are immortal.”

— cited in Lucretius, Man Of Modern Mystery, NPR, Sep 19, 2011

””On the Nature of Things,” a poem written 2,000 years ago that flouted many mainstream concepts, helped the Western world to ease into modernity. (…)

Harvard literary scholar Stephen Greenblatt has proposed a sort of metaphor for how the world became modern. An ancient Roman poem, lost for 1,000 years, was recovered in 1417. Its presciently modern ideas — that the world is made of atoms, that there is no life after death, and that there is no purpose to creation beyond pleasure — dropped like an atomic bomb on the fixedly Christian culture of Western Europe.

But this poem’s radical and transformative ideas survived what could have been a full-blown campaign against it, said Greenblatt. (…) One reason is that it was art. A tract would have drawn the critical attention of the authorities, who during the Renaissance still hewed to Augustine’s notion that Christian beliefs were “unshakeable, unchangeable, coherent.”

The ancient poem that contained such explosive ideas, and that packaged them so pleasingly, was “On the Nature of Things” (“De Rerum Natura”) by Roman poet and philosopher Titus Lucretius Carus, who died five decades before the start of the Christian era. Its intent was to counter the fear of death and the fear of the supernatural. Lucretius rendered into poetry the ideas of Epicurus, a Greek philosopher who had died some 200 years earlier. Both men embraced a core idea: that life was about the pursuit of pleasure and the avoidance of pain. (…)

Among the most stunning ideas Lucretius promoted in his poem was that the world is made of atoms, imperishable bits of matter he called “seeds.” All the rest was void — nothingness. Atoms never disappeared, but were material grist for the world’s ceaseless change, without any creator or design or afterlife.

These ideas, “drawn from a defunct pagan past,” were intolerable in 15th-century Europe, said Greenblatt, so much so that for the next 200 years they had to survive every “formal and informal mechanism of aversion and repression” of the age.

“A few wild exceptions” embraced this pagan past explicitly, said Greenblatt, including Dominican friar Giordano Bruno, whose “fatal public advocacy” of Lucretius came to an end in 1600. Branded a pantheist, he was imprisoned, tortured, and burned at the stake.

But the poem itself, a repository of intolerable ideas, was allowed to circulate. How was this so?

Greenblatt offered three explicit reasons:

— Reading strategies. In the spirit of commonplace books, readers of that era focused on individual passages rather than larger (and disturbing) meanings. Readers preferred to see the poem as a primer on Latin and Greek grammar, philology, natural history, and Roman culture.

— Scholarship. Official commentaries on the text were not intended to revive the radical ideas of Lucretius, but to put the language and imagery of a “dead work” in context, “a homeostatic survival,” said Greenblatt, “to make the corpse accessible.” He showed an image from a 1511 scholarly edition of the poem, in which single lines on each page lay “like a cadaver on a table,” surrounded by elaborate scholarly text. But the result was still preservation. “Scholarship,” he said, “is rarely credited properly in the history of toleration.”

— Aesthetics. A 1563 annotated edition of the poem acknowledged that its precepts were alien to Christian belief, but “it is no less a poem.”

“Certainly almost every one of the key principles was an offense to right-thinking Christians,” said Greenblatt. “But the poetry was compellingly, stunningly beautiful.”

Its “immensely seductive form,” he said — the soul of tolerance — helped to make aesthetics the concept that bridged the gap between the Renaissance and the early modern age.

Michel de Montaigne, the 16th-century French nobleman who invented the art of the essay, helped to maintain that aesthetic thread. His work includes almost 100 quotations from Lucretius. It was explicitly aesthetic appreciation of the old Roman, said Greenblatt, despite Montaigne’s own “genial willingness to submit to Christian orthodoxy.”

In the end, Lucretius and the ideas he borrowed from Epicurus survived because of art. “That aesthetic dimension of the ancient work (…) was the key element in the survival and transmission of what was perceived (…) by virtually everyone in the world to be intolerable,” said Greenblatt. “The thought police were only rarely called in to investigate works of art.”

One irony abides. Epicurus himself was known to say, “I spit on poetry,” yet his ideas only survive because of it. Lucretius saw his art as “honey smeared around the lip of a cup,” said Greenblatt, “that would enable readers to drink it down.”

The Roman poet thought there was no creator or afterlife, but that “should not bring with it a cold emptiness,” said Greenblatt. “It shouldn’t be only the priests of the world, with their delusions, who could convey to you that feeling of the deepest wonder.””

— Corydon Ireland, Through artistry, toleration, Harvard Gazette, Oct 31, 2011

See also:

☞ Lucretius, On the Nature of Things (1st century B.C.), History of Science Online

"In De rerum natura (On the Nature of Things), the poet Lucretius (ca. 50 BC) fused the atomic theory of Democritus and Leucippus with the philosophy of Epicurus in order to argue against the existence of the gods. While ordinary humans might fear the thunderbolts of Jove or torments in the underworld after death, Lucretius advised his readers to take courage in the knowledge that death is merely a dissolution of the body, as atoms combine and reassemble according to chance as they move through the void. Against the Stoics, Aristotelians, and Neoplatonists, Lucretius argued for a mechanistic universe governed by chance. He also argued for a plurality of worlds (and these planets, like the Earth, need not be spherical) and a non-hierarchical universe. Despite the paucity of ancient readers persuaded by Lucretius’ arguments, his work was almost universally admired as a masterful example of Latin style.”

Titus Lucretius Carus (ca. 99 BCE – ca. 55 BCE) was a Roman poet and philosopher.

See also:

Stephen Greenblatt, The Answer Man, The New Yorker, Aug 8, 2011
Lucretius, Man Of Modern Mystery, NPR, Sep 19, 2011
☞ Christian Flow, Swerves, Harvard Magazine Jul-Aug 2011
Lucretius on the infinite universe, the beginning of things and the likelihood of extraterrestrial life, Lapidarium
Lucretius: ‘O unhappy race of men, when they ascribed actions to the gods’, Lapidarium

Jul 3rd, Sun

George Lakoff on metaphors, explanatory journalism and the ‘Real Rationality’

    

“Metaphor is a fundamental mechanism of mind, one that allows us to use what we know about our physical and social experience to provide understanding of countless other subjects. Because such metaphors structure our most basic understandings of our experience, they are “metaphors we live by”—metaphors that can shape our perceptions and actions without our ever noticing them. (…)

We are neural beings, (…) our brains take their input from the rest of our bodies. What our bodies are like and how they function in the world thus structures the very concepts we can use to think. We cannot think just anything – only what our embodied brains permit. (…)

The mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical.”

George Lakoff, cited in Daniel Lende, Brainy Trees, Metaphorical Forests: On Neuroscience, Embodiment, and Architecture, Neuroanthropology, Jan 10, 2012.

"For Lakoff, language is not a neutral system of communication, because it is always based on frames, conceptual metaphors, narratives, and emotions. Political thought and language is inherently moral and emotional. (…)

The way people really reason — Real Rationality — is coming into view through new understandings of the brain, something that up-to-date marketers have already caught on to. Enlightenment reason, we now know, was a false theory of rationality.

Most thought is unconscious. It doesn’t work by mathematical logic. You can’t reason directly about the world—because you can only conceptualize what your brain and body allow, and because ideas are structured using frames,” Lakoff says. “As Charles Fillmore has shown, all words are defined in terms of conceptual frames, not in terms of some putative objective, mind-free world.”

“People really reason using the logic of frames, metaphors, and narratives, and real decision making requires emotion, as Antonio Damasio showed in Descartes’ Error.” 

“A lot of reason does not serve self interest, but is rather about empathizing with and connecting to others.”

People Don’t Decide Using ‘Just the Facts’

Contemporary explanatory journalism, in particular, is prone to the false belief that if the facts are presented to people clearly enough, they will accept and act upon them, Lakoff says. “In the ‘marketplace of ideas’ theory, the best factually based logical argument will always win. But this doesn’t actually happen.”

“Journalists always wonder, ‘We’ve reported on all the arguments, why do people vote wrong?’” Lakoff says. “They’ve missed the main event.”

Many journalists think that “framing” a story or issue is “just about choices of words and manipulation,” and that one can report factually and neutrally without framing. But language itself isn’t neutral. If you study the way the brain processes language, Lakoff says, “every word is defined with respect to frames. You’re framing all the time.” Morality and emotion are already embedded in the way people think and the way people perceive certain words—and most of this processing happens unconsciously. “You can only learn things that fit in with what your brain will allow,” Lakoff says.

A recent example? The unhappy phrase “public option.”

“When you say public, it means ‘government’ to conservatives,” Lakoff explains. “When you say ‘option,’ it means two things: it’s not necessary, it’s just an ‘option,’ and secondly it’s a public policy term, a bureaucratic term. To conservatives, ‘public option’ means government bureaucracy, the very worst thing you could have named this. They could have called it the America Plan. They could have called it doctor-patient care.”

According to Lakoff, because of the conservative success in shaping public discourse through their elaborate communication system, the most commonly used words often have been given a conservative meaning. “Tax relief,” for example, suggests that taxation is an affliction to be relieved.

Don’t Repeat the Language Politicians Use: Decode It

Instead of simply adopting the language politicians use to frame an issue, Lakoff argues, journalists need to analyze the language political figures use and explain the moral content of particular words and arguments.

That means, for example, not just quoting a politician about whether a certain policy infringes or supports American “liberty,” but explaining what he or she means by “liberty,” how this conception of liberty fits into the politician’s overall moral outlook, and how it contrasts with other conceptions of liberty.

It also means spelling out the full implications of the metaphors politicians choose. In the recent coverage of health care reform, Lakoff says, one of the “hidden metaphors” that needed to be explored was whether politicians were talking about healthcare as a commodity or as a necessity and a right.

Back on the 2007 presidential campaign trail, Lakoff pointed out, Rudy Giuliani called Obama’s health care plans “socialist,” while he himself compared buying health care to buying a flat-screen TV set, using the metaphor of health care as a commodity, not a necessity. A few liberal bloggers were outraged, but several newspapers reported his use of the metaphor without comment or analysis, rather than exploring what it revealed about Giuliani’s worldview. (…)

A Dictionary of the Real Meanings of Words

What would a nonpartisan explanatory journalism be like? To make nonpartisan decoding easier, Lakoff thinks journalists should create an online dictionary of the different meanings of words—“not just a glossary, but a little Wikipedia-like website,” as he puts it. This site would have entries to explain the differences between the moral frameworks of conservatives and progressives, and what they each typically mean when they say words like “freedom.” Journalists across the country could link to the site whenever they sensed a contested word.

A project like this would generate plenty of resistance, Lakoff acknowledges. “What that says is most people don’t know what they think. That’s extremely scary… the public doesn’t want to be told, ‘You don’t know what you think.’ The fact is that about 98 percent of thought is unconscious.”

But, he says, people are also grateful when they’re told what’s really going on, and why political figures reason as they do. He would like to see a weekly column in the New York Times and other newspapers decoding language and framing, and analyzing what can and cannot be said politically, and he’d also like to see cognitive science and the study of framing added to journalism school curricula.

Ditch Objectivity, Balance, and ‘The Center’

Lakoff has two further sets of advice for improving explanatory journalism. The first is to ditch journalism’s emphasis on balance. Global warming and evolution are real. Unscientific views are not needed for “balance.”

“The idea that truth is balanced, that objectivity is balanced, is just wrong,” Lakoff says. Objectivity is a valuable ideal when it means unbiased reporting, Lakoff argues. But too often, the need for objectivity means that journalists hide their own judgments of an issue behind “public opinion.” The journalistic tradition of “always having to get a quote from somebody else” when the truth is obvious is foolish, Lakoff says.

So is the naïve reporting of poll data, since poll results can change drastically depending on the language and the framing of the questions. The framing of the questions should be part of reporting on polls.

Finally, Lakoff’s research suggests that many Americans, perhaps 20 per cent, are “biconceptuals” who have both conservative and liberal moral systems in their brains, but apply them to different issues. In some cases they can switch from one ideological position to another, based on the way an issue is framed. These biconceptuals occupy the territory that’s usually labeled “centrist.” “There isn’t such a thing as ‘the center.’ There are just people who are conservative on some issues and liberal on others, with lots of variations occurring. Journalists accept the idea of a “center” with its own ideology, and that’s just not the case,” he says.

Journalists tell “stories.” Those stories are often narratives framed from a particular moral or political perspective. Journalists need to be more upfront about the moral and political underpinnings of the stories they write and the angles they choose.

Journalism Isn’t Neutral–It’s Based on Empathy

“Democracy is based on empathy, with people not just caring, but acting on that care —having social as well as personal responsibility…That’s a view that many journalists have. That’s the reason they become journalists rather than stockbrokers. They have a certain view of democracy. That’s why a lot of journalists are liberals. They actually care about how politics can hurt people, about the social causes of harm. That’s a really different view than the conservative view: if you get hurt and you haven’t taken personal responsibility, then you deserve to get hurt—as when you sign on to a mortgage you can’t pay. Investigative journalism is very much an ethical enterprise, and I think journalists have to ask themselves, ‘What is the ethics behind the enterprise?’ and not be ashamed of it.” Good investigative journalism uncovers real facts, but is done, and should be done, with a moral purpose.

To make a moral story look objective, “journalists tend to pin moral reactions on other people: ‘I’m going to find someone around here who thinks it’s outrageous’…This can make outrageous moral action into a matter of public opinion rather than ethics.”

In some ways, Lakoff’s suggestions were in line with the kind of journalism that one of our partners, the non-profit investigative journalism outlet ProPublica, already does. In its mission statement, ProPublica makes its commitment to “moral force” explicit. “Our work focuses exclusively on truly important stories, stories with ‘moral force,’” the statement reads. “We do this by producing journalism that shines a light on exploitation of the weak by the strong and on the failures of those with power to vindicate the trust placed in them.”

He emphasized the importance of doing follow-ups to investigative stories, rather than letting the public become jaded by a constant succession of outrages that flare on the front page and then disappear. Most of ProPublica’s investigations are ongoing and continually updated on its site.

‘Cognitive Explanation’: A Different Take on ProPublica’s Mission

But Lakoff also had some very nontraditional suggestions about what it would mean for ProPublica to embark on a different kind of explanatory journalism project. “There are two different forms of explanatory journalism. One is material explanation — the kind of investigative reporting now done at ProPublica: who got paid what by whom, what actions resulted in harm, and so on. All crucial,” he noted. “But equally crucial, and not done, is cognitive and communicative explanation.”

“Cognitive explanation depends on what conceptual system lies behind political positions on issues and how the working of people’s brains explains their political behavior. For example, since every word of political discourse evokes a frame and the moral system behind it, the superior conservative communication system reaches most Americans 24/7/365. The more one hears conservative language and not liberal language, the more the brains of those listening get changed. Conservative communication with an absence of liberal communication exerts political pressure on Democrats whose constituents hear conservative language all day every day. Explanatory journalism should be reporting on the causal effects of conservative framing and the conservative communicative superiority.”

“ProPublica seems not to be explicit about conflicting views of what constitutes ‘moral force.’ ProPublica does not seem to be covering the biggest story in the country, the split over what constitutes morality in public policy. Nor is it clear that ProPublica studies the details of framing that permeate public discourse. Instead, ProPublica assumes a view of “moral force” in deciding what to cover and how to cover it.

“For example, ProPublica has not covered the difference in moral reasoning behind the conservative and progressive views on tax policy, health care, global warming and energy policy, and so on for major issue after major issue.

“ProPublica also is not covering a major problem in policy-making — the assumption of classical views of rationality and the ways they have been scientifically disproved in the cognitive and brain sciences.

“ProPublica has not reported on the disparity between the conservative and liberal communication systems, nor has it covered the globalization of conservatism — the international exportation of American conservative strategists, framing, training, and communication networks.

“When ProPublica uncovers facts about organ transplants and nursing qualifications, that’s fine. But where is ProPublica on the reasons for the schisms in our politics? Explanatory journalism demands another level of understanding.

“ProPublica, for all its many virtues, has room for improvement, in much the same way as journalism in general — especially in explanatory journalism. Cognitive and communicative explanation must be added to material explanation.”

What Works In the Brain: Narrative & Metaphor

As for creating Explanatory Journalism that resonates with the way people process information, Lakoff suggested two familiar tools: narrative and metaphor.

The trick to finding the right metaphors for complicated systems, he said, is to figure out what metaphors the experts themselves use in the way they think. “Complex policy is usually understood metaphorically by people in the field,” Lakoff says. What’s crucial is learning how to distinguish the useful frames from the distorting or overly-simplistic ones.

As for explaining policy, Lakoff says, “the problem with this is that policy is made in a way that is not understandable…Communication is always seen as last, as the tail on the dog, whereas if you have a policy that people don’t understand, you’re going to lose. What’s the point of trying to get support for a major health care reform if no one understands it?”

One of the central problems with policy, Lakoff says, is that policy-makers tend to take their moral positions so much for granted that the policies they develop seem to them like the “merely practical” things to do.

Journalists need to restore the real context of policy, Lakoff says, by trying “to get people in the government and policy-makers in the think tanks to understand and talk about what the moral basis of their policy is, and to do this in terms that are understandable.”

George Lakoff, American cognitive linguist and professor of linguistics at the University of California, Berkeley, interviewed by Lois Beckett in Explain yourself: George Lakoff, cognitive linguist, explainer.net, 31 January, 2011 (Illustration source)

See also:

Professor George Lakoff: Reason is 98% Subconscious Metaphor in Frames & Cultural Narratives
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks, Lapidarium notes
☞ Metaphor tag on Lapidarium notes

Jun
20th
Mon
permalink

The Argumentative Theory: ‘Reason evolved to win arguments, not seek truth’


"For centuries thinkers have assumed that the uniquely human capacity for reasoning has existed to let people reach beyond mere perception and reflex in the search for truth. Rationality allowed a solitary thinker to blaze a path to philosophical, moral and scientific enlightenment.

Now some researchers are suggesting that reason evolved for a completely different purpose: to win arguments. Rationality, by this yardstick (and irrationality too, but we’ll get to that) is nothing more or less than a servant of the hard-wired compulsion to triumph in the debating arena. According to this view, bias, lack of logic and other supposed flaws that pollute the stream of reason are instead social adaptations that enable one group to persuade (and defeat) another. (…)

The idea, labeled the argumentative theory of reasoning, is the brainchild of French cognitive social scientists, and it has stirred excited discussion (and appalled dissent) among philosophers, political scientists, educators and psychologists, some of whom say it offers profound insight into the way people think and behave. The journal Behavioral and Brain Sciences devoted its April issue to debates over the theory, with participants challenging everything from the definition of reason to the origins of verbal communication.

“Reasoning doesn’t have this function of helping us to get better beliefs and make better decisions,” said Hugo Mercier, who is a co-author of the journal article, with Dan Sperber. “It was a purely social phenomenon. It evolved to help us convince others and to be careful when others try to convince us.” Truth and accuracy were beside the point.

Indeed, Mr. Sperber, a member of the Jean-Nicod research institute in Paris, first developed a version of the theory in 2000 to explain why evolution did not make the manifold flaws in reasoning go the way of the prehensile tail and the four-legged stride. Looking at a large body of psychological research, Mr. Sperber wanted to figure out why people persisted in picking out evidence that supported their views and ignored the rest — what is known as confirmation bias — leading them to hold on to a belief doggedly in the face of overwhelming contrary evidence.

Other scholars have previously argued that reasoning and irrationality are both products of evolution. But they usually assume that the purpose of reasoning is to help an individual arrive at the truth, and that irrationality is a kink in that process, a sort of mental myopia. Gary F. Marcus, for example, a psychology professor at New York University and the author of “Kluge: The Haphazard Construction of the Human Mind,” says distortions in reasoning are unintended side effects of blind evolution. They are a result of the way that the brain, a Rube Goldberg mental contraption, processes memory. People are more likely to remember items they are familiar with, like their own beliefs, rather than those of others.

What is revolutionary about argumentative theory is that it presumes that since reason has a different purpose — to win over an opposing group — flawed reasoning is an adaptation in itself, useful for bolstering debating skills.

Mr. Mercier, a post-doctoral fellow at the University of Pennsylvania, contends that attempts to rid people of biases have failed because reasoning does exactly what it is supposed to do: help win an argument.

“People have been trying to reform something that works perfectly well,” he said, “as if they had decided that hands were made for walking and that everybody should be taught that.”

Think of the American judicial system, in which the prosecutors and defense lawyers each have a mission to construct the strongest possible argument. The belief is that this process will reveal the truth, just as the best idea will triumph in what John Stuart Mill called the “marketplace of ideas.” (…)

Patricia Cohen, writer, journalist, Reason Seen More as Weapon Than Path to Truth, The New York Times, June 14, 2011.

"Imagine, at some point in the past, two of our ancestors who can’t reason. They can’t argue with one another. And basically as soon as they disagree with one another, they’re stuck. They can’t try to convince one another. They are bound to keep not cooperating, for instance, because they can’t find a way to agree with each other. And that’s where reasoning becomes important.
                                 
We know that in the evolutionary history of our species, people collaborated a lot. They collaborated to hunt, they collaborated to gather food, and they collaborated to raise kids. And in order to be able to collaborate effectively, you have to communicate a lot. You have to tell other people what you want them to do, and you have to tell them how you feel about different things.
                                 
But then once people start to communicate, a host of new problems arise. The main problem posed by communication in an evolutionary context is that of deceiving interlocutors. When I am talking to you, if you accept everything I say then it’s going to be fairly easy for me to manipulate you into doing things that you shouldn’t be doing. And as a result, people have a whole suite of mechanisms that are called epistemic vigilance, which they use to evaluate what other people tell them.
                                 
If you tell me something that disagrees with what I already believe, my first reaction is going to be to reject what you’re telling me, because otherwise I could be vulnerable. But then you have a problem. If you tell me something that I disagree with, and I just reject your opinion, then maybe actually you were right and maybe I was wrong, and you have to find a way to convince me. This is where reasoning kicks in. You have an incentive to convince me, so you’re going to start using reasons, and I’m going to have to evaluate these reasons. That’s why we think reasoning evolved. (…)

We predicted that reasoning would work rather poorly when people reason on their own, and that is the case. We predicted that people would reason better when they reason in groups of people who disagree, and that is the case. We predicted that reasoning would have a confirmation bias, and that is the case. (…)

The starting point of our theory was this contrast between all the results showing that reasoning doesn’t work so well and the assumption that reasoning is supposed to help us make better decisions. But this assumption was not based on any evolutionary thinking; it was just an intuition that was probably cultural: in the West, people think that reasoning is a great thing. (…)

What’s important to keep in mind is that reasoning is used in a very technical sense. And sometimes not only laymen, but philosophers, and sometimes psychologists tend to use “reasoning” in an overly broad way, in which basically reasoning can mean anything you do with your mind.

By contrast, the way we use the term “reasoning” is very specific. And we’re only referring to what reasoning is supposed to mean in the first place, when you’re actually processing reasons. Most of the decisions we make, most of the inferences we make, we make without processing reasons. (…) When you’re shopping for cereal at the supermarket, you just grab a box not because you’ve reasoned through all the alternatives, but just because it’s the one you always buy. You’re just doing the same thing. There is no reasoning involved in that decision. (…)

It’s only when you’re considering reasons, reasons to do something, reasons to believe, that you’re reasoning. If you’re just coming up with ideas without reasons for these ideas, then you’re using your intuitions.”

The Argumentative Theory. A Conversation with Hugo Mercier, Edge, 4.27.2011

"Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis.

Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found. (…) p.1

Some of the evidence reviewed here shows not only that reasoning falls short of delivering rational beliefs and rational decisions reliably, but also that, in a variety of cases, it may even be detrimental to rationality. Reasoning can lead to poor outcomes not because humans are bad at it but because they systematically look for arguments to justify their beliefs or their actions. The argumentative theory, however, puts such well-known demonstrations of “irrationality” in a novel perspective. Human reasoning is not a profoundly flawed general mechanism; it is a remarkably efficient specialized device adapted to a certain type of social and cognitive interaction at which it excels. (…)

People are good at assessing arguments and are quite able to do so in an unbiased way, provided they have no particular axe to grind. In group reasoning experiments where participants share an interest in discovering the right answer, it has been shown that truth wins. (…) p.58

What makes [Sherlock] Holmes such a fascinating character is precisely his preternatural turn of mind operating in a world rigged by Conan Doyle, where what should be inductive problems in fact have deductive solutions. More realistically, individuals may develop some limited ability to distance themselves from their own opinion, to consider alternatives and thereby become more objective. Presumably this is what the 10% or so of people who pass the standard Wason selection task do. But this is an acquired skill and involves exercising some imperfect control over a natural disposition that spontaneously pulls in a different direction. (…)” p. 60

Hugo Mercier, postdoc in the Philosophy, Politics and Economics program at the University of Pennsylvania, and Dan Sperber, French social and cognitive scientist, Why do humans reason? Arguments for an argumentative theory, (pdf) Cambridge University Press 2011, published in Behavioral and Brain Sciences (Illustration source)
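
The “standard Wason selection task” mentioned above is the card puzzle in which a conditional rule has to be tested; the roughly 10% who pass are those who check the potentially falsifying cards. The sketch below is the textbook version of the task, added here as an editorial aside, not anything taken from Mercier and Sperber’s paper.

    # Textbook Wason selection task (illustration only, not from the paper).
    # Rule: "if a card has a vowel on one side, it has an even number on the other."
    # Visible faces: 'E', 'K', '4', '7'. Which cards must be turned to test the rule?
    # Only a card that could pair a vowel with an odd number can falsify the rule,
    # so the answer is 'E' (P) and '7' (not-Q); most people wrongly pick '4' instead.

    VOWELS = set("AEIOU")

    def could_falsify(visible_face: str) -> bool:
        if visible_face.isalpha():
            return visible_face in VOWELS      # a vowel might hide an odd number
        return int(visible_face) % 2 == 1      # an odd number might hide a vowel

    print([card for card in ("E", "K", "4", "7") if could_falsify(card)])  # ['E', '7']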

See also:

☞ Dan Sperber, Hugo Mercier, Reasoning as a Social Competence (pdf), in Collective Wisdom, Landemore, H. and Elster, J. (Eds.)
☞ Hugo Mercier, On the Universality of Argumentative Reasoning, Journal of Cognition and Culture, Vol. 11, pp. 85–113, 2011

Jun
3rd
Fri
permalink

Why people believe in strange things


"Science is founded on the conviction that experience, effort, and reason are valid; magic on the belief that hope cannot fail nor desire deceive." Bronislaw Malinowski, Magic, Science, and Religion, 1948

“Aristotle maintained that women have fewer teeth than men; although he was twice married, it never occurred to him to verify this statement by examining his wives’ mouths.” Bertrand Russell, British philosopher, logician, mathematician, historian, and social critic (1872-1970), The Impact of Science on Society, 1952

"According to a 2009 Harris Poll of 2,303 adult Americans, when people are asked to “Please indicate for each one if you believe in it, or not,” the following results were revealing:

 82% believe in God
 76% believe in miracles
 75% believe in Heaven
 73% believe that Jesus is God or the Son of God
 72% believe in angels
 71% believe in survival of the soul after death
 70% believe in the resurrection of Jesus Christ
 61% believe in hell
 61% believe in the virgin birth (of Jesus)
 60% believe in the devil
 45% believe in Darwin’s Theory of Evolution
 42% believe in ghosts
 40% believe in creationism
 32% believe in UFOs
 26% believe in astrology
 23% believe in witches
 20% believe in reincarnation

More people believe in angels and the devil than believe in the theory of evolution.”
— GALLUP, Paranormal Beliefs Come (Super) Naturally to Some

See also:
Evolution, Creationism, Intelligent Design (Gallup statistics)
Evolution, the Muslim world & religious beliefs (statistics), Discovery Magazine, 2009

"Belief in pseudoscience, including astrology, extrasensory perception (ESP), and alien abductions, is relatively widespread and growing. For example, in response to the 2001 NSF survey, a sizable minority (41 percent) of the public said that astrology was at least somewhat scientific, and a solid majority (60 percent) agreed with the statement “some people possess psychic powers or ESP.” Gallup polls show substantial gains in almost every category of pseudoscience during the past decade. Such beliefs may sometimes be fueled by the media’s miscommunication of science and the scientific process."

— National Science Foundation. 2002. Science Indicators Biennial Report. The section on pseudoscience, “Science Fiction and Pseudoscience,” is in Chapter 7

"70% of Americans still do not understand the scientific process, defined in the NSF study as grasping probability, the experimental method, and hypothesis testing. (…)

Belief change comes from a combination of personal psychological readiness and a deeper social and cultural shift in the underlying zeitgeist of the times, which is affected in part by education, but is more the product of larger and harder-to-define political, economic, religious, and social changes.”

Michael Shermer, The Believing Brain, Times Books, 2011

Michael Shermer: The Believing Brain

"In The Believing Brain, Michael Shermer argues that "belief-dependent realism" makes it hard for any of us to have an objective view of the world (…)

Philosophers of science have long argued that our theories, or beliefs, are the lenses through which we see the world, making it difficult for us to access an objective reality.

So where do our beliefs come from? In The Believing Brain Shermer argues that they are derived from “patternicity”, our propensity to see patterns in noise, real or imagined; and “agenticity”, our tendency to attribute a mind and intentions to that pattern. These evolved skills - which saved our ancestors who assumed, say, a rustling in the bushes was a predator intending to eat them - are the same attributes that lead us to believe in ghosts, conspiracies and gods.

In fact, neuroimaging studies have shown that, at the level of the brain, belief in a virgin birth or a UFO is no different than belief that two plus two equals four or that Barack Obama is president of the US. “We can no more eliminate superstitious learning than we can eliminate all learning,” writes Shermer. "People believe weird things because of our evolved need to believe non-weird things." (…)

As for our quest for objective reality, Shermer argues that science is our greatest hope. By requiring replicable data and peer review, science, he says, is the only process of knowledge-gathering that can go beyond our individual lenses of belief.”

— Amanda Gefter writing about Michael Shermer, The prison of our beliefs and how to escape it, NewScientist, 1 June 2011.

Children are born with the ability to perceive cause-effect relations. Our brains are natural machines for piecing together events that may be related and for solving problems that require our attention. We can envision an ancient hominid from Africa chipping and grinding and shaping a rock into a sharp tool for carving up a large mammalian carcass. Or perhaps we can imagine the first individual who discovered that knocking flint would create a spark that would light a fire. The wheel, the lever, the bow and arrow, the plow—inventions intended to allow us to shape our environment rather than be shaped by it—started us down a path that led to our modern scientific and technological world.

On the most basic level, we must think to remain alive. To think is the most essential human characteristic. Over three centuries ago, the French mathematician and philosopher Rene Descartes, after one of the most thorough and skeptical purges in intellectual history, concluded that he knew one thing for certain: "Cogito ergo sum—I think therefore I am." But to be human is to think. To reverse Descartes, "Sum ergo cogito—I am therefore I think." (…)

Michael Shermer, American science writer, historian of science, Why People Believe Weird Things, Henry Holt and Company, New York, 2002, p. 23. 

Michael Shermer: Why people believe strange things | TED



Why do people see the Virgin Mary on cheese sandwiches or hear demonic lyrics in “Stairway to Heaven”? Using video, images and music, Michael Shermer explores these and other phenomena, including UFOs and alien sightings. He offers cognitive context: In the absence of sound science, incomplete information can combine with the power of suggestion (helping us hear those Satanic lyrics in Led Zeppelin). In fact, he says, humans tend to convince ourselves to believe: We overvalue the ‘hits’.”

Michael Shermer, American science writer, historian of science, Why people believe strange things, TED.com

Michael Shermer: The pattern behind self-deception | TED

In this video Michael Shermer says the human tendency to believe strange things — from alien abductions to dowsing rods — boils down to two of the brain’s most basic, hard-wired survival skills. He explains what they are, and how they get us into trouble. Michael Shermer debunks myths, superstitions and urban legends, and explains why we believe them. (TED.com, Feb 2010)

Why do we believe in God? We are religious because we are paranoid | Psychology Today

“Error Management Theory suggests that, in your inference, you can make a “Type I” error of false positive or “Type II” error of false negative, and these two types of error carry vastly different consequences and costs. The cost of a false-positive error is that you become paranoid. You are always looking around and behind your back for predators and enemies that don’t exist. The cost of a false-negative error is that you are dead, being killed by a predator or an enemy when you least expect them. Obviously, it’s better to be paranoid than dead, so evolution should have designed a mind that overinfers personal, animate, and intentional forces even when none exist.


Different theorists call this innate human tendency to commit false-positive errors rather than false-negative errors (and as a consequence be a bit paranoid) “animistic bias” or “the agency-detector mechanism.” These theorists argue that the evolutionary origins of religious beliefs in supernatural forces may have come from such an innate cognitive bias to commit false-positive errors rather than false-negative errors, and thus overinfer personal, intentional, and animate forces behind otherwise perfectly natural phenomena. (…)

In this view, religiosity (the human capacity for belief in supernatural beings) is not an evolved tendency per se; after all, religion in itself is not adaptive. It is instead a byproduct of animistic bias or the agency-detector mechanism, the tendency to be paranoid, which is adaptive because it can save your life. Humans did not evolve to be religious; they evolved to be paranoid. And humans are religious because they are paranoid.”

Satoshi Kanazawa, evolutionary psychologist at the London School of Economics, Why do we believe in God?, Psychology Today, March 28, 2008. (More). ☞ See also: Martie G. Haselton and David M. Buss, Error Management Theory: A New Perspective on Biases in Cross-Sex Mind Reading, University of Texas at Austin (pdf)
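
A minimal way to make the cost asymmetry concrete is to compare the expected cost of ignoring a cue with the expected cost of reacting to it. The sketch below uses invented cost numbers purely for illustration; it is an editorial aside, not taken from Haselton and Buss or from Kanazawa.

    # Illustrative sketch of the Error Management logic (hypothetical numbers).
    # A detector must decide "predator present?" from a noisy cue. The false
    # negative (being eaten) is assumed to cost far more than the false
    # positive (wasted vigilance), so the cost-minimizing rule is "paranoid".

    COST_FALSE_POSITIVE = 1      # wasted vigilance (assumed value)
    COST_FALSE_NEGATIVE = 1000   # being killed by the predator (assumed value)

    def should_assume_predator(p_predator: float) -> bool:
        """React whenever the expected cost of ignoring the cue exceeds
        the expected cost of reacting to it."""
        expected_cost_of_ignoring = p_predator * COST_FALSE_NEGATIVE
        expected_cost_of_reacting = (1 - p_predator) * COST_FALSE_POSITIVE
        return expected_cost_of_ignoring > expected_cost_of_reacting

    print(should_assume_predator(0.01))  # True: even a 1% chance triggers reaction

With these assumed costs the break-even probability is 1/1001, so the cost-minimizing detector treats almost every rustle in the grass as an agent, which is the overinference described above.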

A Cross-National Test of the Uncertainty Hypothesis of Religious Belief

"According to the uncertainty hypothesis, religion helps people cope psychologically with dangerous or unpredictable situations. Conversely, with greater control over the external environment due to economic development and technological advances, religious belief is predicted to decline (the existential security hypothesis). The author predicts that religious belief would decline in economically developed countries where there is greater existential security, including income security (income equality and redistribution via welfare states) and improved health.

These predictions are tested in regression analyses of 137 countries that partialed out the effects of Communism and Islamic religion, both of which affect the incidence of reported nonbelief. Findings show that disbelief in God increased with economic development (measured by lower agricultural employment and third-level enrollment). Findings further show that disbelief also increased with income security (low Gini coefficient, high personal taxation tapping the welfare state) and with health security (low pathogen prevalence). Results show that religious belief declines as existential security increases, consistent with the uncertainty hypothesis.”

Nigel Barber, A Cross-National Test of the Uncertainty Hypothesis of Religious Belief, 2011
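
As a rough, purely illustrative sketch of what “partialling out” means in an analysis like this: the controls (here, Communist history and Islamic majority) are simply entered as additional regressors, so the coefficients on the development and security variables are estimated net of them. The variable names and the synthetic data below are hypothetical and do not reproduce Barber’s dataset or model specification.

    # Hypothetical sketch of a cross-national regression with controls.
    # Synthetic data; variable names are invented for illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 137  # number of countries in the study
    df = pd.DataFrame({
        "nonbelief": rng.uniform(0, 60, n),              # % reporting disbelief in God
        "agricultural_employment": rng.uniform(1, 80, n),
        "tertiary_enrollment": rng.uniform(5, 90, n),
        "gini": rng.uniform(20, 60, n),
        "pathogen_prevalence": rng.uniform(0, 1, n),
        "communist_history": rng.integers(0, 2, n),      # control
        "islamic_majority": rng.integers(0, 2, n),       # control
    })

    model = smf.ols(
        "nonbelief ~ agricultural_employment + tertiary_enrollment + gini"
        " + pathogen_prevalence + communist_history + islamic_majority",
        data=df,
    ).fit()
    print(model.summary())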

"As to the distribution of atheism in the world, a clear pattern can be discerned. In sub-Saharan Africa there is almost no atheism (2%). Belief in God declines in more developed countries and atheism is concentrated in Europe in countries such as Sweden (64% nonbelievers), Denmark (48%), France (44%) and Germany (42%). In contrast, the incidence of atheism in most sub-Saharan countries is below 1%. (…)

Anthropologist James Frazer proposed that scientific prediction and control of nature supplants religion as a means of controlling uncertainty in our lives. This hunch is supported by data showing that the more educated countries have higher levels of nonbelief and there are strong correlations between atheism and intelligence. (…)

It seems that people turn to religion as a salve for the difficulties and uncertainties of their lives. In social democracies, there is less fear and uncertainty about the future because social welfare programs provide a safety net and better health care means that fewer people can expect to die young. People who are less vulnerable to the hostile forces of nature feel more in control of their lives and less in need of religion. Hence my finding of belief in God being higher in countries with a heavy load of infectious diseases. (…)”

Nigel Barber, Ph.D. in Biopsychology from Hunter College, CUNY, who has taught psychology at Bemidji State University and Birmingham-Southern College, Why Atheism Will Replace Religion. With economic security, people abandon religion, Psychology Today, July 14, 2011

Why We Don’t Believe In Science


"Gallup announced the results of their latest survey on Americans and evolution. The numbers were a stark blow to high-school science teachers everywhere: forty-six per cent of adults said they believed that “God created humans in their present form within the last 10,000 years.” Only fifteen per cent agreed with the statement that humans had evolved without the guidance of a divine power.

What’s most remarkable about these numbers is their stability: these percentages have remained virtually unchanged since Gallup began asking the question, thirty years ago. (…)

A new study in Cognition, led by Andrew Shtulman at Occidental College, helps explain the stubbornness of our ignorance. As Shtulman notes, people are not blank slates, eager to assimilate the latest experiments into their world view. Rather, we come equipped with all sorts of naïve intuitions about the world, many of which are untrue. For instance, people naturally believe that heat is a kind of substance, and that the sun revolves around the earth. And then there’s the irony of evolution: our views about our own development don’t seem to be evolving.
This means that science education is not simply a matter of learning new theories. Rather, it also requires that students unlearn their instincts, shedding false beliefs the way a snake sheds its old skin. (…)

As expected, it took students much longer to assess the veracity of true scientific statements that cut against our instincts. In every scientific category, from evolution to astronomy to thermodynamics, students paused before agreeing that the earth revolves around the sun, or that pressure produces heat, or that air is composed of matter. Although we know these things are true, we have to push back against our instincts, which leads to a measurable delay.

What’s surprising about these results is that even after we internalize a scientific concept—the vast majority of adults now acknowledge the Copernican truth that the earth is not the center of the universe—that primal belief lingers in the mind. We never fully unlearn our mistaken intuitions about the world. We just learn to ignore them.

Shtulman and colleagues summarize their findings:

When students learn scientific theories that conflict with earlier, naïve theories, what happens to the earlier theories? Our findings suggest that naïve theories are suppressed by scientific theories but not supplanted by them.
(…)

Until we understand why some people believe in science we will never understand why most people don’t.

In a 2003 study, Kevin Dunbar, a psychologist at the University of Maryland, showed undergraduates a few short videos of two different-sized balls falling. The first clip showed the two balls falling at the same rate. The second clip showed the larger ball falling at a faster rate. The footage was a reconstruction of the famous (and probably apocryphal) experiment performed by Galileo, in which he dropped cannonballs of different sizes from the Tower of Pisa. Galileo’s metal balls all landed at the exact same time—a refutation of Aristotle, who claimed that heavier objects fell faster.

While the students were watching the footage, Dunbar asked them to select the more accurate representation of gravity. Not surprisingly, undergraduates without a physics background disagreed with Galileo. They found the two balls falling at the same rate to be deeply unrealistic. (Intuitively, we’re all Aristotelians.)
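
The physics behind the “correct” clip is easy to state: ignoring air resistance, a dropped ball reaches the ground after t = sqrt(2h/g), and mass appears nowhere in that expression. The small sketch below, with an illustrative tower height, makes the point; it is an editorial aside, not part of Lehrer’s article or Dunbar’s study.

    # Fall time from height h under gravity, ignoring air resistance: t = sqrt(2h/g).
    # Mass does not enter the formula, so heavy and light balls land together.
    import math

    G = 9.81  # m/s^2

    def fall_time_seconds(height_m: float) -> float:
        return math.sqrt(2 * height_m / G)

    TOWER_HEIGHT_M = 55.0  # rough height, used purely for illustration
    for mass_kg in (1.0, 10.0):
        print(f"{mass_kg:>4} kg ball: {fall_time_seconds(TOWER_HEIGHT_M):.2f} s")
    # Both lines print the same ~3.35 s, whatever the mass.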

Furthermore, when Dunbar monitored the subjects in an fMRI machine, he found that showing non-physics majors the correct video triggered a particular pattern of brain activation: there was a squirt of blood to the anterior cingulate cortex, a collar of tissue located in the center of the brain. The A.C.C. is typically associated with the perception of errors and contradictions—neuroscientists often refer to it as part of the “Oh shit!” circuit—so it makes sense that it would be turned on when we watch a video of something that seems wrong, even if it’s right.

This data isn’t shocking; we already know that most undergrads lack a basic understanding of science. But Dunbar also conducted the experiment with physics majors. As expected, their education enabled them to identify the error; they knew Galileo’s version was correct.

But it turned out that something interesting was happening inside their brains that allowed them to hold this belief. When they saw the scientifically correct video, blood flow increased to a part of the brain called the dorsolateral prefrontal cortex, or D.L.P.F.C. The D.L.P.F.C. is located just behind the forehead and is one of the last brain areas to develop in young adults. It plays a crucial role in suppressing so-called unwanted representations, getting rid of those thoughts that aren’t helpful or useful. If you don’t want to think about the ice cream in the freezer, or need to focus on some tedious task, your D.L.P.F.C. is probably hard at work.

According to Dunbar, the reason the physics majors had to recruit the D.L.P.F.C. is because they were busy suppressing their intuitions, resisting the allure of Aristotle’s error. It would be so much more convenient if the laws of physics lined up with our naïve beliefs—or if evolution was wrong and living things didn’t evolve through random mutation. But reality is not a mirror; science is full of awkward facts. And this is why believing in the right version of things takes work.

Of course, that extra mental labor isn’t always pleasant. (There’s a reason they call it “cognitive dissonance.”) It took a few hundred years for the Copernican revolution to go mainstream. At the present rate, the Darwinian revolution, at least in America, will take just as long.”

Jonah Lehrer, Why We Don’t Believe In Science, The New Yorker, June 7, 2012. (Illustration courtesy of Hulton Archive/Getty Images.)

See also: ☞ A. Shtulman, J. Valcarcel , Scientific knowledge suppresses but does not supplant earlier intuitions (pdf), Department of Psychology, Occidental College, 2012.

[This note will be gradually expanded…]

See also:

☞ D. Kapogiannis, A. K. Barbey, M. Su, G. Zambon, F. Krueger, J. Grafman, Cognitive and neural foundations of religious belief, Washington University School of Medicine, 2009 
☞ D. Kapogiannis, A. K. Barbey, M. Su, F. Krueger, J. Grafman, Neuroanatomical Variability of Religiosity, National Institutes of Health, National Institute of Neurological Disorders and Stroke (NINDS), USA, Department of Psychology, Georgetown University, Washington, D. C., 2009
Is This Your Brain On God? (visualization), NPR
Andy Thomson, Why We Believe in Gods, Atlanta, Georgia 2009 (video lecture)
☞ Dr. Andy Thomson, "Why We Believe in God(s)", The Triangle Freethought Society, May 16th, 2011 (video lecture)
Jared Diamond, The Evolution of Religions, 2009 (video lecture)
Dan Dennett, A Darwinian Perspective on Religions: Past, Present and Future, (video lecture)
☞ Jesse Bering, We are programmed to believe in a god, Guardian, 4 January 2011 
☞ Michael Brooks, Born believers: How your brain creates God , New Scientist, 4 Feb 2009
Dan Ariely, We’re All Predictably Irrational, FORA.tv
The Believing Brain: Why Science Is the Only Way Out of Belief-Dependent Realism, Scientific American, July 5, 2011
'The Cognition, Religion and Theology Project' - Summary led by Dr Justin Barrett, from the Centre for Anthropology and Mind at Oxford University, trying to understand the underpinnings of religious thought and practice through application of the cognitive sciences, 2011
☞ Robert Bellah, The Roots of Religion. Where did religion come from? Robert Bellah ponders its evolutionary origins, Big Questions, Oct 3, 2011
☞ S. Pinker, Scott Atran and others on Where God and Science Meet. How Brain and Evolutionary Studies Alter Our Understanding of Religion Edited by Patrick McNamara (pdf)
Biologist E.O. Wilson on Why Humans, Like Ants, Need a Tribe, Newsweek Magazine, Apr 2, 2012
Pareidolia — a psychological phenomenon involving a vague and random stimulus (often an image or sound) being perceived as significant. Common examples include seeing images of animals or faces in clouds, the man in the moon or the Moon rabbit, and hearing hidden messages on records played in reverse. The word comes from the Greek para- – “beside”, “with”, or “alongside”—meaning, in this context, something faulty or wrong (as in paraphasia, disordered speech) and eidōlon – “image”; the diminutive of eidos – “image”, “form”, “shape”. Pareidolia is a type of apophenia. (Wiki)
Visions For All. People who report vivid religious experiences may hold clues to nonpsychotic hallucinations, Science News, Apr 7, 2012.
Political science: why rejecting expertise has become a campaign strategy, Lapidarium notes
The whys of religion vs. evolution. Evolutionary biologist Jerry Coyne examines why Americans often choose faith over scientific findings, Harvard University Gazette, May 8, 2012.
In U.S., 46% Hold Creationist View of Human Origins, GALLUP, June 1, 2012.
☞ Daisy Grewal, How Critical Thinkers Lose Their Faith in God, Scientific American, June 1, 2012.
Religion tag on Lapidarium

Jun
2nd
Thu
permalink

David Deutsch: A new way to explain explanation

For tens of thousands of years our ancestors understood the world through myths, and the pace of change was glacial. The rise of scientific understanding transformed the world within a few centuries. Why?

"Before the scientific revolution, they believed that everything important, knowable, was already known, enshrined in ancient writings, institutions, and in some genuinely useful rules of thumb — which were, however, entrenched as dogmas, along with many falsehoods. So they believed that knowledge came from authorities that actually knew very little. And therefore progress depended on learning how to reject the authority of learned men, priests, traditions and rulers. Which is why the scientific revolution had to have a wider context: the Enlightenment, a revolution in how people sought knowledge, trying not to rely on authority. "Take no one’s word for it." (…)

What creationists and empiricists both ignore is that, in that sense, no one has ever seen a bible either, that the eye only detects light, which we don’t perceive. Brains only detect nerve impulses. And they don’t perceive even those as what they really are, namely electrical crackles. So we perceive nothing as what it really is.

Our connection to reality is never just perception. It’s always, as Karl Popper put it, theory-laden. Scientific knowledge isn’t derived from anything. It’s like all knowledge. It’s conjectural, guesswork, tested by observation, not derived from it. So, were testable conjectures the great innovation that opened the intellectual prison gates? No. Contrary to what’s usually said, testability is common, in myths and all sorts of other irrational modes of thinking. Any crank claiming the sun will go out next Tuesday has got a testable prediction. (…)

This easy variability is the sign of a bad explanation. Because, without a functional reason to prefer one of countless variants, advocating one of them, in preference to the others, is irrational. So, for the essence of what makes the difference to enable progress, seek good explanations, the ones that can’t be easily varied, while still explaining the phenomena.

Now, our current explanation of seasons is that the Earth’s axis is tilted like that, so each hemisphere tilts toward the sun for half the year, and away for the other half. Better put that up. (Laughter) That’s a good explanation: hard to vary, because every detail plays a functional role. For instance, we know, independently of seasons, that surfaces tilted away from radiant heat are heated less, and that a spinning sphere, in space, points in a constant direction. And the tilt also explains the sun’s angle of elevation at different times of year, and predicts that the seasons will be out of phase in the two hemispheres. If they’d been observed in phase, the theory would have been refuted. But now, the fact that it’s also a good explanation, hard to vary, makes the crucial difference.

If the ancient Greeks had found out about seasons in Australia, they could have easily varied their myth to predict that. For instance, when Demeter is upset, she banishes heat from her vicinity, into the other hemisphere, where it makes summer. So, being proved wrong by observation, and changing their theory accordingly, still wouldn’t have got the ancient Greeks one jot closer to understanding seasons, because their explanation was bad: easy to vary. And it’s only when an explanation is good that it even matters whether it’s testable. If the axis-tilt theory had been refuted, its defenders would have had nowhere to go. No easily implemented change could make that tilt cause the same seasons in both hemispheres.
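
One way to see the “functional role” every detail plays is the standard noon-elevation approximation: at solar noon the sun stands roughly 90° minus the absolute difference between latitude and the solar declination, and the declination swings between about +23.4° and −23.4° over the year precisely because of the axial tilt. The sketch below uses that textbook approximation as an editorial illustration; it is not part of Deutsch’s talk.

    # Noon solar elevation from the textbook approximation
    #   elevation ~ 90 - |latitude - declination|,
    # with declination ~ +23.4 deg at the June solstice and ~ -23.4 deg in December.
    TILT_DEG = 23.4  # Earth's axial tilt

    def noon_elevation(latitude_deg: float, declination_deg: float) -> float:
        return 90.0 - abs(latitude_deg - declination_deg)

    for latitude in (45.0, -45.0):  # one northern and one southern latitude
        june = noon_elevation(latitude, +TILT_DEG)
        december = noon_elevation(latitude, -TILT_DEG)
        print(f"latitude {latitude:+.0f}: June noon sun {june:.1f} deg, December {december:.1f} deg")
    # The two hemispheres come out exactly out of phase, as the tilt theory predicts.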

The search for hard-to-vary explanations is the origin of all progress. It’s the basic regulating principle of the Enlightenment. So, in science, two false approaches blight progress. One is well known: untestable theories. But the more important one is explanationless theories. Whenever you’re told that some existing statistical trend will continue, but you aren’t given a hard-to-vary account of what causes that trend, you’re being told a wizard did it.

When you are told that carrots have human rights because they share half our genes — but not how gene percentages confer rights — wizard. When someone announces that the nature-nurture debate has been settled because there is evidence that a given percentage of our political opinions are genetically inherited, but they don’t explain how genes cause opinions, they’ve settled nothing. They are saying that our opinions are caused by wizards, and presumably so are their own. That the truth consists of hard-to-vary assertions about reality is the most important fact about the physical world. It’s a fact that is, itself, unseen, yet impossible to vary.

David Deutsch, Israeli-British physicist at the University of Oxford, David Deutsch: A new way to explain explanation, TED.com, July 2009 (tnx WildCat) (transcript)

See also:

David Deutsch on our place in the cosmos, (transcript), TED video

[14:23] “We can survive, and we can fail to survive. But it depends not on chance, but on whether we create the relevant knowledge in time. The danger is not at all unprecedented. Species go extinct all the time. Civilizations end. The overwhelming majority of all species and all civilizations that have ever existed are now history. And if we want to be the exception to that, then logically our only hope is to make use of the one feature that distinguishes our species, and our civilization, from all the others. Namely, our special relationship with the laws of physics. Our ability to create new explanations, new knowledge — to be a hub of existence. (…)

I’m a physicist, but I’m not the right kind of physicist. In regard to global warming, I’m just a layman. And the rational thing for a layman to do is to take seriously the prevailing scientific theory. And according to that theory, it’s already too late to avoid a disaster. Because if it’s true that our best option at the moment is to prevent CO2 emissions with something like the Kyoto Protocol, with its constraints on economic activity and its enormous cost of hundreds of billions of dollars or whatever it is, then that is already a disaster by any reasonable measure. (…)”

Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
Science Is Not About Certainty. Science is about overcoming our own ideas and a continuous challenge of common sense
Why It’s Good To Be Wrong. David Deutsch on Fallibilism, Lapidarium notes

Mar
6th
Sun
permalink

Robert Burton and Jonah Lehrer on the certainty bias


“Once we realize that the brain has very powerful inbuilt involuntary mechanisms for assessing unconscious cognitive activity, it is easy to see how it can send into consciousness a message that we know something that we can’t presently recall—the modest tip-of-the-tongue feeling. At the other end of the spectrum would be the profound “feeling of knowing” that accompanies unconsciously held beliefs—a major component of the unshakeable attachment to fundamentalist beliefs—both religious and otherwise—such as belief in UFOs or false memories.

JONAH LEHRER: Why do you think that the feeling of certainty feels so good?

ROBERT BURTON: Stick brain electrodes in rat pleasure centers (the mesolimbic dopamine system primarily located in the upper brain stem). The rats continuously press the bar, to the exclusion of food and water, until they drop. In humans the same areas are activated with cocaine, amphetamines, alcohol, nicotine and gambling—to mention just a few behaviors to which one can become easily addicted. It is quite likely that the same reward system provides the positive feedback necessary for us to learn and to continue wanting to learn.

The pleasure of a thought is what propels us forward; imagine trying to write a novel or engage in a long-term scientific experiment without getting such rewards. Fortunately, the brain has provided us with a wide variety of subjective feelings of reward ranging from hunches, gut feelings, intuitions, suspicions that we are on the right track to a profound sense of certainty and utter conviction. And yes, these feelings are qualitatively as powerful as those involved in sex and gambling. One need only look at the self-satisfied smugness of a “know it all” to suspect that the feeling of certainty can approach the power of addiction. (…)

If a major brain function is to maintain mental homeostasis, it is understandable how stances of certainty can counteract anxiety and apprehension. Even though I know better, I find myself somewhat reassured (albeit temporarily) by absolute comments such as, “the stock market always recovers,” even when I realize that this may be only wishful thinking. (…)

LEHRER: How can people avoid the certainty bias?

BURTON: I don’t believe that we can avoid certainty bias, but we can mitigate its effect by becoming aware of how our mind assesses itself. As you may know from my book, I’ve taken strong exception to the popular notion that we can rely upon hunches and gut feelings as though they reflect the accuracy of a thought.

My hope is the converse; we need to recognize that the feelings of certainty and conviction are involuntary mental sensations, not logical conclusions. Intuitions, gut feelings and hunches are neither right nor wrong but tentative ideas that must then be submitted to empirical testing. If such testing isn’t possible (such as in deciding whether or not to pull out of Iraq), then we must accept that any absolute stance is merely a personal vision, not a statement of fact.

Perhaps one of my favorite examples of how certainty is often misleading is the great mathematician Srinivasa Ramanujan. At his death, his notebook was filled with theorems that he was certain were correct. Some were subsequently proven correct; others turned out to be dead wrong. Ramanujan’s lines of reasoning led to correct and incorrect answers, but he couldn’t tell the difference. Only the resultant theorems were testable.

In short, please run, do not walk, to the nearest exit when you hear so-called leaders being certain of any particular policy. Only in the absence of certainty can we have open-mindedness, mental flexibility and willingness to contemplate alternative ideas.

Robert Burton, The Certainty Bias: A Potentially Dangerous Mental Flaw, Scientific American, Oct 9, 2008

Jonah Lehrer:

"Why are people so eager for certainty? I think part of the answer is revealed in an interesting Science paper by Colin Camerer and colleagues. His experiment revolved around a decision making game known as the Ellsberg paradox. (…)

With less information to go on, the players exhibited substantially more activity in the amygdala and in the orbitofrontal cortex, which is believed to modulate activity in the amygdala. In other words, we filled in the gaps of our knowledge with fear.

I’d argue that it’s this subtle stab of fear that creates our bias for certainty. Not knowing makes us uneasy, and we always try to minimize such negative feelings. As a result, we pretend that we have better intelligence about Iraqi WMD than we actually do, or we make believe that the subprime debt being bought and sold on Wall Street is really safe. In other words, we selectively interpret the facts until the uncertainty is removed.

Camerer also tested patients with lesioned orbitofrontal cortices. (These patients are unable to generate and detect emotions.) Sure enough, because these patients couldn’t feel fear, their brains treated both decks equally. Their amygdalas weren’t excited by ambiguity, and didn’t lead them astray. Because of their debilitating brain injury, these patients behaved perfectly rationally. They exhibited no bias for certainty.

Obviously, it’s difficult to reduce something as amorphous as “uncertainty” to a few isolated brain regions. But I think Camerer is right to argue that his "data suggests a general neural circuit responding to degrees of uncertainty, contrary to decision theory."

Jonah Lehrer, The Certainty Bias, The Frontal Cortex, Oct 13, 2008

Jan
1st
Sat
permalink

Professor George Lakoff: Reason is 98% Subconscious Metaphor in Frames and Cultural Narratives



Notes: Metaphor and Embodiment

In Metaphors We Live By, George Lakoff, a linguist, and Mark Johnson, a philosopher, suggest that metaphors not only make our thoughts more vivid and interesting but that they actually structure our perceptions and understanding.

"We are neural beings, (…) our brains take their input from the rest of our bodies. What our bodies are like and how they function in the world thus structures the very concepts we can use to think. We cannot think just anything - only what our embodied brains permit. (…) The Mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical.”

Philosophy In The Flesh” - A talk with George Lakoff, EDGE 3rd Culture, 3.9.1999

"We think with our brains. There is no other choice. Thought is physical. Ideas and the concepts that make them up are physically “computed” by brain structures. Reasoning is the activation of certain neuronal groups in the brain given prior activation of other neuronal groups. Everything we know, we know by virtue of our brains. Our physical brains make possible our concepts and ideas; everything we can possibly think is made possible and greatly limited by the nature of our brains. (…)

Each neuron has connections to between 1,000 and 10,000 other neurons. (…) The flow of neural activity is a flow of ions that occurs across synapses – tiny gaps between neurons. Those synapses where there is a lot of activity are “strengthened” – both the transmitting and receiving side of active synapses become more efficient. Flow across the synapses is relatively slow compared to the speed of computers: about five one-thousandths of a second (5 milliseconds) per synapse. A word recognition task – Is the following word a word of English? – takes about half a second (500 milliseconds). This means that word recognition must be done in about 100 sequential steps. Since so much goes into word recognition, it is clear that much of the brain’s processing must be in parallel, not in sequence. This timing result also shows that well-learned tasks are carried out by direct connections. There is no intervening mentalese.”

— George Lakoff in Raymond W. Gibbs, Handbook of Metaphor and Thought, Chapter I: The Neural Theory of Metaphor, Cambridge University Press 2008, p. 18.
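
The “about 100 sequential steps” figure follows directly from the two timings in the passage: roughly 500 ms for the whole task divided by roughly 5 ms per synaptic step. A trivial sketch of the arithmetic, as an editorial aside rather than anything of Lakoff’s:

    # Back-of-the-envelope version of the timing argument quoted above:
    # ~5 ms per synaptic step and ~500 ms for word recognition leave room
    # for only ~100 strictly sequential steps, so the rest must run in parallel.
    MS_PER_SYNAPTIC_STEP = 5
    WORD_RECOGNITION_MS = 500

    max_sequential_steps = WORD_RECOGNITION_MS // MS_PER_SYNAPTIC_STEP
    print(max_sequential_steps)  # 100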

"Primary metaphorical thought arises when a neural circuit is formed linking two brain areas activated when experiences occur together repeatedly. Typically, one of the experiences is physical. In each experiment, each subject has the physical experience activating one of the brain regions and another experience (e.g., emotional or temporal) activating the other brain region for the given metaphor. The activation of both regions activates the metaphorical link. Thus, if the metaphor is Future Is Ahead and Past Is Behind, thinking about the future will activate the brain region for moving forward. If the metaphor is Affection is Warmth, holding warm coffee will activate the brain region for experiencing affection.”

George Lakoff, Why “Rational Reason” Doesn’t Work in Contemporary Politics, BuzzFlash.org, Feb 21, 2010

Metaphor as “imaginative rationality”

“Many of our activities (arguing, solving problems, budgeting time, etc.) are metaphorical in nature. The metaphorical concepts that characterize those activities structure our present reality. New metaphors have the power to create a new reality. This can begin to happen when we start to comprehend our experience in terms of a metaphor, and it becomes a deeper reality when we begin to act in terms of it. If a new metaphor enters the conceptual system that we base our actions on, it will alter that conceptual system and the perceptions and actions that the system gives rise to. Much of cultural change arises from the introduction of new metaphorical concepts and the loss of old ones. For example, the Westernization of cultures throughout the world is partly a matter of introducing the TIME IS MONEY metaphor into those cultures. (…)

It is reasonable enough to assume that words alone don’t change reality. But changes in our conceptual system do change what is real for us and affect how we perceive the world and act upon those perceptions.

The idea that metaphor is just a matter of language and can at best only describe reality stems from the view that what is real is wholly external to, and independent of, how human beings conceptualize the world—as if the study of reality were just the study of the physical world. Such a view of reality—so-called objective reality— leaves out human aspects of reality, in particular the real perceptions, conceptualizations, motivations, and actions that constitute most of what we experience. But the human aspects of reality are most of what matters to us, and these vary from culture to culture, since different cultures have different conceptual systems.

The reason we have focused so much on metaphor is that it unites reason and imagination. Reason, at the very least, involves categorization, entailment, and inference. Imagination, in one of its many aspects, involves seeing one kind of thing in terms of another kind of thing—what we have called metaphorical thought. Metaphor is thus imaginative rationality. Since the categories of our everyday thought are largely metaphorical and our everyday reasoning involves metaphorical entailments and inferences, ordinary rationality is therefore imaginative by its very nature. Given our understanding of poetic metaphor in terms of metaphorical entailments and inferences, we can see that the products of the poetic imagination are, for the same reason, partially rational in nature.

Metaphor is one of our most important tools for trying to comprehend partially what cannot be comprehended totally: our feelings, aesthetic experiences, moral practices, and spiritual awareness. These endeavors of the imagination are not devoid of rationality; since they use metaphor, they employ an imaginative rationality.

An experientialist approach also allows us to bridge the gap between the objectivist and subjectivist myths about impartiality and the possibility of being fair and objective. (…) Truth is relative to understanding, which means that there is no absolute standpoint from which to obtain absolute objective truths about the world. This does not mean that there are no truths; it means only that truth is relative to our conceptual system, which is grounded in, and constantly tested by, our experiences and those of other members of our culture in our daily interactions with other people and with our physical and cultural environments.”

George Lakoff & Mark Johnson, Metaphors We Live By

See also:

George Lakoff on metaphors, explanatory journalism and the ‘Real Rationality’
James Geary, metaphorically speaking, TED.com, Dec 2009
☞ Paul H. Thibodeau, Lera Boroditsky, Metaphors We Think With: The Role of Metaphor in Reasoning, Department of Psychology, Stanford University, Stanford, California, USA
☞ Bruce Hood, The Self Illusion: How the Brain Creates Identity, May, 2012

Feb
16th
Tue
permalink

“The dominant organ of sensory and social orientation in pre-alphabet societies was the ear – ‘hearing was believing.’ The phonetic alphabet forced the magic world of the ear to yield to the neutral world of the eye. Man was given an eye for an ear.

Western history was shaped for some three thousand years by the introduction of the phonetic alphabet, a medium that depends solely on the eye for comprehension. The alphabet is a construct of fragmented bits and parts which have no semantic meaning in themselves, and which must be strung together in a line, bead-like, and a prescribed order. Its use fostered and encouraged the habit of perceiving all environment in visual and spatial terms- particularly in terms of a space and of a time that are uniform,

c-o-n-t-i-n-u-o-u-s

and

c-o-n-n-e-c-t-e-d.

The line, the continuum - this sentence is a prime example- become the organizing principle of life. “As we begin, so shall we go.” “Rationality” and logic came to depend on the presentation of connected and sequential facts or concepts.

For many people rationality has the connotation of uniformity and connectiveness. “I don’t follow you” means “I don’t think what you’re saying is rational.”

Visual space is uniform, continuous, and connected. The rational man in our Western culture is a visual man. The fact that most conscious experience has little “visuality” in it is lost on him. Rationality and visuality have long been interchangeable terms, but we do not live in a primarily visual world any more.

The fragmenting of activities, our habits of thinking in bits and parts – “specialism” – reflected the step-by-step linear departmentalizing process inherent in the technology of the alphabet.”

“The eye – it cannot choose but see;

we cannot bid the ear be still;

our bodies feel, where’er they be,

against or with our will.”

- Wordsworth

Marshall McLuhan, The Medium is the Massage, Gingko Press, 2001 p. 44-45

Feb
8th
Mon
permalink

William Blake’s Newton (1795)

"Blake criticized Newton and like-minded philosophers such as Locke and Bacon for relying solely on reason. Blake’s 1795 print "Newton" is a demonstration of his opposition to the "single-vision" of scientific materialism: the great philosopher-scientist is shown utterly isolated in the depths of the ocean, his eyes (only one of which is visible) fixed on the compasses with which he draws on a scroll. His concentration is so fierce that he seems almost to become part of the rocks upon which he sits."