Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization


Homepage
Twitter
Facebook

A Box Of Stories
Reading Space

Contact

Archive

Jul 1st, Mon

Why It’s Good To Be Wrong. David Deutsch on Fallibilism


"That human beings can be mistaken in anything they think or do is a proposition known as fallibilism. (…)

The trouble is that error is a subject where issues such as logical paradox, self-reference, and the inherent limits of reason rear their ugly heads in practical situations, and bite.

Paradoxes seem to appear when one considers the implications of one’s own fallibility: A fallibilist cannot claim to be infallible even about fallibilism itself. And so, one is forced to doubt that fallibilism is universally true. Which is the same as wondering whether one might be somehow infallible—at least about some things. For instance, can it be true that absolutely anything that you think is true, no matter how certain you are, might be false?

What? How might we be mistaken that two plus two is four? Or about other matters of pure logic? That stubbing one’s toe hurts? That there is a force of gravity pulling us to earth? Or that, as the philosopher René Descartes argued, “I think, therefore I am”?

When fallibilism starts to seem paradoxical, the mistakes begin. We are inclined to seek foundations—solid ground in the vast quicksand of human opinion—on which one can try to base everything else. Throughout the ages, the false authority of experience and the false reassurance of probability have been mistaken for such foundations: “No, we’re not always right,” your parents tell you, “just usually.” They have been on earth longer and think they have seen this situation before. But since that is an argument for “therefore you should always do as we say,” it is functionally a claim of infallibility after all. Moreover, look more closely: It claims literal infallibility too. Can anyone be infallibly right about the probability that they are right? (…)

The fact is, there’s nothing infallible about “direct experience” (…). Indeed, experience is never direct. It is a sort of virtual reality, created by our brains using sketchy and flawed sensory clues, given substance only by fallible expectations, explanations, and interpretations. Those can easily be more mistaken than the testimony of the passing hobo. If you doubt this, look at the work of psychologists Christopher Chabris and Daniel Simons, and verify by direct experience the fallibility of your own direct experience. Furthermore, the idea that your reminiscences are infallible is also heresy by the very doctrine that you are faithful to. (…)

I’ll tell you what really happened. You witnessed a dress rehearsal. The real ex cathedra ceremony was on the following day. In order not to make the declaration a day early, they substituted for the real text (which was about some arcane theological issue, not gravity) a lorem-ipsum-type placeholder that they deemed so absurd that any serious listener would immediately realize that that’s what it was. 

And indeed, you did realize this; and as a result, you reinterpreted your “direct experience,” which was identical to that of witnessing an ex cathedra declaration, as not being one. Precisely by reasoning that the content of the declaration was absurd, you concluded that you didn’t have to believe it. Which is also what you would have done if you hadn’t believed the infallibility doctrine.

You remain a believer, serious about giving your faith absolute priority over your own “unaided” reason (as reason is called in these contexts). But that very seriousness has forced you to decide first on the substance of the issue, using reason, and only then whether to defer to the infallible authority. This is neither fluke nor paradox. It is simply that if you take ideas seriously, there is no escape, even in dogma and faith, from the obligation to use reason and to give it priority over dogma, faith, and obedience. (…)

It is hard to contain reason within bounds. If you take your faith sufficiently seriously you may realize that it is not only the printers who are fallible in stating the rules for ex cathedra, but also the committee that wrote down those rules. And then that nothing can infallibly tell you what is infallible, nor what is probable. It is precisely because you, being fallible and having no infallible access to the infallible authority, no infallible way of interpreting what the authority means, and no infallible means of identifying an infallible authority in the first place, that infallibility cannot help you before reason has had its say. 

A related useful thing that faith tells you, if you take it seriously enough, is that the great majority of people who believe something on faith, in fact, believe falsehoods. Hence, faith is insufficient for true belief. As the Nobel Prize-winning biologist Peter Medawar said: “The intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not.”

You know that Medawar’s advice holds for all ideas, not just scientific ones, and, by the same argument, for all the other diverse things that are held up as infallible (or probable) touchstones of truth: holy books; the evidence of the senses; statements about who is probably right; even true love. (…)

This logic of fallibility, discovered and rediscovered from time to time, has had profound salutary effects in the history of ideas. Whenever anything demands blind obedience, its ideology contains a claim of infallibility somewhere; but wherever someone believes seriously enough in that infallibility, they rediscover the need for reason to identify and correctly interpret the infallible source. Thus the sages of ancient Judaism were led, by the assumption of the Bible’s infallibility, to develop their tradition of critical discussion. And in an apparently remote application of the same logic, the British constitutional doctrine of “parliamentary sovereignty” was used by 20th-century judges such as Lord Denning to develop an institution of judicial review similar to that which, in the United States, had grown out of the opposite doctrine of “separation of powers.”

Fallibilism has practical consequences for the methodology and administration of science, and in government, law, education, and every aspect of public life. The philosopher Karl Popper elaborated on many of these. He wrote:

The question about the sources of our knowledge … has always been asked in the spirit of: ‘What are the best sources of our knowledge—the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist—no more than ideal rulers—and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’

It’s all about error. We used to think that there was a way to organize ourselves that would minimize errors. This is an infallibilist chimera that has been part of every tyranny since time immemorial, from the “divine right of kings” to centralized economic planning. And it is implemented by many patterns of thought that protect misconceptions in individual minds, making someone blind to evidence that he isn’t Napoleon, or making the scientific crank reinterpret peer review as a conspiracy to keep falsehoods in place. (…)

Popper’s answer is: We can hope to detect and eliminate error if we set up traditions of criticism—substantive criticism, directed at the content of ideas, not their sources, and directed at whether they solve the problems that they purport to solve. Here is another apparent paradox, for a tradition is a set of ideas that stay the same, while criticism is an attempt to change ideas. But there is no contradiction. Our systems of checks and balances are steeped in traditions—such as freedom of speech and of the press, elections, and parliamentary procedures, the values behind concepts of contract and of tort—that survive not because they are deferred to but precisely because they are not: They themselves are continually criticized, and either survive criticism (which allows them to be adopted without deference) or are improved (for example, when the franchise is extended, or slavery abolished). Democracy, in this conception, is not a system for enforcing obedience to the authority of the majority. In the bigger picture, it is a mechanism for promoting the creation of consent, by creating objectively better ideas, by eliminating errors from existing ones.

“Our whole problem,” said the physicist John Wheeler, “is to make the mistakes as fast as possible.” This liberating thought is more obviously true in theoretical physics than in situations where mistakes hurt. A mistake in a military operation, or a surgical operation, can kill. But that only means that whenever possible we should make the mistakes in theory, or in the laboratory; we should “let our theories die in our place,” as Popper put it. But when the enemy is at the gates, or the patient is dying, one cannot confine oneself to theory. We should abjure the traditional totalitarian assumption, still lurking in almost every educational system, that every mistake is the result of wrongdoing or stupidity. For that implies that everyone other than the stupid and the wrongdoers is infallible. Headline writers should not call every failed military strike “botched;” courts should not call every medical tragedy malpractice, even if it’s true that they “shouldn’t have happened” in the sense that lessons can be learned to prevent them from happening again. “We are all alike,” as Popper remarked, “in our infinite ignorance.” And this is a good and hopeful thing, for it allows for a future of unbounded improvement.

Fallibilism, correctly understood, implies the possibility, not the impossibility, of knowledge, because the very concept of error, if taken seriously, implies that truth exists and can be found. The inherent limitation on human reason, that it can never find solid foundations for ideas, does not constitute any sort of limit on the creation of objective knowledge nor, therefore, on progress. The absence of foundation, whether infallible or probable, is no loss to anyone except tyrants and charlatans, because what the rest of us want from ideas is their content, not their provenance: If your disease has been cured by medical science, and you then become aware that science never proves anything but only disproves theories (and then only tentatively), you do not respond “oh dear, I’ll just have to die, then.” (…)

The theory of knowledge is a tightrope that is the only path from A to B, with a long, hard drop for anyone who steps off on one side into “knowledge is impossible, progress is an illusion” or on the other side into “I must be right, or at least probably right.” Indeed, infallibilism and nihilism are twins. Both fail to understand that mistakes are not only inevitable, they are correctable (fallibly). Which is why they both abhor institutions of substantive criticism and error correction, and denigrate rational thought as useless or fraudulent. They both justify the same tyrannies. They both justify each other.

I must now apologize for trying to trick you earlier: All the ideas that I suggested we might know infallibly are in fact falsehoods. “Two plus two” of course isn’t “four” as you’d discover if you wrote “2+2” in an arithmetic test when asked to add two and two. If we were infallible about matters of pure logic, no one would ever fail a logic test either. Stubbing your toe does not always hurt if you are focused on some overriding priority like rescuing a comrade in battle. And as for knowing that “I” exist because I think—note that your knowledge that you think is only a memory of what you did think, a second or so ago, and that can easily be a false memory. (For discussions of some fascinating experiments demonstrating this, see Daniel Dennett’s book Brainstorms.) Moreover, if you think you are Napoleon, the person you think must exist because you think, doesn’t exist.

And the general theory of relativity denies that gravity exerts a force on falling objects. The pope would actually be on firm ground if he were to concur with that ex cathedra. Now, are you going to defer to my authority as a physicist about that? Or decide that modern physics is a sham? Or are you going to decide according to whether that claim really has survived all rational attempts to refute it?”

David Deutsch, a British physicist at the University of Oxford. He is a non-stipendiary Visiting Professor in the Department of Atomic and Laser Physics at the Centre for Quantum Computation (CQC) in the Clarendon Laboratory of the University of Oxford, Why It’s Good To Be Wrong, Nautilus, 2013. (Illustration by Gérard DuBois)

See also:

David Deutsch: A new way to explain explanation, TED, 2009
David Deutsch on knowledge as crafted self-similarity
David Deutsch on Artificial Intelligence

Jan 27th, Sun

Daniel C. Dennett on an attempt to understand the mind; autonomous neurons, culture and computational architecture


"What Darwin and Turing had both discovered, in their different ways, was the existence of competence without comprehension."

— Daniel C. Dennett, What Darwin’s theory of evolution teaches us about Alan Turing and artificial intelligence, Lapidarium

"I’m trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine. This is called homuncular functionalism, because you take the whole person. You break the whole person down into two or three or four or seven sub persons that are basically agents. They’re homunculi, and this looks like a regress, but it’s only a finite regress, because you take each of those in turn and you break it down into a group of stupider, more specialized homunculi, and you keep going until you arrive at parts that you can replace with a machine, and that’s a great way of thinking about cognitive science. It’s what good old-fashioned AI tried to do and still trying to do.

The idea is basically right, but when I first conceived of it, I made a big mistake. I was at that point enamored of the McCulloch-Pitts logical neuron. McCulloch and Pitts had put together the idea of a very simple artificial neuron, a computational neuron, which had multiple inputs and a single branching output and a threshold for firing, and the inputs were either inhibitory or excitatory. They proved that in principle a neural net made of these logical neurons could compute anything you wanted to compute. So this was very exciting. It meant that basically you could treat the brain as a computer and treat the neuron as a sort of basic switching element in the computer, and that was certainly an inspiring over-simplification. Everybody knew it was an over-simplification, but people didn’t realize how much, and more recently it’s become clear to me that it’s a dramatic over-simplification, because each neuron, far from being a simple logical switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.
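To make the model concrete, here is a minimal sketch (in Python, not from Dennett’s talk) of a McCulloch-Pitts logical neuron in its simplified weighted-threshold form: binary inputs arrive over excitatory (+1) or inhibitory (-1) connections, and the unit fires when the summed input reaches its threshold. The function names and the gate examples are illustrative assumptions, not anything McCulloch and Pitts or Dennett specify.

```python
# Illustrative sketch (not from Dennett's talk): a McCulloch-Pitts "logical
# neuron" in its simplified weighted-threshold form. Inputs are binary,
# connections are excitatory (+1) or inhibitory (-1), and the unit fires
# when the summed input reaches a threshold.

def logical_neuron(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Units like this can implement basic logic gates, which is the sense in which
# a network of them can, in principle, compute anything computable.
AND = lambda a, b: logical_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: logical_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    logical_neuron([a],    [-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(1) == 0 and NOT(0) == 1
```

The contrast Dennett goes on to draw is with exactly this kind of switch: the sketch has no agenda, no plasticity, and nothing to compete for, whereas, on his account, a real neuron is a little agent with an agenda.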

The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and that have to be properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it’s fed by a lot of different currents.

Evolutionary biologist David Haig has some lovely papers on intrapersonal conflicts where he’s talking about how even at the level of the genetics, even at the level of the conflict between the genes you get from your mother and the genes you get from your father, the so-called madumnal and padumnal genes, those are in opponent relations and if they get out of whack, serious imbalances can happen that show up as particular psychological anomalies.

We’re beginning to come to grips with the idea that your brain is not this well-organized hierarchical control system where everything is in order, a very dramatic vision of bureaucracy. In fact, it’s much more like anarchy with some elements of democracy. Sometimes you can achieve stability and mutual aid and a sort of calm united front, and then everything is hunky-dory, but then it’s always possible for things to get out of whack and for one alliance or another to gain control, and then you get obsessions and delusions and so forth.

You begin to think about the normal well-tempered mind, in effect, the well-organized mind, as an achievement, not as the base state, something that is only achieved when all is going well, but still, in the general realm of humanity, most of us are pretty well put together most of the time. This gives a very different vision of what the architecture is like, and I’m just trying to get my head around how to think about that. (…)

The vision of the brain as a computer, which I still champion, is changing so fast. The brain’s a computer, but it’s so different from any computer that you’re used to. It’s not like your desktop or your laptop at all, and it’s not like your iPhone except in some ways. It’s a much more interesting phenomenon. What Turing gave us for the first time (and without Turing you just couldn’t do any of this) is a way of thinking in a disciplined way about phenomena that have, as I like to say, trillions of moving parts. Until the late 20th century, nobody knew how to take seriously a machine with a trillion moving parts. It’s just mind-boggling.

You couldn’t do it, but computer science gives us the ideas, the concepts of levels, virtual machines implemented in virtual machines implemented in virtual machines and so forth. We have these nice ideas of recursive reorganization of which your iPhone is just one example and a very structured and very rigid one at that.

We’re getting away from the rigidity of that model, which was worth trying for all it was worth. You go for the low-hanging fruit first. First, you try to make minds as simple as possible. You make them as much like digital computers, as much like von Neumann machines, as possible. It doesn’t work. Now we know pretty well why it doesn’t work. So you’re going to have a parallel architecture because, after all, the brain is obviously massively parallel.

It’s going to be a connectionist network. Although we know many of the talents of connectionist networks, how do you knit them together into one big fabric that can do all the things minds do? Who’s in charge? What kind of control system? Control is the real key, and you begin to realize that control in brains is very different from control in computers. Control in your commercial computer is very much a carefully designed top-down thing.

You really don’t have to worry about one part of your laptop going rogue and trying out something on its own that the rest of the system doesn’t want to do. No, they’re all slaves. If they’re agents, they’re slaves. They are prisoners. They have very clear job descriptions. They get fed every day. They don’t have to worry about where the energy’s coming from, and they’re not ambitious. They just do what they’re asked to do and do it brilliantly with only the slightest tint of comprehension. You get all the power of computers out of these mindless little robotic slave prisoners, but that’s not the way your brain is organized.

Each neuron is imprisoned in your brain. I now think of these as cells within cells, as cells within prison cells. Realize that every neuron in your brain, every human cell in your body (leaving aside all the symbionts), is a direct descendant of eukaryotic cells that lived and fended for themselves for about a billion years as free-swimming, free-living little agents. They fended for themselves, and they survived.

They had to develop an awful lot of know-how, a lot of talent, a lot of self-protective talent to do that. When they joined forces into multi-cellular creatures, they gave up a lot of that. They became, in effect, domesticated. They became part of larger, more monolithic organizations. My hunch is that that’s true in general. We don’t have to worry about our muscle cells rebelling against us, or anything like that. When they do, we call it cancer, but in the brain I think that (and this is my wild idea) maybe only in one species, us, and maybe only in the obviously more volatile parts of the brain, the cortical areas, some little switch has been thrown in the genetics that, in effect, makes our neurons a little bit feral, a little bit like what happens when you let sheep or pigs go feral, and they recover their wild talents very fast.

Maybe a lot of the neurons in our brains are not just capable but, if you like, motivated to be more adventurous, more exploratory or risky in the way they comport themselves, in the way they live their lives. They’re struggling amongst themselves with each other for influence, just for staying alive, and there’s competition going on between individual neurons. As soon as that happens, you have room for cooperation to create alliances, and I suspect that a more free-wheeling, anarchic organization is the secret of our greater capacities of creativity, imagination, thinking outside the box and all that, and the price we pay for it is our susceptibility to obsessions, mental illnesses, delusions and smaller problems.

We got risky brains that are much riskier than the brains of other mammals even, even more risky than the brains of chimpanzees, and this could be partly a matter of a few simple mutations in control genes that release some of the innate competitive talent that is still there in the genomes of the individual neurons. But I don’t think that genetics is the level to explain this. You need culture to explain it.

'Culture creates a whole new biosphere'

This, I speculate, is a response to our invention of culture; culture creates a whole new biosphere, in effect, a whole new cultural sphere of activity where there’s opportunities that don’t exist for any other brain tissues in any other creatures, and that this exploration of this space of cultural possibility is what we need to do to explain how the mind works.

Everything I just said is very speculative. I’d be thrilled if 20 percent of it was right. It’s an idea, a way of thinking about brains and minds and culture that is, to me, full of promise, but it may not pan out. I don’t worry about that, actually. I’m content to explore this, and if it turns out that I’m just wrong, I’ll say, “Oh, okay. I was wrong. It was fun thinking about it,” but I think I might be right.

I’m not myself equipped to work on a lot of the science; other people could work on it, and they already are in a way. The idea of selfish neurons has already been articulated by Sebastian Seung of MIT in a brilliant keynote lecture he gave at the Society for Neuroscience meeting in San Diego a few years ago. I thought, oh, yeah, selfish neurons, selfish synapses. Cool. Let’s push that and see where it leads. But there are many ways of exploring this. One of the still unexplained, so far as I can tell, and amazing features of the brain is its tremendous plasticity.

Mike Merzenich sutured a monkey’s fingers together so that it didn’t need as much cortex to represent two separate individual digits, and pretty soon the cortical regions that were representing those two digits shrank, making that part of the cortex available to use for other things. When the sutures were removed, the cortical regions soon resumed pretty much their earlier dimensions. If you blindfold yourself for eight weeks, as Alvaro Pascual-Leone does in his experiments, you find that your visual cortex starts getting adapted for Braille, for haptic perception, for touch.

The way the brain spontaneously reorganizes itself in response to trauma of this sort, or just novel experience, is itself one of the most amazing features of the brain, and if you don’t have an architecture that can explain how that could happen and why that is, your model has a major defect. I think you really have to think in terms of individual neurons as micro-agents, and ask what’s in it for them?

Why should these neurons be so eager to pitch in and do this other work just because they don’t have a job? Well, they’re out of work. They’re unemployed, and if you’re unemployed, you’re not getting your neuromodulators. If you’re not getting your neuromodulators, your neuromodulator receptors are going to start disappearing, and pretty soon you’re going to be really out of work, and then you’re going to die.

In this regard, I think of John Holland’s work on the emergence of order. His example is New York City. You can always find a place where you can get gefilte fish, or sushi, or saddles or just about anything under the sun you want, and you don’t have to worry about a state bureaucracy that is making sure that supplies get through. No. The market takes care of it. The individual web of entrepreneurship and selfish agency provides a host of goods and services, and is an extremely sensitive instrument that responds to needs very quickly.

Until the lights go out. Well, we’re all at the mercy of the power man. I am quite concerned that we’re becoming hyper-fragile as a civilization, and we’re becoming so dependent on technologies that are not as reliable as they should be, that have so many conditions that have to be met for them to work, that we may specialize ourselves into some very serious jams. But in the meantime, thinking about the self-organizational powers of the brain as very much like the self-organizational powers of a city is not a bad idea. It just reeks of over-enthusiastic metaphor, though, and it’s worth reminding ourselves that this idea has been around since Plato.

Plato analogizes the mind of a human being to the state. You’ve got the rulers and the guardians and the workers. This idea that a person is made of lots of little people is comically simpleminded in some ways, but that doesn’t mean it isn’t, in a sense, true. We shouldn’t shrink from it just because it reminds us of simpleminded versions that have been long discredited. Maybe some not so simpleminded version is the truth.

There are a lot of cultural fleas

My next major project will be trying to take another hard look at cultural evolution and look at the different views of it and see if I can achieve a sort of bird’s eye view and establish what role, if any, is there for memes or something like memes and what are the other forces that are operating. We are going to have to have a proper scientific perspective on cultural change. The old-fashioned, historical narratives are wonderful, and they’re full of gripping detail, and they’re even sometimes right, but they only cover a small proportion of the phenomena. They only cover the tip of the iceberg.

Basically, the model that we have and have used for several thousand years is the model that culture consists of treasures, cultural treasures. Just like money, or like tools and houses, you bequeath them to your children, and you amass them, and you protect them, and because they’re valuable, you maintain them and repair them, and then you hand them on to the next generation and some societies are rich, and some societies are poor, but it’s all goods. I think that vision is true of only the tip of the iceberg.

Most of the regularities in culture are not treasures. It’s not all opera and science and fortifications and buildings and ships. It includes all kinds of bad habits and ugly patterns and stupid things that don’t really matter but that somehow have got a grip on a society and that are part of the ecology of the human species in the same way that mud, dirt and grime and fleas are part of the world that we live in. They’re not our treasures. We may give our fleas to our children, but we’re not trying to. It’s not a blessing. It’s a curse, and I think there are a lot of cultural fleas. There are lots of things that we pass on without even noticing that we’re doing it and, of course, language is a prime case of this: very little deliberate, intentional language instruction goes on or has to go on.

Kids that are raised with parents pointing out individual objects and saying, “See, it’s a ball. It’s red. Look, Johnny, it’s a red ball, and this is a cow, and look at the horsy” learn to speak, but so do kids who don’t have that patient instruction. You don’t have to do that. Your kids are going to learn ball and red and horsy and cow just fine without that, even if they’re quite severely neglected. That’s not a nice observation to make, but it’s true. It’s almost impossible not to learn language if you don’t have some sort of serious pathology in your brain.

Compare that with chimpanzees. There are hundreds of chimpanzees who have spent their whole lives in human captivity. They’ve been institutionalized. They’ve been like prisoners, and in the course of the day they hear probably about as many words as a child does. They never show any interest. They never apparently get curious about what those sounds are for. They can hear all the speech, but it’s like the rustling of the leaves. It just doesn’t register on them as worth attention.

But kids are tuned for that, and it might be a very subtle tuning. I can imagine a few small genetic switches, which, if they were just in a slightly different position, would make chimpanzees just as pantingly eager to listen to language as human babies are, but they’re not, and what a difference it makes in their world! They never get to share discoveries the way we do and to share our learning. That, I think, is the single feature about human beings that distinguishes us most clearly from all others: we don’t have to reinvent the wheel. Our kids get the benefit of not just what grandpa and grandma and great grandpa and great grandma knew. They get the benefit of basically what everybody in the world knew in the years when they go to school. They don’t have to invent calculus or long division or maps or the wheel or fire. They get all that for free. It just comes as part of the environment. They get incredible treasures, cognitive treasures, just by growing up. (…)

A lot of naïve thinking by scientists about free will

Moving Naturalism Forward" was a nice workshop that Sean Carroll put together out in Stockbridge a couple of weeks ago, and it was really interesting. I learned a lot. I learned more about how hard it is to do some of these things and that’s always useful knowledge, especially for a philosopher.

If we take seriously, as I think we should, the role that Socrates proposed for us as midwives of thinking, then we want to know what the blockades are, what the imagination blockades are, what people have a hard time thinking about, and among the things that struck me about the Stockbridge conference were the signs of people really having a struggle to take seriously some ideas which I think they should take seriously. (…)

I realized I really have my work cut out for me in a way that I had hoped not to discover. There’s still a lot of naïve thinking by scientists about free will. I’ve been talking about it quite a lot, and I do my best to undo some bad thinking by various scientists. I’ve had some modest success, but there’s a lot more that has to be done on that front. I think it’s very attractive to scientists to think that here’s this several-millennia-old philosophical idea, free will, and they can just hit it out of the ballpark, which I’m sure would be nice if it was true.

It’s just not true. I think they’re well intentioned. They’re trying to clarify, but they’re really missing a lot of important points. I want a naturalistic theory of human beings and free will and moral responsibility as much as anybody there, but I think you’ve got to think through the issues a lot better than they’ve done, and this, happily, shows that there’s some real work for philosophers.

Philosophers have done some real work that the scientists jolly well should know. Here’s an area where it was one of the few times in my career when I wanted to say to a bunch of scientists, “Look. You have some reading to do in philosophy before you hold forth on this. There really is some good reading to do on these topics, and you need to educate yourselves.”

A combination of arrogance and cravenness

The figures about American resistance to evolution are still depressing, and you finally have to realize that there’s something structural. It’s not that people are stupid, and I think it’s clear that people, everybody, me, you, we all have our authorities, our go-to people whose word we trust. If you want to know about the economic situation in Greece, for instance, you check it out with somebody whose opinion on that you think is worth taking seriously. We don’t try to work it out for ourselves. We find some expert that we trust, and right around the horn, whatever the issues are, we have our experts, and so a lot of people, on matters of science, have their pastors as their experts. This is their local expert.

I don’t blame them. I wish they were more careful about vetting their experts and making sure that they found good experts. They wouldn’t choose an investment advisor, I think, as thoughtlessly as they go along with their pastor. I blame the pastors, but where do they get their ideas? Well, they get them from the hierarchies of their churches. Where do they get their ideas? Up at the top, I figure there’s some people that really should be ashamed of themselves. They know better.

They’re lying, and when I get a chance, I try to ask them that. I say, “Doesn’t it bother you that your grandchildren are going to want to know why you thought you had to lie to everybody about evolution?” I mean, really. They’re lies. They’ve got to know that these are lies. They’re not that stupid, and I just would love them to worry about what their grandchildren and great grandchildren would say about how their ancestors were so craven and so arrogant. It’s a combination of arrogance and cravenness.

We now have to start working on that structure of experts and thinking, why does that persist? How can it be that so many influential, powerful, wealthy, in-the-public people can be so confidently wrong about evolutionary biology? How did that happen? Why does it happen? Why does it persist? It really is a bit of a puzzle if you think about how they’d be embarrassed not to know that the world is round. I think that would be deeply embarrassing to be that benighted, and they’d realize it. They’d be embarrassed not to know that HIV is the vector of AIDS. They’d be embarrassed to not understand the way the tides are produced by the gravitational forces of the moon and the sun. They may not know the details, but they know that the details are out there. They could learn them in 20 minutes if they wanted to. How did they get themselves in the position where they could so blithely trust people who they’d never buy stocks and bonds from? They’d never trust a child’s operation to a doctor that was as ignorant and as ideological as these people. It is really strange. I haven’t got to the bottom of that. (…)

This pernicious sort of lazy relativism

[T]here’s a sort of enforced hypocrisy where the pastors speak from the pulpit quite literally, and if you weren’t listening very carefully, you’d think: oh my gosh, this person really believes all this stuff. But they’re putting in just enough hints for the sophisticates in the congregation so that the sophisticates are supposed to understand: Oh, no. This is all just symbolic. This is all just metaphorical. And that’s the way they want it, but of course, they could never admit it. You couldn’t put a little neon sign up over the pulpit that says, “Just metaphor, folks, just metaphor.” It would destroy the whole thing.

You can’t admit that it’s just metaphor even when you insist when anybody asks that it’s just metaphor, and so this professional doubletalk persists, and if you study it for a while the way Linda [pdf] and I have been doing, you come to realize that’s what it is, and that means they’ve lost track of what it means to tell the truth. Oh, there are so many different kinds of truth. Here’s where postmodernism comes back to haunt us. What a pernicious bit of intellectual vandalism that movement was! It gives license to this pernicious sort of lazy relativism.

One of the most chilling passages in that great book by William James, The Varieties of Religious Experience, is where he talks about soldiers in the military: “Far better is it for an army to be too savage, too cruel, too barbarous, than to possess too much sentimentality and human reasonableness.” This is, to me, a very sobering reflection. Let’s talk about when we went into Iraq. There was Rumsfeld saying, “Oh, we don’t need a big force. We don’t need a big force. We can do this on the cheap,” and there were other people, retrospectively we can say they were wiser, who said, “Look, if you’re going to do this at all, you want to go in there with such overpowering, such overwhelming numbers and force that you can really intimidate the population, and you can really maintain the peace and just get the population to sort of roll over, and that way actually less people get killed, less people get hurt. You want to come in with an overwhelming show of force.”

The principle is actually one that’s pretty well understood. If you don’t want to have a riot, have four times more police there than you think you need. That’s the way not to have a riot and nobody gets hurt because people are not foolish enough to face those kinds of odds. But they don’t think about that with regard to religion, and it’s very sobering. I put it this way.

Suppose that we face some horrific, terrible enemy, another Hitler or something really, really bad, and here’s two different armies that we could use to defend ourselves. I’ll call them the Gold Army and the Silver Army; same numbers, same training, same weaponry. They’re all armored and armed as well as we can do. The difference is that the Gold Army has been convinced that God is on their side and this is the cause of righteousness, and it’s as simple as that. The Silver Army is entirely composed of economists. They’re all making side insurance bets and calculating the odds of everything.

Which army do you want on the front lines? It’s very hard to say you want the economists, but think of what that means. What you’re saying is we’ll just have to hoodwink all these young people into some false beliefs for their own protection and for ours. It’s extremely hypocritical. It is a message that I recoil from, the idea that we should indoctrinate our soldiers. In the same way that we inoculate them against diseases, we should inoculate them against the economists’—or philosophers’—sort of thinking, since it might lead them to think: am I so sure this cause is just? Am I really prepared to risk my life to protect it? Do I have enough faith in my commanders that they’re doing the right thing? What if I’m clever enough and thoughtful enough to figure out a better battle plan, and I realize that this is futile? Am I still going to throw myself into the trenches? It’s a dilemma that I don’t know what to do about, although I think we should confront it at least.”

Daniel C. Dennett is University Professor, Professor of Philosophy, and Co-Director of the Center for Cognitive Studies at Tufts University, The normal well-tempered mind, Edge, Jan 8, 2013.

'The Intentional Stance'

"Dennett favours the theory (first suggested by Richard Dawkins) that our social learning has given us a second information highway (in addition to the genetic highway) where the transmission of variant cultural information (memes) takes place via differential replication. Software viruses, for example, can be understood as memes, and as memes evolve in complexity, so does human cognition: “The mind is the effect, not the cause.” (…)

Daniel Dennett: "Natural selection is not gene centrist and nor is biology all about genes, our comprehending minds are a result of our fast evolving culture. Words are memes that can be spoken and words are the best example of memes. Words have a genealogy and it’s easier to trace the evolution of a single word than the evolution of a language." (…)

I don’t like theory of mind. I coined the phrase The Intentional Stance. [Dennett’s Intentional Stance encompasses attributing feelings, memories and beliefs to others as well as mindreading and predicting what someone will do next.] Do you need a theory to ride a bike? (…)

Riding a bike is a craft – you don’t need a theory. Autistic people might need a theory with which to understand other minds, but the rest of us don’t. If a human is raised without social interaction and without language they would be hugely disabled and probably lacking in empathy.”

Daniel C. Dennett, Daniel Dennett: ‘I don’t like theory of mind’ – interview, The Guardian, 22 March 2013.

See also:

Steven Pinker on the mind as a system of ‘organs of computation’, Lapidarium notes
Quantum minds: Why we think like quarks - ‘To be human is to be quantum’, Lapidarium notes
Human Connectome Project: understanding how different parts of the brain communicate to each other
How Free Is Your Will?, Lapidarium notes
Susan Blackmore on memes and “temes”
Mind & Brain tag on Lapidarium notes

Aug 10th, Fri

God and the Ivory Tower. What we don’t understand about religion just might kill us

(Illustration: Medieval miniature painting of the Siege of Antioch (1490). The Crusades were a series of military campaigns fought mainly between Christian Europe and Muslims. Shown here is a battle scene from the First Crusade.)

"The era of world struggle between the great secular ideological -isms that began with the French Revolution and lasted through the Cold War (republicanism, anarchism, socialism, fascism, communism, liberalism) is passing on to a religious stage. Across the Middle East and North Africa, religious movements are gaining social and political ground, with election victories by avowedly Islamic parties in Turkey, Palestine, Egypt, Tunisia, and Morocco. As Israel’s National Security Council chief, Gen. Yaakov Amidror (a religious man himself), told me on the eve of Tunisia’s elections last October, “We expect Islamist parties to soon dominate all governments in the region, from Afghanistan to Morocco, except for Israel.”

On a global scale, Protestant evangelical churches (together with Pentecostalists) continue to proliferate, especially in Latin America, but also keep pace with the expansion of fundamentalist Islam in southern Africa and eastern and southern Asia. In Russia, a clear majority of the population remains religious despite decades of forcibly imposed atheism. Even in China, where the government’s commission on atheism has the Sisyphean job of making that country religion-free, religious agitation is on the rise. And in the United States, a majority says it wants less religion in politics, but an equal majority still will not vote for an atheist as president.

But if reams of social scientific analysis have been produced on religion’s less celestial cousins — from the nature of perception and speech to how we rationalize and shop — faith is not a matter that rigorous science has taken seriously. To be sure, social scientists have long studied how religious practices correlate with a wide range of economic, social, and political issues. Yet, for nearly a century after Harvard University psychologist William James’s 1902 masterwork, The Varieties of Religious Experience, there was little serious investigation of the psychological structure or neurological and biological underpinnings of religious belief that determine how religion actually causes behavior. And that’s a problem if science aims to produce knowledge that improves the human condition, including a lessening of cultural conflict and war.

Religion molds a nation in which it thrives, sometimes producing solidarity and sacred causes so powerful that citizens are willing to kill or die for a common good (as when Judea’s Jews around the time of Christ persisted in rebellion unto political annihilation in the face of the Roman Empire’s overwhelming military might). But religion can also hinder a society’s ability to work out differences with others, especially if those others don’t understand what religion is all about. That’s the mess we find ourselves in today, not only among different groups of Americans in the so-called culture wars, but between secular and Judeo-Christian America and many Muslim countries.

Time and again, countries go to war without understanding the transcendent drives and dreams of adversaries who see a very different world. Yet we needn’t fly blindly into the storm.

Science can help us understand religion and the sacred just as it can help us understand the genome or the structure of the universe. This, in turn, can make policy better informed.

Fortunately, the last few years show progress in scientific studies of religion and the sacred, though headwinds remain strong. Across history and cultures, religion has often knit communities together under the rule of sentient, but immaterial deities — that is, spiritual beings whose description is logically contradictory and empirically unfalsifiable. Cross-cultural studies pioneered by anthropologist Pascal Boyer show that these miraculous features — talking bushes, horses that leap into the sky — make lasting impressions on people and thereby increase the likelihood that they will be passed down to the next generation. Implausibility also facilitates cultural transmission in a more subtle manner — fostering adaptability of religious beliefs by opening the door to multiple interpretations (as with metaphors or weekly sermons).

And the greater the investment in outlandishness, the better. This is because adherence to apparently absurd beliefs means incurring costs — surviving without electricity, for example, if you are Amish — which help identify members who are committed to the survival of a group and cannot be lured away. The ease of identifying true believers, in turn, builds trust and galvanizes group solidarity for common defense.

To test this hypothesis, anthropologist Richard Sosis and his colleagues studied 200 communes founded in the United States in the 19th century. If shared religious beliefs really did foster loyalty, they reasoned, then communes formed out of religious conviction should survive longer than those motivated by secular ideologies such as socialism. Their findings were striking: Just 6 percent of the secular communes were still functioning 20 years after their founding, compared with 39 percent of the religious communes.

It is not difficult to see why groups formed for purely rational reasons can be more vulnerable to collapse: Background conditions change, and it might make sense to abandon one group in favor of another. Interestingly, recent research echoes the findings of 14th-century historian Ibn Khaldun, who argued that long-term differences among North African Muslim dynasties with comparable military might “have their origin in religion … [and] group feeling [wherein] mutual cooperation and support flourish.” The more religious societies, he argued, endured the longest.

For this reason, even ostensibly secular countries and transnational movements usually contain important quasi-religious rituals and beliefs. Think of sacred songs and ceremonies, or postulations that “providence” or “nature” bestows equality and inalienable rights (though, for about 99.9 percent of our species’ existence, slavery and oppression of minorities were more standard fare). These sacred values act as moral imperatives that inspire nonrational sacrifices in cooperative endeavors such as war.

Insurgents, revolutionaries, and terrorists all make use of this logic, generating outsized commitment that allows them to resist and often prevail against materially stronger foes. Consider the American revolutionaries who defied the greatest empire of their age by pledging “our Lives, our Fortunes and our sacred Honor” for the cause of “liberty or death.” Surely they were aware of how unlikely they were to succeed, given the vast disparities in material resources, manpower, and training. As Osama Hamdan, the ranking Hamas politburo member for external affairs, put it to me in Damascus, Syria, “George Washington was fighting the strongest military in the world, beyond all reason. That’s what we’re doing. Exactly.”

But the same logic that makes religious and sacred beliefs more likely to endure can make them impervious to compromise. Based on interviews, experiments, and surveys with Palestinians, Israelis, Indonesians, Indians, Afghans, and Iranians, my research with psychologists Jeremy Ginges, Douglas Medin, and others demonstrates that offering people material incentives (large amounts of money, guarantees for a life free of political violence) to compromise sacred values can backfire, increasing stated willingness to use violence. Such backfire effects occur both for convictions with clear religious investment (Jerusalem, sharia law) and for those that are at least initially nonreligious (Iran’s right to a nuclear capability, Palestinian refugees’ right of return).

According to a 2010 study, for example, most Iranians think there is nothing sacred about their government’s nuclear program. But for a sizable minority — 13 percent of the population — the quest for a nuclear capability (more focused on energy than weapons) had, through religious rhetoric, become a sacred subject. This group, which tends to be close to the regime, now believes a nuclear program is bound up with national identity and with Islam itself. As a result, offering material rewards or punishments to abandon the program only increases anger and support for it.

Although this sacralization of initially secular issues confounds standard “business-like” negotiation tactics, my work with political scientist Robert Axelrod interviewing political leaders in the Middle East and elsewhere indicates that strong symbolic gestures (sincere apologies, demonstrating respect for the other’s values) generate surprising flexibility, even among militants, and may enable subsequent material negotiations. Thus, we find that Palestinian leaders and their supporting populations are generally willing to accept Israeli offers of economic improvement only after issues of recognition are addressed. Even purely symbolic statements accompanied by no material action, such as “we recognize your suffering” or “we respect your rights in Jerusalem,” diminish support for violence, including suicide terrorism. This is particularly promising because symbolic gestures tied to religious notions that are open to interpretation might potentially be reframed without compromising their absolute “truth.” For example, Jerusalem might be reconceived less as a place than as a portal to heaven, where earthly access to the portal suffices.

If these things are worth knowing, why do scientists still shun religion?

Part of the reason is that most scientists are staunchly nonreligious. If you look at the prestigious U.S. National Academy of Sciences or Britain’s Royal Society, well over 90 percent of members are non-religious. That may help explain why some of the bestselling books by scientists about religion aren’t about the science of religion as much as the reasons that it’s no longer necessary to believe. “New Atheists” have aggressively sought to discredit religion as the chief cause of much human misery, militating for its demise. They contend that science has now answered questions about humans’ origins and place in the world that only religion sought to answer in the days before evolutionary science, and that humankind no longer needs the broken crutch of faith.

But the idea that we can simply argue away religion has little factual support. Although a recent study by psychologists Will Gervais and Ara Norenzayan indicates that people are less prone to think religiously when they think analytically, other studies suggest that seemingly contrary evidence rarely undermines religious belief, especially among groups welded by ritualized sacrifice in the face of outside threats. Norenzayan and others also find that belief in gods and miracles intensifies when people are primed with awareness of death or when facing danger, as in wartime.

Moreover, the chief complaint against religion — that it is history’s prime instigator of intergroup conflict — does not withstand scrutiny. Religious issues motivate only a small minority of recorded wars. The Encyclopedia of Wars surveyed 1,763 violent conflicts across history; only 123 (7 percent) were religious. A BBC-sponsored “God and War” audit, which evaluated major conflicts over 3,500 years and rated them on a 0-to-5 scale for religious motivation (Punic Wars = 0, Crusades = 5), found that more than 60 percent had no religious motivation. Less than 7 percent earned a rating greater than 3. There was little religious motivation for the internecine Russian and Chinese conflicts or the world wars responsible for history’s most lethal century of international bloodshed.

Indeed, inclusive concepts such as “humanity” arguably emerged with the rise of universal religions. Sociologist Rodney Stark reveals that early Christianity became the Roman Empire’s majority religion not through conquest, but through a social process grounded in trust. Repeated acts of altruism, such as caring for non-Christians during epidemics, facilitated the expansion of social networks that were invested in the religion. Likewise, studies by behavioral economist Joseph Henrich and colleagues on contemporary foragers, farmers, and herders show that professing a world religion is correlated with greater fairness toward passing strangers. This research helps explain what’s going on in sub-Saharan Africa, where Islam is spreading rapidly. In Rwanda, for example, people began converting to Islam in droves after Muslims systematically risked their lives to protect Christians and animists from genocide when few others cared.

Although surprisingly few wars are started by religions, once they start, religion — and the values it imposes — can play a critical role. When competing interests are framed in terms of religious and sacred values, conflict may persist for decades, even centuries. Disputes over otherwise mundane phenomena then become existential struggles, as when land becomes “Holy Land.” Secular issues become sacralized and nonnegotiable, regardless of material rewards or punishments. In a multiyear study, our research group found that Palestinian adolescents who perceived strong threats to their communities and were highly involved in religious ritual were most likely to see political issues, like the right of refugees to return to homes in Israel, as absolute moral imperatives. These individuals were thus opposed to compromise, regardless of the costs. It turns out there may be a neurological component to such behavior: Our work with Gregory Berns and his neuroeconomics team suggests that such values are processed in the brain as duties rather than utilitarian calculations; neuroimaging reveals that violations of sacred values trigger emotional responses consistent with sentiments of moral outrage.

Historical and experimental studies suggest that the more antagonistic a group’s neighborhood, the more tightly that group will cling to its sacred values and rituals. The result is enhanced solidarity, but also increased potential for conflict toward other groups. Investigation of 60 small-scale societies reveals that groups that experience the highest rates of conflict (warfare) endure the costliest rites (genital mutilation, scarification, etc.). Likewise, research in India, Mexico, Britain, Russia, and Indonesia indicates that greater participation in religious ritual in large-scale societies is associated with greater parochial altruism — that is, willingness to sacrifice for one’s own group, such as Muslims or Christians, but not for outsiders — and, in relevant contexts, support for suicide attacks. This dynamic is behind the paradoxical reality that the world finds itself in today: Modern global multiculturalism is increasingly challenged by fundamentalist movements aimed at reviving group loyalty through greater ritual commitments to ideological purity.

So why does it matter that we have moved past the -isms and into an era of greater religiosity? In an age where religious and sacred causes are resurgent, there is urgent need for scientific effort to understand them. Now that humankind has acquired through science the power to destroy itself with nuclear weapons, we cannot afford to let science ignore religion and the sacred, or let scientists simply try to reason them away. Policymakers should leverage scientific understanding of what makes religion so potent a force for both cooperation and conflict, to help increase the one and lessen the other.

Scott Atran, American and French anthropologist at France’s National Center for Scientific Research, the University of Michigan, John Jay College, and ARTIS Research, who has studied violence and interviewed terrorists, God and the Ivory Tower, Foreign Policy, Aug 6, 2012.

See also:

Scott Atran on Why War Is Never Really Rational
‘We’ vs ‘Others’: Russell Jacoby on why we should fear our neighbors more than strangers
The Psychology of Violence (a modern rethink of the psychology of shame and honour in preventing it), Lapidarium notes
Religion tag on Lapidarium notes

May
17th
Thu
permalink

The Self Illusion: How the Brain Creates Identity

            

'The Self'

"For the majority of us the self is a very compulsive experience. I happen to think it’s an illusion and certainly the neuroscience seems to support that contention. Simply from the logical positions that it’s very difficult to, without avoiding some degree of infinite regress, to say a starting point, the trail of thought, just the fractionation of the mind, when we see this happening in neurological conditions. The famous split-brain studies showing that actually we’re not integrated entities inside our head, rather we’re the output of a multitude of unconscious processes.

I happen to think the self is a narrative, and I use the division of the self that was drawn by William James: the “I” (the experience of the conscious self) and the “me” (personal identity, how you would describe yourself in terms of where you are from and everything that makes you up, your predilections and your wishes for the future). Both the “I”, which is sentient of the “me”, and the “me”, which is a story of who you are, I think are stories. They’re constructs and narratives. I mean that in the sense that a story is a reduction, or at least a coherent framework that has some kind of causal coherence.

When I go out and give public lectures I like to illustrate the weaknesses of the “I” by using visual illusions as the most common examples. But there are other kinds of illusions you can introduce which reveal to people that their conscious experience is really just a fraction of what’s actually going on; it certainly is not a true reflection of all the mechanisms that are generating it. Visual illusions make that very obvious. The thing about visual illusion effects is that even when they’re explained to you, you can’t help but see them, and that’s interesting: you can’t divorce yourself from the mechanisms that are creating the illusion and the mind that’s experiencing the illusion.

The sense of personal identity, this is where we’ve been doing experimental work showing the importance that we place upon episodic memories, autobiographical memories. In our duplication studies for example, children are quite willing to accept that you could copy a hamster with all its physical properties that you can’t necessarily see, but what you can’t copy very easily are the episodic memories that one hamster has had.

This actually resonates with the ideas of the philosopher John Locke, who also argued that personal identity really depends on autobiographical or episodic memories: you are the sum of your memories. That, of course, is something that fractionates and fragments in various forms of dementia. As the person loses the capacity to retrieve memories, or these memories become distorted, then the identity of the person, the personality, can change, amongst other things. So certainly the memories are very important.

As we all know, memory is notoriously fallible. It’s not cast in stone; it’s not something that is stable; it’s constantly reshaping itself. So we have a multitude of unconscious processes generating this coherence of consciousness, the “I” experience, and at the same time our memories are very selective and ultimately corruptible: we tend to remember things which fit with our general characterization of what our self is, and we tend to ignore all the information that is inconsistent. We have all these attribution biases; we have cognitive dissonance. Psychology keeps telling us that we have all these unconscious mechanisms that reframe information to fit a coherent story, so both the “I” and the “me”, to all intents and purposes, are generated narratives.

The illusions I talk about often come down to this sense that there is an integrated individual with a veridical notion of the past. And there’s nothing at the center: we are, I would argue, an emergent property of the multitude of these processes that generate us.

I use the word illusion as opposed to delusion. Delusion implies mental illness, to some extent, whereas we’re quite happy to accept that we experience illusions; for me the word illusion really does mean an experience that is not what it seems. I’m not denying that there is an experience. We all have this experience, and what’s more, you can’t escape it easily. I think it’s more acceptable to call it an illusion, whereas there’s a derogatory flavour to calling something a delusion. I suspect there’s probably a technical difference that has to do with mental illness, but no, I think we all, perfectly normally, experience this illusion.

Oliver Sacks has famously written about various case studies of patients who seem so bizarre: people who have various forms of perceptual anomalies, who mistake their wife for a hat, or patients who can’t help but copy everything they see. Because the self is so core to our normal behavior, I think that if clinicians were familiar with the idea that the self is a constructive process, then many of these instances would make a lot more sense.

Neuroethics

In fact, it’s not only in clinical practice; I think it matters in a lot of areas. I think neuroethics is a very interesting field. I’ve got another colleague, David Eagleman, who’s very interested in these ideas: culpability, responsibility. We premise our legal systems on the notion that there is an individual who is to be held accountable. Now, I’m not suggesting that we abandon that, and I’m not sure what you would put in its place, but I think we can all recognize that there are certain situations where we find it very difficult to attribute blame to someone. For example, famously, Charles Whitman, the Texan sniper: when they did the autopsy, they discovered a very sizeable tumor in a region of the brain which could have very much influenced his ability to control his rage. I’m not suggesting every mass murderer has an inoperable tumor in their brain, but with our increasing knowledge of how the brain operates, and our ability to understand it, it’s conceivable there will be more situations where lawyers will look to put the blame on some biological abnormality.

Where is the line to be drawn? I think that’s a very tough one to deal with. It’s a problem that’s not going to go away. It’s something that we’re going to continually face as we start to learn more about the genetics of aggression.

There’s a lot of interest in this thing called the warrior gene. To what extent does this gene predispose you to violence? Or do you need the interaction between the gene and an abusive childhood in order to get this kind of profile? So it’s not just clinicians; it’s just about every realm of human activity where you posit the existence of a self, of individuals, and of responsibility. Rethinking that will reframe the way you think about things, starting with the way we heap blame and praise: the flip side of blaming people is that we praise individuals, when it could be, in a sense, a multitude of factors that have led them to be successful. It’s a pervasive notion. Whether or not we would actually change the way we do anything, I’m not so sure, because it would be really hard to live our lives dealing with non-individuals, trying to deal with the multitude of factors and the history that everyone brings to the table. There’s a good reason why we have this experience of the self: it’s a very succinct and economical way of interacting with each other. We deal with individuals. We fall in love with individuals, not with multitudes of past experiences and hidden agendas; we just pick them out. (…)

The objects are part of the extended sense of self

I keep tying this back to my issues about why certain objects are overvalued, and I happen to believe, like James again, that objects are part of the extended sense of self. We surround ourselves with objects. We place a lot of value on objects that we think are representative of our self.  (…)

We’re the only species on this planet that invests so much time and value in its objects, and this is something that has been with us for a very, very long time.

Think of some of the early artifacts. The difficulty of making them, the time invested in them, means that this goes back to a very early point in our civilization, or before civilization; I think the earliest pieces of artwork are probably about 90,000 years old. There are certainly older things that are tools, but artwork goes back about 90,000 years. So it’s been with us a long time. And yes, some of them are obviously sacred objects, objects of power or religious purpose, and so forth. But outside of that, there’s still this sense of having materials or things that we value, and that intrigues me in so many ways. And I don’t think it’s necessarily universal either. It’s widespread, but the endowment effect, for example, is not found everywhere. There’s some intriguing work coming out of Africa.

The endowment effect is this rather intriguing idea that we will spontaneously overvalue an object as soon as we believe it’s in our possession. We don’t actually have to have it physically; just bidding on something, as soon as you make your connection to an object, you value it more. You’ll actually remember more about it: you’ll remember objects that you think are in your possession better than ones assigned to someone else. The object acquires a whole sense of attribution and value, which is one of the reasons why people never get the asking price for the things they’re trying to sell; they always think their objects are worth more than other people are willing to pay for them.

The first experimental demonstration, by Richard Thaler and Danny Kahneman in the early days of behavioral economics, showed that if you give people, students, coffee cups and then ask them to sell them, they always ask for more than someone is willing to pay. It turns out it’s not just coffee cups; it’s wine, it’s chocolate, it’s anything, basically. There’s been quite a bit of work done on the endowment effect now. As I say, it’s been looked at in different species, and at the brain mechanisms: having to sell something at a lower price is seen as quite painful, like loss aversion; it triggers the same pain centers as when you think you’re going to lose out on a deal.
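
[Editor’s sketch: a minimal illustration, in code, of the willingness-to-accept versus willingness-to-pay comparison described above. The numbers are hypothetical, not Thaler and Kahneman’s data; only the structure of the comparison is the point.]

```python
# A minimal sketch of the endowment-effect comparison: sellers' asking prices
# (willingness to accept) versus buyers' offers (willingness to pay) for the
# same item. The dollar amounts below are purely illustrative placeholders.
from statistics import median

willingness_to_accept = [7.00, 6.50, 5.25, 7.75, 6.00]  # hypothetical owners asked to sell
willingness_to_pay    = [3.00, 2.75, 3.50, 2.25, 3.25]  # hypothetical non-owners asked to buy

wta, wtp = median(willingness_to_accept), median(willingness_to_pay)
print(f"median WTA = {wta:.2f}, median WTP = {wtp:.2f}, ratio = {wta / wtp:.1f}")
# An endowment effect shows up as a WTA/WTP ratio well above 1; the classic
# mug studies are often summarized as owners asking roughly twice what buyers offer.
```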

What is it about objects that gives us this sense of self-evaluation? Well, again, William James commented on the way that we use objects to extend our self. Russell Belk, a marketing psychologist, has also talked about the extended self in terms of objects. As I say, this is something marketers know: they create certain quality brands that are perceived to signal to others how good your social status is.

It’s something in us, but it may not be universal, because there are tribes, according to some recent reports on nomadic tribes in central Africa, who don’t seem to have this sense of ownership. It might be more a reflection of the fact that a lot of this work has been done in the West, where we’re very individualistic, and of course individualism encourages endowment thinking and certainly supports the endowment effect and the materialism that we see. But this is an area I’d like to do more work on, because we have not found any evidence of the endowment effect in children below five or six years of age. I’m interested: is this something that just emerges spontaneously? I suspect not. I suspect this is something that culture is definitely shaping. That’s my hunch, so that’s an empirical question I need to pick apart.

The irrational superstitious behaviors

Another line of research I’ve been working on in the past five years … this was a little bit like putting the cart before the horse, so I put forward an idea, it wasn’t entirely original. It was a combination of ideas of others, most notably Pascal Boyer. Paul Bloom, to some extent, had been thinking something similar. A bunch of us were interested in why religion was around. I didn’t want to specifically focus on religion. I wanted to get to the more general point about belief because it was my hunch that even a lot of atheists or self-stated atheists or agnostics, still nevertheless entertained beliefs which were pretty irrational. I wasn’t meaning irrational in a kind of behavioral economics type of way. I meant irrational in that there were these implicit views that would violate the natural laws as we thought about them. Violations of the natural laws I see as being supernatural. That’s what makes them supernatural. I felt that this was an area worth looking at. They’d been looked at 50, 60 years ago very much in the behaviorist association tradition.

B.F. Skinner famously wrote a paper on the superstitious behavior of pigeons, and he argued that if you simply set up a reinforcement schedule at random intervals, pigeons will adopt particular patterns of behavior that they seem to treat as somehow related to the reward, and in that way you could shape irrational superstitious behaviors. Now that work has turned out to be a bit dubious, and I’m not sure it has stood the test of time. But in terms of people’s rituals and routines, it’s quite clear, and I know them in myself: there are these familiar things that we do, and we get a little bit irritated if we don’t get to do them. So most of us do entertain some degree of superstitious behavior.

At the time there was a lot of interest in religion and a lot of the hoo-ha about The God Delusion, and I felt that maybe we just need to redress this idea that it’s all to do with indoctrination, because I couldn’t believe the whole edifice of this kind of belief system was purely indoctrination. I’m not saying there’s not indoctrination, and clearly, religions are culturally transmitted. You’re not born to be Jewish or born to be Christian. But what I think religions do is they capitalize on a lot of inclinations that children have. Then I entered into a series of work, and my particular interest was this idea of essentialism and sacred objects and moral contamination.

We took a lot of the work that Paul Rozin had done, talking about things like killers’ cardigans, and we started to see if there were any empirical measures of transfer. For example, would you find yourself wanting to wash your hands more? Would you find priming effects for words related to good and evil, depending on whether you had touched the object or not? For me there had to be this issue of physical contact. That struck me as the reason it wasn’t a pure association mechanism; it was actually something to do with a belief, a naïve belief that there is some biological entity through which moral contamination can somehow transfer.

We started to look not at children now but at adults, because doing this sort of work with children is very difficult and probably somewhat controversial. But the whole area of research is premised on this idea that there are intuitive ways of seeing the world. Sometimes this is referred to as System One and System Two, or automatic and controlled processing; it reappears in a variety of psychological contexts. I just think about it as these unconscious, rapid systems which are triggered automatically, and I think their origins are in childhood. Whilst you can educate people with a kind of slower System Two, if you like, you never eradicate the intuitive ways of seeing the world, because they were never taught in the first place. They’re always there. I suppose if you ask me for a theory I hold but haven’t yet proven, it’s this: I don’t think you ever throw away any belief system or any ideas that have been derived through these unconscious intuitive processes. You can supersede them, you can overwrite them, but they never go away, and they will reemerge in the right contexts. If you put people through stressful situations or overload them, you can see the reemergence of these kinds of ways of thinking, and the empirical evidence seems to support that. They’ve got wrinkles in their brains; they’re never going to go away. You can try to override them, but they’re always there, and they will reappear under the right circumstances, which is why you see the reemergence of a lot of irrational thinking under stress.

For example, teleological explanation, the idea that everything is made for a purpose or a function, is a natural way to see the world. This is Deb Kelemen’s work. You will find that people who consider themselves fairly rational and well educated will nevertheless default back to teleological explanations if you put them in a stressful, timed situation. So it’s a way of seeing the world that is never eradicated. I think that’s going to be a general principle, in the same way as with reflexes: a reflex is an unlearned behavioral response, and you’re born with a whole set of them. Many of them disappear, but they never entirely go away; they typically become reintegrated into more complex behaviors, but if someone goes into a coma, you can see the reflexes reemerging.

What we think is going on is that in the course of development, these very automatic behaviors become controlled by top-down processes from the cortex, all these higher order systems which are regulating and controlling and suppressing, trying to keep these things under wraps. But when the cortex is put out of action through a coma or head injury, then you can see many of these things reemerging again. I don’t see why there should be any point of departure from a motor system to a perceptual system, to a cognitive system, because they’re all basically patterns of neural firing in the brain, and so I don’t see why it can’t be the case that if concepts are derived through these processes, they could remain dormant and latent as well.

The hierarchy of representations in the brain

One of the things that has been fascinating me is the extent to which we can talk about a hierarchy of representations in the brain. Representations are literally re-presentations. That’s the language of the brain, the mode of thinking in the brain: representation. It’s more than likely, in fact it’s most likely, that there is already representation wired into the brain. If you think about the sensory systems, the array of the eye, for example, is already laid out as a topographical representation of the external world, to which it has not yet been exposed. What happens is that this general layout, these arrangements, become fine-tuned. We know from a lot of work that the arrangements of the sensory mechanisms do have a spatial organization, so that is not learned in any sense. But they can be changed through experience, which is why the early work of Hubel and Wiesel on the effects of abnormal environments showed that the general pattern could be distorted, even though the basic pattern was there from the start.

When you start to move beyond sensory into perceptual systems and then into cognitive systems, that’s when you get into theoretical arguments and the gloves come off. There are some people who argue that it has to be the case that there are certain primitives built into the conceptual systems. I’m talking about the work of, most notably, Elizabeth Spelke.  

There certainly seems to be a lot of perceptual ability in newborns in terms of constancies, noticing invariant aspects of the physical world. I don’t think I have a problem with any of that, but I suppose this is where the debates go. (…)

Shame in the East is something that is at least recognized as a major factor of identity

I’ve been to Japan a couple of times. I’m not an expert in the cultural variation of cognition, but clearly shame, or the avoidance of shame, is a major factor in motivation in Eastern cultures. I think it reflects the sense of self-worth and value in Eastern culture, which is very much a collective notion: they place a lot of emphasis on not letting the team down. I believe they even have a special word for an aspect or experience of shame that we don’t have. That doesn’t mean it’s a concept we can never entertain, but it does suggest that in the East this is something that is at least recognized as a major factor of identity.

Children don’t necessarily feel shame. I don’t think they’ve got a sense of self until well into their second year. They have the “I”; they have the notion of being, of having control. They will experience the will to move their arms, and I’m sure they make that connection very quickly, so they have this sense of self in the “I” sense. But I don’t think they’ve got personal identity, and that’s one of the reasons that very few of us have much memory of our earliest years; our episodic memories are very fragmented, sensory events. But from about two to three years on they start to get a sense of who they are. Knowing who you are means becoming integrated into your social environment, and part of becoming integrated into your social environment means acquiring a sense of shame. Below two or three years of age, I don’t think many children have a notion of shame. But from then on, as they have to become members of the social tribe, they have to be made aware of the consequences of being antisocial or of not doing what’s expected of them. I think that’s probably late in the acquisition.”

Bruce Hood, Canadian-born experimental psychologist who specialises in developmental cognitive neuroscience, Director of the Bristol Cognitive Development Centre, based at the University of Bristol, Essentialism, Edge, May 17, 2012. (Illustration source)

The Illusion of the Self

"For me, an illusion is a subjective experience that is not what it seems. Illusions are experiences in the mind, but they are not out there in nature. Rather, they are events generated by the brain. Most of us have an experience of a self. I certainly have one, and I do not doubt that others do as well – an autonomous individual with a coherent identity and sense of free will. But that experience is an illusion – it does not exist independently of the person having the experience, and it is certainly not what it seems. That’s not to say that the illusion is pointless. Experiencing a self illusion may have tangible functional benefits in the way we think and act, but that does not mean that it exists as an entity. (…)

For most of us, the sense of our self is as an integrated individual inhabiting a body. I think it is helpful to distinguish between the two ways of thinking about the self that William James talked about. There is conscious awareness of the present moment that he called the “I,” but there is also a self that reflects upon who we are in terms of our history, our current activities and our future plans. James called this aspect of the self “me,” which most of us would recognize as our personal identity—who we think we are. However, I think that both the “I” and the “me” are actually ever-changing narratives generated by our brain to provide a coherent framework to organize the output of all the factors that contribute to our thoughts and behaviors.

I think it helps to compare the experience of self to subjective contours – illusions such as the Kanizsa pattern where you see an invisible shape that is really defined entirely by the surrounding context. People understand that it is a trick of the mind but what they may not appreciate is that the brain is actually generating the neural activation as if the illusory shape was really there. In other words, the brain is hallucinating the experience. There are now many studies revealing that illusions generate brain activity as if they existed. They are not real but the brain treats them as if they were.

Now that line of reasoning could be applied to all perception, except that not all perception is an illusion. There are real shapes out there in the world and other physical regularities that generate reliable states in the minds of others. The reason that the status of reality cannot be applied to the self is that it does not exist independently of the brain that is having the experience. It may appear to have a consistency, regularity, and stability that make it seem real, but those properties alone do not make it so.

Similar ideas about the self can be found in Buddhism and the writings of Hume and Spinoza. The difference is that there is now good psychological and physiological evidence to support these ideas that I cover in the book. (…)

There are many cognitive scientists who would doubt that the experience of I is constructed from a multitude of unconscious mechanisms and processes. Me is similarly constructed, though we may be more aware of the events that have shaped it over our lifetime. But neither is cast in stone and both are open to all manner of reinterpretation. As artists, illusionists, movie makers, and more recently experimental psychologists have repeatedly shown, conscious experience is highly manipulatable and context dependent. Our memories are also largely abstracted reinterpretations of events – we all hold distorted memories of past experiences. (…)

Then there are the developmental processes that shape our brains from infancy onwards to create our identities, as well as the systematic biases that distort the content of our identity to form a consistent narrative. I believe much of that distortion and bias is socially relevant in terms of how we would like to be seen by others. We all think we would act and behave in a certain way, but the reality is that we are often mistaken. (…)

Q: What role do you think childhood plays in shaping the self?

Just about everything we value in life has something to do with other people. Much of that influence occurs early in our development, which is one reason why human childhoods are so prolonged in comparison to other species. We invest so much effort and time into our children to pass on as much knowledge and experience as possible. It is worth noting that other species that have long periods of rearing also tend to be more social and intelligent in terms of flexible, adaptive behaviors. Babies are born social from the start but they develop their sense of self throughout childhood as they move to become independent adults that eventually reproduce. I would contend that the self continues to develop throughout a lifetime, especially as our roles change to accommodate others. (…)

The role of social networking in the way we portray our self

There are some interesting phenomena emerging. There is evidence of homophily – the grouping together of individuals who share a common perspective, which is not too surprising. More interesting is evidence of polarization. Rather than opening up and exposing us to different perspectives, social networking on the Internet can foster more radicalization as we seek out others who share our positions. The more others validate our opinions, the more extreme we become. I don’t think we need to be fearful, and I am less concerned than the prophets of doom who predict the downfall of human civilization, but I believe it is true that the way we create the narrative of the self is changing.

Q: If the self is an illusion, what is your position on free will?

Free will is certainly a major component of the self illusion, but it is not synonymous. Both are illusions, but the self illusion extends beyond the issues of choice and culpability to other realms of human experience. From what I understand, I think you and I share the same basic position about the logical impossibility of free will. I also think that compatibilism (that determinism and free will can co-exist) is incoherent. We certainly have more choices today to do things that are not in accord with our biology, and it may be true that we should talk about free will in a meaningful way, as Dennett has argued, but that seems irrelevant to the central problem of positing an entity that can make choices independently of the multitude of factors that control a decision. To me, the problem of free will is a logical impasse – we cannot choose the factors that ultimately influence what we do and think. That does not mean that we throw away the social, moral, and legal rulebooks, but we need to be vigilant about the way our attitudes about individuals will be challenged as we come to understand the factors (both material and psychological) that control our behaviors when it comes to attributing praise and blame. I believe this is somewhat akin to your position. (…)

The self illusion explains so many aspects of human behavior as well as our attitudes toward others. When we judge others, we consider them responsible for their actions. But was Mary Bale, the bank worker from Coventry who was caught on video dropping a cat into a garbage can, being true to her self? Was Mel Gibson, during his drunken anti-Semitic rant, being himself or someone else? What motivated Congressman Weiner to text naked pictures of himself to women he did not know? In the book, I consider some of the extremes of human behavior, from mass murderers with brain tumors that may have made them kill to rising politicians who self-destruct. By rejecting the notion of a core self and considering how we are a multitude of competing urges and impulses, I think it is easier to understand why we suddenly go off the rails. It explains why we act, often unconsciously, in a way that is inconsistent with our self image – or the image of our self as we believe others see us.

That said, the self illusion is probably an inescapable experience we need for interacting with others and the world, and indeed we cannot readily abandon or ignore its influence, but we should be skeptical that each of us is the coherent, integrated entity we assume we are.

Bruce Hood, Canadian-born experimental psychologist who specialises in developmental cognitive neuroscience, Director of the Bristol Cognitive Development Centre, based at the University of Bristol, interviewed by Sam Harris, The Illusion of the Self, Sam Harris blog, May 22, 2012.

See also:

Existence: What is the self?, Lapidarium notes
Paul King on what is the best explanation for identity
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Professor George Lakoff: Reason is 98% Subconscious Metaphor in Frames & Cultural Narratives
Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking

Jun
3rd
Fri
permalink

Why people believe in strange things

                    image

"Science is founded on the conviction that experience, effort, and reason are valid; magic on the belief that hope cannot fail nor desire deceive." Bronislaw Malinowski, Magic, Science, and Religion, 1948

“Aristotle maintained that women have fewer teeth than men; although he was twice married, it never occurred to him to verify this statement by examining his wives’ mouths.” — Bertrand Russell, British philosopher, logician, mathematician, historian, and social critic (1872-1970), The Impact of Science on Society, 1952

"According to a 2009 Harris Poll of 2,303 adult Americans, when people are asked to “Please indicate for each one if you believe in it, or not,” the following results were revealing:

 82% believe in God
 76% believe in miracles
 75% believe in Heaven
 73% believe that Jesus is God or the Son of God
 72% believe in angels
 71% believe in survival of the soul after death
 70% believe in the resurrection of Jesus Christ
 61% believe in hell
 61% believe in the virgin birth (of Jesus)
 60% believe in the devil
 45% believe in Darwin’s Theory of Evolution
 42% believe in ghosts
 40% believe in creationism
 32% believe in UFOs
 26% believe in astrology
 23% believe in witches
 20% believe in reincarnation

More people believe in angels and the devil than believe in the theory of evolution.”
— GALLUP, Paranormal Beliefs Come (Super) Naturally to Some

See also:
Evolution, Creationism, Intelligent Design (Gallup statistics)
Evolution, the Muslim world & religious beliefs (statistics), Discovery Magazine, 2009

"Belief in pseudoscience, including astrology, extrasensory perception (ESP), and alien abductions, is relatively widespread and growing. For example, in response to the 2001 NSF survey, a sizable minority (41 percent) of the public said that astrology was at least somewhat scientific, and a solid majority (60 percent) agreed with the statement “some people possess psychic powers or ESP.” Gallup polls show substantial gains in almost every category of pseudoscience during the past decade. Such beliefs may sometimes be fueled by the media’s miscommunication of science and the scientific process."

— National Science Foundation. 2002. Science Indicators Biennial Report. The section on pseudoscience, “Science Fiction and Pseudoscience,” is in Chapter 7

"70% of Americans still do not understand the scientific process, defined in the NSF study as grasping probability, the experimental method, and hypothesis testing. (…)

Belief change comes from a combination of personal psychological readiness and a deeper social and cultural shift in the underlying zeitgeist of the times, which is affected in part by education, but is more the product of larger and harder-to-define political, economic, religious, and social changes.”

Michael Shermer, The Believing Brain, Times Books, 2011

Michael Shermer: The Believing Brain

"In The Believing Brain, Michael Shermer argues that "belief-dependent realism" makes it hard for any of us to have an objective view of the world (…)

Philosophers of science have long argued that our theories, or beliefs, are the lenses through which we see the world, making it difficult for us to access an objective reality.

So where do our beliefs come from? In The Believing Brain Shermer argues that they are derived from “patternicity”, our propensity to see patterns in noise, real or imagined; and “agenticity”, our tendency to attribute a mind and intentions to that pattern. These evolved skills - which saved our ancestors who assumed, say, a rustling in the bushes was a predator intending to eat them - are the same attributes that lead us to believe in ghosts, conspiracies and gods.

In fact, neuroimaging studies have shown that, at the level of the brain, belief in a virgin birth or a UFO is no different than belief that two plus two equals four or that Barack Obama is president of the US. “We can no more eliminate superstitious learning than we can eliminate all learning,” writes Shermer. "People believe weird things because of our evolved need to believe non-weird things." (…)

As for our quest for objective reality, Shermer argues that science is our greatest hope. By requiring replicable data and peer review, science, he says, is the only process of knowledge-gathering that can go beyond our individual lenses of belief.”

— Amanda Gefter writing about Michael Shermer, The prison of our beliefs and how to escape it, NewScientist, 1 June 2011.

Children are born with the ability to perceive cause-effect relations. Our brains are natural machines for piecing together events that may be related and for solving problems that require our attention. We can envision an ancient hominid from Africa chipping and grinding and shaping a rock into a sharp tool for carving up a large mammalian carcass. Or perhaps we can imagine the first individual who discovered that knocking flint would create a spark that would light a fire. The wheel, the lever, the bow and arrow, the plow—inventions intended to allow us to shape our environment rather than be shaped by it—started us down a path that led to our modern scientific and technological world.

On the most basic level, we must think to remain alive. To think is the most essential human characteristic. Over three centuries ago, the French mathematician and philosopher Rene Descartes, after one of the most thorough and skeptical purges in intellectual history, concluded that he knew one thing for certain: "Cogito ergo sum—I think therefore I am." But to be human is to think. To reverse Descartes, "Sum ergo cogito—I am therefore I think." (…)

Michael Shermer, American science writer, historian of science, Why People Believe Weird Things, Henry Holt and Company, New York, 2002, p. 23. 

Michael Shermer: Why people believe strange things | TED



Why do people see the Virgin Mary on cheese sandwiches or hear demonic lyrics in “Stairway to Heaven”? Using video, images and music, Michael Shermer explores these and other phenomena, including UFOs and alien sightings. He offers cognitive context: In the absence of sound science, incomplete information can combine with the power of suggestion (helping us hear those Satanic lyrics in Led Zeppelin). In fact, he says, humans tend to convince ourselves to believe: We overvalue the ‘hits’.”

Michael Shermer, American science writer, historian of science, Why people believe strange things, TED.com

Michael Shermer: The pattern behind self-deception | TED

In this video Michael Shermer says the human tendency to believe strange things — from alien abductions to dowsing rods — boils down to two of the brain’s most basic, hard-wired survival skills. He explains what they are, and how they get us into trouble. Michael Shermer debunks myths, superstitions and urban legends, and explains why we believe them. (TED.com, Feb 2010)

Why do we believe in God? We are religious because we are paranoid | Psychology Today

Error Management Theory suggests that, in your inference, you can make a “Type I” error of false positive or “Type II” error of false negative, and these two types of error carry vastly different consequences and costs. The cost of a false-positive error is that you become paranoid. You are always looking around and behind your back for predators and enemies that don’t exist. The cost of a false-negative error is that you are dead, being killed by a predator or an enemy when you least expect them. Obviously, it’s better to be paranoid than dead, so evolution should have designed a mind that overinfers personal, animate, and intentional forces even when none exist.

image

Different theorists call this innate human tendency to commit false-positive errors rather than false-negative errors (and as a consequence be a bit paranoid) “animistic bias” or “the agency-detector mechanism.” These theorists argue that the evolutionary origins of religious beliefs in supernatural forces may have come from such an innate cognitive bias to commit false-positive errors rather than false-negative errors, and thus overinfer personal, intentional, and animate forces behind otherwise perfectly natural phenomena. (…)

In this view, religiosity (the human capacity for belief in supernatural beings) is not an evolved tendency per se; after all, religion in itself is not adaptive. It is instead a byproduct of animistic bias or the agency-detector mechanism, the tendency to be paranoid, which is adaptive because it can save your life. Humans did not evolve to be religious; they evolved to be paranoid. And humans are religious because they are paranoid.”

Satoshi Kanazawa, evolutionary psychologist at the London School of Economics, Why do we believe in God?, Psychology Today, March 28, 2008. (More). ☞ See also: Martie G. Haselton and David M. Buss, Error Management Theory: A New Perspective on Biases in Cross-Sex Mind Reading, University of Texas at Austin (pdf)
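
The cost asymmetry in the Error Management argument above can be written out as a simple expected-cost comparison. The sketch below is not taken from Kanazawa or from Haselton and Buss; it is a minimal illustration with hypothetical cost numbers, showing why a mind that minimizes expected cost will “overinfer” agents once a false negative is far costlier than a false positive.

```python
# A minimal sketch of the expected-cost asymmetry behind Error Management
# Theory, with hypothetical costs in arbitrary "fitness" units. Illustration
# only; not a model from the papers cited above.
COST_FALSE_POSITIVE = 1.0      # startling at a rustle that was only the wind
COST_FALSE_NEGATIVE = 1000.0   # ignoring a rustle that was in fact a predator

def should_assume_agent(p_agent: float) -> bool:
    """Assume an intentional agent is present whenever that minimizes expected cost."""
    expected_cost_if_ignored = p_agent * COST_FALSE_NEGATIVE
    expected_cost_if_assumed = (1 - p_agent) * COST_FALSE_POSITIVE
    return expected_cost_if_assumed < expected_cost_if_ignored

# Break-even probability: assume an agent whenever p exceeds C_fp / (C_fp + C_fn).
threshold = COST_FALSE_POSITIVE / (COST_FALSE_POSITIVE + COST_FALSE_NEGATIVE)
print(f"assume an agent whenever p > {threshold:.4f}")  # about 0.0010 with these costs
print(should_assume_agent(0.01))  # True: even a 1% chance of a predator warrants "paranoia"
```

With these made-up costs the break-even probability is about one in a thousand, which is the decision-theoretic version of “better to be paranoid than dead”: the cheap error gets committed constantly so that the catastrophic one almost never is.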

A Cross-National Test of the Uncertainty Hypothesis of Religious Belief

"According to the uncertainty hypothesis, religion helps people cope psychologically with dangerous or unpredictable situations. Conversely, with greater control over the external environment due to economic development and technological advances, religious belief is predicted to decline (the existential security hypothesis). The author predicts that religious belief would decline in economically developed countries where there is greater existential security, including income security (income equality and redistribution via welfare states) and improved health.

These predictions are tested in regression analyses of 137 countries that partialed out the effects of Communism and Islamic religion, both of which affect the incidence of reported nonbelief. Findings show that disbelief in God increased with economic development (measured by lower agricultural employment and third-level enrollment). Findings further show that disbelief also increased with income security (low Gini coefficient, high personal taxation tapping the welfare state) and with health security (low pathogen prevalence). Results show that religious belief declines as existential security increases, consistent with the uncertainty hypothesis.”

Nigel Barber, A Cross-National Test of the Uncertainty Hypothesis of Religious Belief, 2011
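
As a rough illustration of what “regression analyses of 137 countries that partialed out the effects of Communism and Islamic religion” means in practice, here is a sketch, not Barber’s actual model or data: an ordinary least squares regression with predictors of the kind he lists, run on randomly generated placeholder rows so the example executes.

```python
# Sketch of a cross-national OLS in the spirit of the study described above.
# Variable names mirror the predictors listed in the abstract; the rows are
# synthetic stand-ins, not real country data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 137  # the study covers 137 countries; these rows are placeholders
df = pd.DataFrame({
    "disbelief_pct": rng.uniform(0, 65, n),
    "agricultural_employment": rng.uniform(1, 80, n),
    "third_level_enrollment": rng.uniform(2, 90, n),
    "gini": rng.uniform(24, 60, n),
    "personal_taxation": rng.uniform(5, 50, n),
    "pathogen_prevalence": rng.uniform(0, 1, n),
    "communist_history": rng.integers(0, 2, n),
    "islamic_majority": rng.integers(0, 2, n),
})

# Communism and Islam enter as controls, i.e. they are "partialed out".
model = smf.ols(
    "disbelief_pct ~ agricultural_employment + third_level_enrollment"
    " + gini + personal_taxation + pathogen_prevalence"
    " + communist_history + islamic_majority",
    data=df,
).fit()
print(model.summary())
```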

"As to the distribution of atheism in the world, a clear pattern can be discerned. In sub-Saharan Africa there is almost no atheism (2%). Belief in God declines in more developed countries and atheism is concentrated in Europe in countries such as Sweden (64% nonbelievers), Denmark (48%), France (44%) and Germany (42%). In contrast, the incidence of atheism in most sub-Saharan countries is below 1%. (…)

Anthropologist James Frazer proposed that scientific prediction and control of nature supplants religion as a means of controlling uncertainty in our lives. This hunch is supported by data showing that the more educated countries have higher levels of nonbelief, and there are strong correlations between atheism and intelligence. (…)

It seems that people turn to religion as a salve for the difficulties and uncertainties of their lives. In social democracies, there is less fear and uncertainty about the future because social welfare programs provide a safety net and better health care means that fewer people can expect to die young. People who are less vulnerable to the hostile forces of nature feel more in control of their lives and less in need of religion. Hence my finding of belief in God being higher in countries with a heavy load of infectious diseases. (…)”

Nigel Barber, Ph.D. in Biopsychology from Hunter College, CUNY, who has taught psychology at Bemidji State University and Birmingham Southern College, Why Atheism Will Replace Religion. With economic security, people abandon religion, Psychology Today, July 14, 2011

Why We Don’t Believe In Science

   image

"Gallup announced the results of their latest survey on Americans and evolution. The numbers were a stark blow to high-school science teachers everywhere: forty-six per cent of adults said they believed that “God created humans in their present form within the last 10,000 years.” Only fifteen per cent agreed with the statement that humans had evolved without the guidance of a divine power.

What’s most remarkable about these numbers is their stability: these percentages have remained virtually unchanged since Gallup began asking the question, thirty years ago. (…)

A new study in Cognition, led by Andrew Shtulman at Occidental College, helps explain the stubbornness of our ignorance. As Shtulman notes, people are not blank slates, eager to assimilate the latest experiments into their world view. Rather, we come equipped with all sorts of naïve intuitions about the world, many of which are untrue. For instance, people naturally believe that heat is a kind of substance, and that the sun revolves around the earth. And then there’s the irony of evolution: our views about our own development don’t seem to be evolving.
This means that science education is not simply a matter of learning new theories. Rather, it also requires that students unlearn their instincts, shedding false beliefs the way a snake sheds its old skin. (…)

As expected, it took students much longer to assess the veracity of true scientific statements that cut against our instincts. In every scientific category, from evolution to astronomy to thermodynamics, students paused before agreeing that the earth revolves around the sun, or that pressure produces heat, or that air is composed of matter. Although we know these things are true, we have to push back against our instincts, which leads to a measurable delay.

What’s surprising about these results is that even after we internalize a scientific concept—the vast majority of adults now acknowledge the Copernican truth that the earth is not the center of the universe—that primal belief lingers in the mind. We never fully unlearn our mistaken intuitions about the world. We just learn to ignore them.

Shtulman and colleagues summarize their findings:

When students learn scientific theories that conflict with earlier, naïve theories, what happens to the earlier theories? Our findings suggest that naïve theories are suppressed by scientific theories but not supplanted by them.
(…)

Until we understand why some people believe in science we will never understand why most people don’t.

In a 2003 study, Kevin Dunbar, a psychologist at the University of Maryland, showed undergraduates a few short videos of two different-sized balls falling. The first clip showed the two balls falling at the same rate. The second clip showed the larger ball falling at a faster rate. The footage was a reconstruction of the famous (and probably apocryphal) experiment performed by Galileo, in which he dropped cannonballs of different sizes from the Tower of Pisa. Galileo’s metal balls all landed at the exact same time—a refutation of Aristotle, who claimed that heavier objects fell faster.

While the students were watching the footage, Dunbar asked them to select the more accurate representation of gravity. Not surprisingly, undergraduates without a physics background disagreed with Galileo. They found the two balls falling at the same rate to be deeply unrealistic. (Intuitively, we’re all Aristotelians.)

Furthermore, when Dunbar monitored the subjects in an fMRI machine, he found that showing non-physics majors the correct video triggered a particular pattern of brain activation: there was a squirt of blood to the anterior cingulate cortex, a collar of tissue located in the center of the brain. The A.C.C. is typically associated with the perception of errors and contradictions—neuroscientists often refer to it as part of the “Oh shit!” circuit—so it makes sense that it would be turned on when we watch a video of something that seems wrong, even if it’s right.

This data isn’t shocking; we already know that most undergrads lack a basic understanding of science. But Dunbar also conducted the experiment with physics majors. As expected, their education enabled them to identify the error; they knew Galileo’s version was correct.

But it turned out that something interesting was happening inside their brains that allowed them to hold this belief. When they saw the scientifically correct video, blood flow increased to a part of the brain called the dorsolateral prefrontal cortex, or D.L.P.F.C. The D.L.P.F.C. is located just behind the forehead and is one of the last brain areas to develop in young adults. It plays a crucial role in suppressing so-called unwanted representations, getting rid of those thoughts that aren’t helpful or useful. If you don’t want to think about the ice cream in the freezer, or need to focus on some tedious task, your D.L.P.F.C. is probably hard at work.

According to Dunbar, the reason the physics majors had to recruit the D.L.P.F.C. is because they were busy suppressing their intuitions, resisting the allure of Aristotle’s error. It would be so much more convenient if the laws of physics lined up with our naïve beliefs—or if evolution was wrong and living things didn’t evolve through random mutation. But reality is not a mirror; science is full of awkward facts. And this is why believing in the right version of things takes work.

Of course, that extra mental labor isn’t always pleasant. (There’s a reason they call it “cognitive dissonance.”) It took a few hundred years for the Copernican revolution to go mainstream. At the present rate, the Darwinian revolution, at least in America, will take just as long.”

Jonah Lehrer, Why We Don’t Believe In Science, The New Yorker, June 7, 2012. (Illustration courtesy of Hulton Archive/Getty Images.)

See also: ☞ A. Shtulman, J. Valcarcel, Scientific knowledge suppresses but does not supplant earlier intuitions (pdf), Department of Psychology, Occidental College, 2012.

[This note will be gradually expanded…]

See also:

☞ D. Kapogiannis, A. K. Barbey, M. Su, G. Zambon, F. Krueger, J. Grafman, Cognitive and neural foundations of religious belief, Washington University School of Medicine, 2009 
☞ D. Kapogiannis, A. K. Barbey, M. Su, F. Krueger, J. Grafman, Neuroanatomical Variability of Religiosity, National Institutes of Health, National Institute of Neurological Disorders and Stroke (NINDS), USA, Department of Psychology, Georgetown University, Washington, D. C., 2009
Is This Your Brain On God? (visualization), NPR
Andy Thomson, Why We Believe in Gods, Atlanta, Georgia 2009 (video lecture)
☞ Dr. Andy Thomson, "Why We Believe in God(s)", The Triangle Freethought Society, May 16th, 2011 (video lecture)
Jared Diamond, The Evolution of Religions, 2009 (video lecture)
Dan Dennett, A Darwinian Perspective on Religions: Past, Present and Future, (video lecture)
☞ Jesse Bering, We are programmed to believe in a god, Guardian, 4 January 2011 
☞ Michael Brooks, Born believers: How your brain creates God , New Scientist, 4 Feb 2009
Dan Ariely, We’re All Predictably Irrational, FORA.tv
The Believing Brain: Why Science Is the Only Way Out of Belief-Dependent Realism, Scientific American, July 5, 2011
'The Cognition, Religion and Theology Project' - Summary led by Dr Justin Barrett, from the Centre for Anthropology and Mind at Oxford University, trying to understand the underpinnings of religious thought and practice through application of the cognitive sciences, 2011
☞ Robert Bellah, The Roots of Religion. Where did religion come from? Robert Bellah ponders its evolutionary origins, Big Questions, Oct 3, 2011
☞ S. Pinker, Scott Atran and others on Where God and Science Meet: How Brain and Evolutionary Studies Alter Our Understanding of Religion, edited by Patrick McNamara (pdf)
Biologist E.O. Wilson on Why Humans, Like Ants, Need a Tribe, Newsweek Magazine, Apr 2, 2012
Pareidolia — a psychological phenomenon involving a vague and random stimulus (often an image or sound) being perceived as significant. Common examples include seeing images of animals or faces in clouds, the man in the moon or the Moon rabbit, and hearing hidden messages on records played in reverse. The word comes from the Greek para- – “beside”, “with”, or “alongside”—meaning, in this context, something faulty or wrong (as in paraphasia, disordered speech) and eidōlon – “image”; the diminutive of eidos – “image”, “form”, “shape”. Pareidolia is a type of apophenia. (Wiki)
Visions For All. People who report vivid religious experiences may hold clues to nonpsychotic hallucinations, Science News, Apr 7, 2012.
Political science: why rejecting expertise has become a campaign strategy, Lapidarium notes
The whys of religion vs. evolution. Evolutionary biologist Jerry Coyne examines why Americans often choose faith over scientific findings, Harvard University Gazette, May 8, 2012.
In U.S., 46% Hold Creationist View of Human Origins, GALLUP, June 1, 2012.
☞ Daisy Grewal, How Critical Thinkers Lose Their Faith in God, Scientific American, June 1, 2012.
Religion tag on Lapidarium

Mar
28th
Mon
permalink

'We' vs 'Others': Russell Jacoby on why we should fear our neighbors more than strangers

         
                                         Titian, “Cain and Abel”, Venice

"Orientalism was ultimately a political vision of reality whose structure promoted the difference between the familiar (Europe, the West, ‘us’) and the strange (the Orient, the East, ‘them’)"Edward Said, Orientalism (1978)

"Academics are thrilled with the "other" and the vagaries of how we represent the foreign. By profession, anthropologists are visitors from afar. We are outsiders, writes an anthropologist, "seeking to understand unfamiliar cultures." Humanists and social theorists also have fallen in love with the "other." A recent paper by the literary critic Toril Moi is titled "Literature, Philosophy, and the Question of the Other." In a recent issue of Signs, a philosopher writes about “Occidental Dreams: Orientalism and History in ‘The Second Sex.’”

The romance with the “other,” the Orient, and the stranger, however, diverts attention from something less sexy: the familiar. For those concerned with strife and violence in the world, like Said, the latter may, in fact, be more critical than the strange and the foreign. If the Lebanese Civil War, which lasted 15 years, can highlight something about how the West represents the East, it can also foreground a neglected truth: The most decisive antagonisms and misunderstandings take place within a community. The history of hatred and violence is, to a surprising degree, a history of brother against brother, not brother against stranger. From Cain and Abel to the religious wars of the 16th and 17th centuries and the civil wars of our own age, it is not so often strangers who elicit hatred, but neighbors.

This observation contradicts both common sense and the collective wisdom of teachers and preachers, who declaim that we fear—sometimes for good reason—the unknown and dangerous stranger. Citizens and scholars alike believe that enemies lurk in the street and beyond the street, where we confront a “clash of civilizations” with foreigners who challenge our way of life.

The truth is more unsettling. From assault to genocide, from assassination to massacre, violence usually emerges from inside the fold rather than outside it. (…)

We may obsess about strangers piloting airplanes into our buildings, but in the United States in any year, roughly five times the number of those killed in the World Trade Center are murdered on the streets or inside their own homes and offices. These regular losses remind us that most criminal violence takes place between people who know each other. Cautious citizens may push for better street lighting, but they are much more likely to be assaulted, even killed, in the light of the kitchen by someone familiar than in a parking garage by a stranger. Like, not unlike, prompts violence.

Civil wars are generally more savage, and bear more lasting consequences, than wars between countries. Many more people died in the American Civil War—at a time when the population was a tenth of what it is today—than in any other American conflict, and its long-term effects probably surpass those of the others. Major bloodlettings of the 20th century—hundreds of thousands to millions of deaths—occurred in civil wars such as the Russian Civil War, the Chinese Civil Wars of 1927-37 and 1945-49, and the Spanish Civil War. More Russian lives were lost in the Russian Civil War that followed World War I than in the Great War itself, for instance.

But who cares about the Russian Civil War? A thousand books and courses dwell on World War I, but few on the Russian Civil War that emerged from it. That war, with its fluid battle lines, uncertain alliances, and clouded beginning, seems too murky. The stew of hostilities is typical of civil wars, however. With some notable exceptions, modern civil wars resist the clear categories of interstate wars. The edges are blurred. Revenge often trumps ideology and politics.

Yet civil strife increasingly characterizes the contemporary world. “Most wars are now civil wars,” announces the first sentence of a World Bank publication. Not only are there more civil wars, but they last longer. The conflicts in southern Sudan have been going on for decades. Lengthy battles between states are rare nowadays. And when states do attack, the fighting generally doesn’t last long (for example, Israel’s monthlong incursion into Lebanon in 2006). The recent wars waged by the United States in Iraq and Afghanistan are notable exceptions.

We live in an era of ethnic, national, and religious fratricide. A new two-volume reference work on “the most severe civil wars since World War II” has 41 entries, from Afghanistan and Algeria to Yemen and Zimbabwe. Over the last 50 years, the number of casualties of intrastate conflicts is roughly five times that of interstate wars. The number of refugees from these conflicts similarly dwarfs those from traditional state-versus-state wars. “Cases such as Afghanistan, Somalia, and Lebanon testify to the economic devastation that civil wars can produce,” note two political scientists. By the indexes of deaths, numbers of refugees, and extent of destruction, they conclude that "civil war has been a far greater scourge than interstate war" in recent decades. In Iraq today—putting aside blame and cause—more Iraqis are killed by their countrymen than by the American military.

"Not surprisingly, there is no treatise on civil war on the order of Carl von Clausewitz's On War,” writes the historian Arno Mayer, “civil wars being essentially wild and savage.”

The iconic book by Carl von Clausewitz, the Prussian military thinker, evokes the spirit of Immanuel Kant, whose writings he studied. Subheadings such as “The Knowledge in War Is Very Simple, but Not, at the Same Time, Very Easy” suggest its philosophical structure. Clausewitz subordinated war to policy, which entailed a rational evaluation of goals and methods. He compared the state to an individual. “Policy” is “the product of its brain,” and war is an option. “No one starts a war—or rather, no one in his senses ought to do so—without first being clear in his mind what he intends to achieve by that war and how he intends to conduct it.” If civilized nations at war “do not put their prisoners to death” or “devastate cities,” he writes, it is because “intelligence plays a larger part in their methods of warfare … than the crude expressions of instinct.”

In civil wars, by contrast, prisoners are put to death and cities destroyed as a matter of course. The ancient Greeks had already characterized civil strife as more violent than traditional war. Plato distinguishes war against outsiders from what he calls factionalized struggles, that is, civil wars. He posits that Greeks practice war against foreigners (“barbarians”), a conflict marked by “enmity and hatred,” but not against one another. When Greeks fight Greeks, he believes, they should temper their violence in anticipation of reconciliation. “They will not, being Greeks, ravage Greek territory nor burn habitations,” nor “lay waste the soil,” nor treat all “men, women, and children” as their enemies. Such, at least, was his hope in the Republic, but the real world often contradicted it, as he knew. His proposition that Greeks should not ravage Greeks challenged the reality in which Greeks did exactly that.

Plato did not have to look further than Thucydides' account of the Peloponnesian War to find confirmation of the brutality of Greek-on-Greek strife. In a passage often commented on, Thucydides wrote of the seesaw battle in Corcyra (Corfu) in 427 BC, which prefigured the larger war. When the Athenians approached the island in force, the faction they supported seized the occasion to settle accounts with its adversaries. In Thucydides’ telling, this was a “savage” civil war of Corcyrean against Corcyrean. For the seven days the Athenians stayed in the harbor, Corcyreans “continued to massacre those of their own citizens” they considered enemies. “There was death in every shape and form,” writes Thucydides. “People went to every extreme and beyond it. There were fathers who killed their sons; men were dragged from the temples or butchered on the very altars.” Families turned on families. “Blood ties became more foreign than factional ones.” Loyalty to the faction overrode loyalty to family members, who became the enemy.

Nearly 2,500 years after Thucydides, the presiding judge at a United Nations trial invoked the Greek historian. The judge reflected on what had occurred in the former Yugoslavia. One Duško Tadić stood accused of the torture and murder of Muslims in his hometown in Bosnia-Herzegovina. His actions exemplified a war of ethnic cleansing fueled by resentment and hatred. “Some time ago, yet not far from where the events in this case happened,” something similar occurred, stated the judge in his 1999 opinion. He cited Thucydides’ description of the Corcyrean civil war as one of “savage and pitiless actions.” Then as today, the judge reminded us, men “were swept away into an internecine struggle” in which vengeance supplanted justice.

Today’s principal global conflicts are fratricidal struggles—regional, ethnic, and religious: Iraqi Sunni vs. Iraqi Shiite, Rwandan Tutsi vs. Rwandan Hutu, Bosnian Muslim vs. Balkan Christians, Sudanese southerners vs. Sudanese northerners, perhaps Libyan vs. Libyan. As a Rwandan minister declared about the genocide in which Hutus slaughtered Tutsis: “Your neighbors killed you.” A reporter in northeastern Congo wrote that in seven months of fighting there, several thousand people were killed and more than 100,000 driven from their homes. He commented, "Like ethnic conflicts around the globe, this is fundamentally a fight between brothers: The two tribes—the Hema and the Lendu—speak the same language, marry each other, and compete for the same remote and thickly populated land.”

Somalia is perhaps the signal example of this ubiquitous fratricidal strife. As a Somalian-American professor observed, Somalia can claim a “homogeneity rarely known elsewhere in Africa.” The Somalian people “share a common language (Somali), a religion (Islam), physical characteristics, and pastoral and agropastoral customs and traditions.” This has not tempered violence. On the contrary.

The proposition that violence derives from kith and kin overturns a core liberal belief that we assault and are assaulted by those who are strangers to us. If that were so, the solution would be at hand: Get to know the stranger. Talk with the stranger. Reach out. The cure for violence is better communication, perhaps better education. Study foreign cultures and peoples. Unfortunately, however, our brother, our neighbor, enrages us precisely because we understand him. Cain knew his brother—he “talked with Abel his brother”—and slew him afterward.

We don’t like this truth. We prefer to fear strangers. We like to believe that fundamental differences pit people against one another, that world hostilities are driven by antagonistic principles about how society should be constituted. To think that scale—economic deprivation, for instance—rather than substance divides the world seems to trivialize the stakes. We opt instead for a scenario of clashing civilizations, such as the hostility between Western and Islamic cultures. The notion of colliding worlds is more appealing than the opposite: conflicts hinging on small differences. A “clash” implies that fundamental principles about human rights and life are at risk.

Samuel Huntington took the phrase “clash of civilizations” from the Princeton University historian Bernard Lewis, who was referring to a threat from the Islamic world. “We are facing a mood and a movement far transcending the level of issues and policies,” Lewis wrote in 1990. “This is no less than a clash of civilizations” and a challenge to “our Judeo-Christian heritage.” For Huntington, “the underlying problem for the West is not Islamic fundamentalism. It is Islam, a different civilization.” (…)

Or consider the words of a Hindu nationalist who addressed the conflict with Indian Muslims. How is unity to come about, she asks? “The Hindu faces this way, the Muslim the other. The Hindu writes from left to right, the Muslim from right to left. The Hindu prays to the rising sun, the Muslim faces the setting sun when praying. If the Hindu eats with the right hand, the Muslim with the left. … The Hindu worships the cow, the Muslim attains paradise by eating beef. The Hindu keeps a mustache, the Muslim always shaves the upper lip.”

Yet the preachers, porte-paroles, and proselytizers may mislead; it is in their interest to do so. What divided the Protestants and Catholics in 16th-century France, the Germans and Jews in 20th-century Europe, and the Shia and Sunni today may be small, not large. But minor differences rankle more than large differences. Indeed, in today’s world, it may be not so much differences but their diminution that provokes antagonism. Here it can be useful to attend to the literary critic René Girard, who also bucks conventional wisdom by signaling the danger in similitude, not difference: “In human relationships, words like ‘sameness’ and ‘similarity’ evoke an image of harmony. If we have the same tastes and like the same things, surely we are bound to get along. But what will happen when we share the same desires?” For Girard, “a single principle” pervades religion and literature: “Order, peace, and fecundity depend on cultural distinctions; it is not these distinctions but the loss of them that gives birth to fierce rivalries and sets members of the same family or social group at one another’s throats.”

Likeness does not necessarily lead to harmony. It may elicit jealousy and anger. Inasmuch as identity rests on what makes an individual unique, similitude threatens the self. The mechanism also operates on social terrain. As cultural groups get absorbed into larger or stronger collectives, they become more anxious—and more prone to defend their dwindling identity. French Canadians—living as they do amid an ocean of English speakers—are more testy about their language than the French in France. Language, however, is just one feature of cultural identification.

Assimilation becomes a threat, not a promise. It spells homogenization, not diversity. The assimilated express bitterness as they register the loss of an identity they wish to retain. Their ambivalence transforms their anger into resentment. They desire what they reject and are consequently unhappy with themselves as well as their interlocutor. Resentment feeds protest and sometimes violence. Insofar as the extreme Islamists sense their world imitating the West, they respond with increased enmity. It is not so much the “other” as it is the absence of otherness that spurs anger. They fear losing themselves by mimicking the West. A Miss World beauty pageant in Nigeria spurred widespread riots by Muslims that left hundreds dead. This could be considered a violent rejection of imitation.

We hate the neighbor we are enjoined to love. Why? Why do small disparities between people provoke greater hatred than the large ones? Perhaps the work of Freud helps chart the underground sources of fratricidal violence. Freud introduced the phrase “the narcissism of minor differences” to describe this phenomenon. He noted that “it is precisely the little dissimilarities in persons who are otherwise alike that arouse feelings of strangeness and enmity between them.”

Freud first broached the “narcissism of minor differences” in “The Taboo of Virginity,” an essay in which he also took up the “dread of woman.” Is it possible that these two notions are linked? That the narcissism of minor differences, the instigator of enmity, arises from differences between the sexes and, more exactly, man’s fear of woman? What do men fear? “Perhaps,” Freud hazards, the dread is “founded on the difference of woman from man.” More precisely, “man fears that his strength will be taken from him by woman, dreads becoming infected with her femininity” and that he will show himself to be a “weakling.” Might this be a root of violence, man’s fear of being unmanned?

The sources of hatred and violence are many, not singular. There is room for the findings of biologists, sociobiologists, and other scientists. For too long, however, social and literary scholars have dwelled on the “other” and its representation. It is interesting, even uplifting, to talk about how we see and don’t see the stranger. It is less pleasant, however, to tackle the divisiveness and rancor of countrymen and kin. We still have not caught up to Montaigne, with his famous remarks about Brazilian cannibals. He reminded his 16th-century readers not only that the mutual slaughter of Huguenots and Catholics eclipsed the violence of New World denizens—it was enacted on the living, and not on the dead—but that its agents were “our fellow citizens and neighbors.”

Russell Jacoby, professor of history at the University of California, Los Angeles (UCLA). This essay is adapted from his book Bloodlust: On the Roots of Violence From Cain and Abel to the Present, and appeared as “Bloodlust: Why we should fear our neighbors more than strangers,” The Chronicle Review, March 27, 2011.

See also:

Roger Dale Petersen, Understanding Ethnic Violence: fear, hatred, and resentment in twentieth-century Eastern Europe, Cambridge University Press, 2002.
Stephen M. Walt on What Does Social Science Tell Us about Intervention in Libya
Scott Atran on Why War Is Never Really Rational
Steven Pinker on the History and decline of Violence
Violence tag on Lapidarium notes

Apr
17th
Sat
permalink
Why do we believe in God? We are religious because we are paranoid | Psychology Today
”(…) Error Management Theory suggests that, in your inference, you can make a “Type I” error of false positive or “Type II” error of false negative, and these two types of error carry vastly different consequences and costs. The cost of a false-positive error is that you become paranoid. You are always looking around and behind your back for predators and enemies that don’t exist. The cost of a false-negative error is that you are dead, being killed by a predator or an enemy when you least expect them. Obviously, it’s better to be paranoid than dead, so evolution should have designed a mind that overinfers personal, animate, and intentional forces even when none exist.
Different theorists call this innate human tendency to commit false-positive errors rather than false-negative errors (and as a consequence be a bit paranoid) “animistic bias” or “the agency-detector mechanism.” These theorists argue that the evolutionary origins of religious beliefs in supernatural forces may have come from such an innate cognitive bias to commit false-positive errors rather than false-negative errors, and thus overinfer personal, intentional, and animate forces behind otherwise perfectly natural phenomena. (…)
In this view, religiosity (the human capacity for belief in supernatural beings) is not an evolved tendency per se; after all, religion in itself is not adaptive. It is instead a byproduct of animistic bias or the agency-detector mechanism, the tendency to be paranoid, which is adaptive because it can save your life. Humans did not evolve to be religious; they evolved to be paranoid. And humans are religious because they are paranoid. (…)”
— Satoshi Kanazawa, Why do we believe in God?, Psychology Today, March 28, 2008. (More). See also: Martie G. Haselton and David M. Buss, Error Management Theory: A New Perspective on Biases in Cross-Sex Mind Reading, University of Texas at Austin (pdf)
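To make the cost asymmetry in the quoted argument concrete, here is a minimal sketch of the expected-cost comparison it relies on. The numbers, the policy probabilities, and the expected_cost helper are all illustrative assumptions, not anything from Kanazawa's article:

# Minimal Error Management Theory sketch with made-up, illustrative numbers.
# A rustle in the grass is caused by a predator with probability p_predator;
# otherwise it is only the wind. The observer either infers an agent
# (and takes costly precautions) or ignores the signal.

def expected_cost(p_predator, p_infer_agent, cost_false_positive, cost_false_negative):
    # False positive: no predator, but we infer one anyway (wasted vigilance).
    fp = (1 - p_predator) * p_infer_agent * cost_false_positive
    # False negative: a predator is there, but we fail to infer it (possibly fatal).
    fn = p_predator * (1 - p_infer_agent) * cost_false_negative
    return fp + fn

COST_FP = 1      # assumed cost of needless paranoia
COST_FN = 1000   # assumed cost of being caught unaware

skeptical = expected_cost(0.05, 0.1, COST_FP, COST_FN)   # rarely infers agency
paranoid  = expected_cost(0.05, 0.9, COST_FP, COST_FN)   # over-infers agency

print(f"skeptical policy: {skeptical:.2f}")   # ≈ 45.1
print(f"paranoid policy:  {paranoid:.2f}")    # ≈ 5.9

Under these assumed payoffs the over-inferring policy is far cheaper on average, which is the selection pressure the passage describes: minds biased toward Type I errors (seeing agents that are not there) would, on this account, outcompete minds prone to Type II errors.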
Michael Shermer: The pattern behind self-deception | TED.com



In this video Michael Shermer says the human tendency to believe strange things — from alien abductions to dowsing rods — boils down to two of the brain’s most basic, hard-wired survival skills. He explains what they are, and how they get us into trouble. Michael Shermer debunks myths, superstitions and urban legends, and explains why we believe them. (Source: TED.com, Feb 2010)
See also: ☞ Why people believe in strange things, Lapidarium resume 
