Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso





Aug 10th, Fri

God and the Ivory Tower. What we don’t understand about religion just might kill us

(Illustration: Medieval miniature painting of the Siege of Antioch (1490). The Crusades were a series of military campaigns fought mainly between Christian Europe and Muslims. Shown here is a battle scene from the First Crusade.)

"The era of world struggle between the great secular ideological -isms that began with the French Revolution and lasted through the Cold War (republicanism, anarchism, socialism, fascism, communism, liberalism) is passing on to a religious stage. Across the Middle East and North Africa, religious movements are gaining social and political ground, with election victories by avowedly Islamic parties in Turkey, Palestine, Egypt, Tunisia, and Morocco. As Israel’s National Security Council chief, Gen. Yaakov Amidror (a religious man himself), told me on the eve of Tunisia’s elections last October, “We expect Islamist parties to soon dominate all governments in the region, from Afghanistan to Morocco, except for Israel.”

On a global scale, Protestant evangelical churches (together with Pentecostalists) continue to proliferate, especially in Latin America, but also keep pace with the expansion of fundamentalist Islam in southern Africa and eastern and southern Asia. In Russia, a clear majority of the population remains religious despite decades of forcibly imposed atheism. Even in China, where the government’s commission on atheism has the Sisyphean job of making that country religion-free, religious agitation is on the rise. And in the United States, a majority says it wants less religion in politics, but an equal majority still will not vote for an atheist as president.

But if reams of social scientific analysis have been produced on religion’s less celestial cousins — from the nature of perception and speech to how we rationalize and shop — faith is not a matter that rigorous science has taken seriously. To be sure, social scientists have long studied how religious practices correlate with a wide range of economic, social, and political issues. Yet, for nearly a century after Harvard University psychologist William James’s 1902 masterwork, The Varieties of Religious Experience, there was little serious investigation of the psychological structure or neurological and biological underpinnings of religious belief that determine how religion actually causes behavior. And that’s a problem if science aims to produce knowledge that improves the human condition, including a lessening of cultural conflict and war.

Religion molds a nation in which it thrives, sometimes producing solidarity and sacred causes so powerful that citizens are willing to kill or die for a common good (as when Judea’s Jews around the time of Christ persisted in rebellion unto political annihilation in the face of the Roman Empire’s overwhelming military might). But religion can also hinder a society’s ability to work out differences with others, especially if those others don’t understand what religion is all about. That’s the mess we find ourselves in today, not only among different groups of Americans in the so-called culture wars, but between secular and Judeo-Christian America and many Muslim countries.

Time and again, countries go to war without understanding the transcendent drives and dreams of adversaries who see a very different world. Yet we needn’t fly blindly into the storm.

Science can help us understand religion and the sacred just as it can help us understand the genome or the structure of the universe. This, in turn, can make policy better informed.

Fortunately, the last few years show progress in scientific studies of religion and the sacred, though headwinds remain strong. Across history and cultures, religion has often knit communities together under the rule of sentient, but immaterial deities — that is, spiritual beings whose description is logically contradictory and empirically unfalsifiable. Cross-cultural studies pioneered by anthropologist Pascal Boyer show that these miraculous features — talking bushes, horses that leap into the sky — make lasting impressions on people and thereby increase the likelihood that they will be passed down to the next generation. Implausibility also facilitates cultural transmission in a more subtle manner — fostering adaptability of religious beliefs by opening the door to multiple interpretations (as with metaphors or weekly sermons).

And the greater the investment in outlandishness, the better. This is because adherence to apparently absurd beliefs means incurring costs — surviving without electricity, for example, if you are Amish — which help identify members who are committed to the survival of a group and cannot be lured away. The ease of identifying true believers, in turn, builds trust and galvanizes group solidarity for common defense.

To test this hypothesis, anthropologist Richard Sosis and his colleagues studied 200 communes founded in the United States in the 19th century. If shared religious beliefs really did foster loyalty, they reasoned, then communes formed out of religious conviction should survive longer than those motivated by secular ideologies such as socialism. Their findings were striking: Just 6 percent of the secular communes were still functioning 20 years after their founding, compared with 39 percent of the religious communes.

It is not difficult to see why groups formed for purely rational reasons can be more vulnerable to collapse: Background conditions change, and it might make sense to abandon one group in favor of another. Interestingly, recent research echoes the findings of 14th-century historian Ibn Khaldun, who argued that long-term differences among North African Muslim dynasties with comparable military might “have their origin in religion … [and] group feeling [wherein] mutual cooperation and support flourish.” The more religious societies, he argued, endured the longest.

For this reason, even ostensibly secular countries and transnational movements usually contain important quasi-religious rituals and beliefs. Think of sacred songs and ceremonies, or postulations that “providence” or “nature” bestows equality and inalienable rights (though, for about 99.9 percent of our species’ existence, slavery and oppression of minorities were more standard fare). These sacred values act as moral imperatives that inspire nonrational sacrifices in cooperative endeavors such as war.

Insurgents, revolutionaries, and terrorists all make use of this logic, generating outsized commitment that allows them to resist and often prevail against materially stronger foes. Consider the American revolutionaries who defied the greatest empire of their age by pledging “our Lives, our Fortunes and our sacred Honor” for the cause of “liberty or death.” Surely they were aware of how unlikely they were to succeed, given the vast disparities in material resources, manpower, and training. As Osama Hamdan, the ranking Hamas politburo member for external affairs, put it to me in Damascus, Syria, “George Washington was fighting the strongest military in the world, beyond all reason. That’s what we’re doing. Exactly.”

But the same logic that makes religious and sacred beliefs more likely to endure can make them impervious to compromise. Based on interviews, experiments, and surveys with Palestinians, Israelis, Indonesians, Indians, Afghans, and Iranians, my research with psychologists Jeremy Ginges, Douglas Medin, and others demonstrates that offering people material incentives (large amounts of money, guarantees for a life free of political violence) to compromise sacred values can backfire, increasing stated willingness to use violence. Such backfire effects occur both for convictions with clear religious investment (Jerusalem, sharia law) and for those that are at least initially nonreligious (Iran’s right to a nuclear capability, Palestinian refugees’ right of return).

According to a 2010 study, for example, most Iranians think there is nothing sacred about their government’s nuclear program. But for a sizable minority — 13 percent of the population — the quest for a nuclear capability (more focused on energy than weapons) had, through religious rhetoric, become a sacred subject. This group, which tends to be close to the regime, now believes a nuclear program is bound up with national identity and with Islam itself. As a result, offering material rewards or punishments to abandon the program only increases anger and support for it.

Although this sacralization of initially secular issues confounds standard “business-like” negotiation tactics, my work with political scientist Robert Axelrod interviewing political leaders in the Middle East and elsewhere indicates that strong symbolic gestures (sincere apologies, demonstrating respect for the other’s values) generate surprising flexibility, even among militants, and may enable subsequent material negotiations. Thus, we find that Palestinian leaders and their supporting populations are generally willing to accept Israeli offers of economic improvement only after issues of recognition are addressed. Even purely symbolic statements accompanied by no material action, such as “we recognize your suffering” or “we respect your rights in Jerusalem,” diminish support for violence, including suicide terrorism. This is particularly promising because symbolic gestures tied to religious notions that are open to interpretation might potentially be reframed without compromising their absolute “truth.” For example, Jerusalem might be reconceived less as a place than as a portal to heaven, where earthly access to the portal suffices.

If these things are worth knowing, why do scientists still shun religion?

Part of the reason is that most scientists are staunchly nonreligious. If you look at the prestigious U.S. National Academy of Sciences or Britain’s Royal Society, well over 90 percent of members are non-religious. That may help explain why some of the bestselling books by scientists about religion aren’t about the science of religion as much as the reasons that it’s no longer necessary to believe. “New Atheists” have aggressively sought to discredit religion as the chief cause of much human misery, militating for its demise. They contend that science has now answered questions about humans’ origins and place in the world that only religion sought to answer in the days before evolutionary science, and that humankind no longer needs the broken crutch of faith.

But the idea that we can simply argue away religion has little factual support. Although a recent study by psychologists Will Gervais and Ara Norenzayan indicates that people are less prone to think religiously when they think analytically, other studies suggest that seemingly contrary evidence rarely undermines religious belief, especially among groups welded by ritualized sacrifice in the face of outside threats. Norenzayan and others also find that belief in gods and miracles intensifies when people are primed with awareness of death or when facing danger, as in wartime.

Moreover, the chief complaint against religion — that it is history’s prime instigator of intergroup conflict — does not withstand scrutiny. Religious issues motivate only a small minority of recorded wars. The Encyclopedia of Wars surveyed 1,763 violent conflicts across history; only 123 (7 percent) were religious. A BBC-sponsored “God and War” audit, which evaluated major conflicts over 3,500 years and rated them on a 0-to-5 scale for religious motivation (Punic Wars = 0, Crusades = 5), found that more than 60 percent had no religious motivation. Less than 7 percent earned a rating greater than 3. There was little religious motivation for the internecine Russian and Chinese conflicts or the world wars responsible for history’s most lethal century of international bloodshed.

Indeed, inclusive concepts such as “humanity” arguably emerged with the rise of universal religions. Sociologist Rodney Stark reveals that early Christianity became the Roman Empire’s majority religion not through conquest, but through a social process grounded in trust. Repeated acts of altruism, such as caring for non-Christians during epidemics, facilitated the expansion of social networks that were invested in the religion. Likewise, studies by behavioral economist Joseph Henrich and colleagues on contemporary foragers, farmers, and herders show that professing a world religion is correlated with greater fairness toward passing strangers. This research helps explain what’s going on in sub-Saharan Africa, where Islam is spreading rapidly. In Rwanda, for example, people began converting to Islam in droves after Muslims systematically risked their lives to protect Christians and animists from genocide when few others cared.

Although surprisingly few wars are started by religions, once they start, religion — and the values it imposes — can play a critical role. When competing interests are framed in terms of religious and sacred values, conflict may persist for decades, even centuries. Disputes over otherwise mundane phenomena then become existential struggles, as when land becomes “Holy Land.” Secular issues become sacralized and nonnegotiable, regardless of material rewards or punishments. In a multiyear study, our research group found that Palestinian adolescents who perceived strong threats to their communities and were highly involved in religious ritual were most likely to see political issues, like the right of refugees to return to homes in Israel, as absolute moral imperatives. These individuals were thus opposed to compromise, regardless of the costs. It turns out there may be a neurological component to such behavior: Our work with Gregory Berns and his neuroeconomics team suggests that such values are processed in the brain as duties rather than utilitarian calculations; neuroimaging reveals that violations of sacred values trigger emotional responses consistent with sentiments of moral outrage.

Historical and experimental studies suggest that the more antagonistic a group’s neighborhood, the more tightly that group will cling to its sacred values and rituals. The result is enhanced solidarity, but also increased potential for conflict toward other groups. Investigation of 60 small-scale societies reveals that groups that experience the highest rates of conflict (warfare) endure the costliest rites (genital mutilation, scarification, etc.). Likewise, research in India, Mexico, Britain, Russia, and Indonesia indicates that greater participation in religious ritual in large-scale societies is associated with greater parochial altruism — that is, willingness to sacrifice for one’s own group, such as Muslims or Christians, but not for outsiders — and, in relevant contexts, support for suicide attacks. This dynamic is behind the paradoxical reality that the world finds itself in today: Modern global multiculturalism is increasingly challenged by fundamentalist movements aimed at reviving group loyalty through greater ritual commitments to ideological purity.

So why does it matter that we have moved past the -isms and into an era of greater religiosity? In an age where religious and sacred causes are resurgent, there is urgent need for scientific effort to understand them. Now that humankind has acquired through science the power to destroy itself with nuclear weapons, we cannot afford to let science ignore religion and the sacred, or let scientists simply try to reason them away. Policymakers should leverage scientific understanding of what makes religion so potent a force for both cooperation and conflict, to help increase the one and lessen the other.

Scott Atran, American and French anthropologist at France’s National Center for Scientific Research, the University of Michigan, John Jay College, and ARTIS Research, who has studied violence and interviewed terrorists, God and the Ivory Tower, Foreign Policy, Aug 6, 2012.

See also:

Scott Atran on Why War Is Never Really Rational
‘We’ vs ‘Others’: Russell Jacoby on why we should fear our neighbors more than strangers
The Psychology of Violence (a modern rethink of the psychology of shame and honour in preventing it), Lapidarium notes
Religion tag on Lapidarium notes

May 17th, Thu

E. O. Wilson on human evolution, altruism and a ‘new Enlightenment’


“History makes no sense without prehistory, and prehistory makes no sense without biology.”

— E. O. Wilson, Seminars About Long-term Thinking, The Long Now Foundation, Apr 20, 2012.

"Scientific advances are now good enough for us to address coherently questions of where we came from and what we are. But to do so, we need to answer two more fundamental questions. The first is why advanced social life exists in the first place and has occurred so rarely. The second is what are the driving forces that brought it into existence.

A conflict between individual and group-selected traits

"Only the understanding of evolution offers a chance to get a real understanding of the human species. We are determined by the interplay between individual and group selection where individual selection is responsible for much of what we call sin, while group selection is responsible for the greater part of virtue. We’re all in constant conflict between self-sacrifice for the group on the one hand and egoism and selfishness on the other. I go so far as to say that all the subjects of humanities, from law to the creative arts are based upon this play of individual versus group selection. (…) And it is very creative and probably the source of our striving, our inventiveness and imagination. It’s that eternal conflict that makes us unique.

Q: So how do we negotiate this conflict?

E O. W: We don’t. We have to live with it.

Q: Which element of this human condition is stronger?

E O. W: Let’s put it this way: If we would be mainly influenced by group selection, we would be living in kind of an ant society. (…)

Q: What determines which ideology is predominant in a society?

E O. W: If your territory is invaded, then cooperation within the group will be extreme. That’s a human instinct. If you are in a frontier area, however, then we tend to move towards the extreme individual level. That seems to be a good part of the problem still with America. We still think we’re on the frontier, so we constantly try to put forward individual initiative and individual rights and rewards based upon individual achievement. (…)”

Edward O. Wilson, American biologist, researcher (sociobiology, biodiversity), theorist (consilience, biophilia), naturalist (conservationist) and author, Interview with Edward O. Wilson: The Origin of Morals, Der Spiegel, 2013

Eusociality, where some individuals reduce their own reproductive potential to raise others’ offspring, is what underpins the most advanced form of social organization and the dominance of social insects and humans. One of the key ideas to explain this has been kin selection theory or inclusive fitness, which argues that individuals cooperate according to how they are related. I have had doubts about it for quite a while. Standard natural selection is simpler and superior. Humans originated by multilevel selection—individual selection interacting with group selection, or tribe competing against tribe. We need to understand a great deal more about that. (…)

We should consider ourselves as a product of these two interacting and often competing levels of evolutionary selection. Individual versus group selection results in a mix of altruism and selfishness, of virtue and sin, among the members of a society. If we look at it that way, then we have what appears to be a pretty straightforward answer as to why conflicted emotions are at the very foundation of human existence. I think that also explains why we never seem to be able to work things out satisfactorily, particularly internationally.

Q: So it comes down to a conflict between individual and group-selected traits?

Yes. And you can see this especially in the difficulty of harmonizing different religions. We ought to recognize that religious strife is not the consequence of differences among people. It’s about conflicts between creation stories. We have bizarre creation myths and each is characterized by assuring believers that theirs is the correct story, and that therefore they are superior in every sense to people who belong to other religions. This feeds into our tribalistic tendencies to form groups, occupy territories and react fiercely to any intrusion or threat to ourselves, our tribe and our special creation story. Such intense instincts could arise in evolution only by group selection—tribe competing against tribe. For me, the peculiar qualities of faith are a logical outcome of this level of biological organization.

Q: Can we do anything to counter our tribalistic instincts?

I think we are ready to create a more human-centered belief system. I realize I sound like an advocate for science and technology, and maybe I am because we are now in a techno-scientific age. I see no way out of the problems that organized religion and tribalism create other than humans just becoming more honest and fully aware of themselves. Right now we’re living in what Carl Sagan correctly termed a demon-haunted world. We have created a Star Wars civilization but we have Paleolithic emotions, medieval institutions and godlike technology. That’s dangerous. (…)

I’m devoted to the kind of environmentalism that is particularly geared towards the conservation of the living world, the rest of life on Earth, the place we came from. We need to put a lot more attention into that as something that could unify people. Surely one moral precept we can agree on is to stop destroying our birthplace, the only home humanity will ever have.

Q: Do you believe science will help us in time?

We can’t predict what science is going to come up with, particularly on genuine frontiers like astrophysics. So much can change even within a single decade. A lot more is going to happen when the social sciences finally join the biological sciences: who knows what will come out of that in terms of describing and predicting human behavior? But there are certain things that are almost common sense that we should not do.

Q: What sort of things shouldn’t we do?

Continue to put people into space with the idea that this is the destiny of humanity. It makes little sense to continue exploration by sending live astronauts to the moon, and much less to Mars and beyond. It will be far cheaper, and entail no risk to human life, to explore space with robots. It’s a commonly stated idea that we can have other planets to live on once we have used this one up. That is nonsense. We can find what we need right here on this planet for almost infinite lengths of time, if we take good care of it.

A New Enlightenment

Q: What is it important to do now?

The title of my final chapter is “A New Enlightenment”. I think we ought to have another go at the Enlightenment and use that as a common goal to explain and understand ourselves, to take that self-understanding which we so sorely lack as a foundation for what we do in the moral and political realm. This is a wonderful exercise. It is about education, science, evaluating the creative arts, learning to control the fires of organized religion and making a better go of it.

Q: Could you be more concrete about this new Enlightenment?

I would like to see us improving education worldwide and putting a lot more emphasis—as some Asian and European countries have—on science and technology as part of basic education.”

E. O. Wilson, E. O. Wilson: from altruism to a new Enlightenment, New Scientist, 24 April 2012.

"I think science is now up to the job. We need to be harnessing our scientific knowledge now to get a better, science-based self-understanding.

Q: It seems that, in this process, you would like to throw religions overboard altogether?

E O. W: No. That’s a misunderstanding. I don’t want to see the Catholic Church with all of its magnificent art and rituals and music disappear. I just want to have them give up their creation stories, including especially the resurrection of Christ.

Q: That might well be a futile endeavour …

E O. W: There was this American physiologist who was asked if Mary’s bodily ascent from Earth to Heaven was possible. He said, “I wasn’t there; therefore, I’m not positive that it happened or didn’t happen; but of one thing I’m certain: She passed out at 10,000 meters.” That’s where science comes in. Seriously, I think we’re better off with no creation stories.

Q: With this new Enlightenment, will we reach a higher state of humanity?

E O. W: Do we really want to improve ourselves? Humans are a very young species, in geologic terms, and that’s probably why we’re such a mess. We’re still living with all this aggression and ability to go to war. But do we really want to change ourselves? We’re right on the edge of an era of being able to actually alter the human genome. But do we want that? Do we want to create a race that’s more rational and free of many of these emotions? My response is no, because the only thing that distinguishes us from super-intelligent robots are our imperfect, sloppy, maybe even dangerous emotions. They are what makes us human.”

Edward O. Wilson, American biologist, researcher (sociobiology, biodiversity), theorist (consilience, biophilia), naturalist (conservationist) and author, Interview with Edward O. Wilson: The Origin of Morals, originally in P. Bethge, J. Grolle, Wir sind ein Schlamassel, Der Spiegel, 8/2013.

A “Social Conquest of the Earth”

"Q: What are some striking examples for you of the legacy of this evolutionary process?

Almost everything. All the way from passion at football games to war to the constant need to suppress selfish behavior that ranges over into criminal behavior to the necessary extolling of altruism by groups, to group approval and reward of people who are heroes or altruists.

Constant turmoil occurs in modern human societies and what I’m suggesting is that turmoil is endemic in the way human advanced social behavior originated in the first place. It’s by group selection that occurred favoring altruism versus individual level selection, which by and large, not exclusively, favors individual and selfish behavior.

We’re hung in the balance. We’ll never reach either one extreme or the other. One extreme would take us to the level of ants and bees and the other would mean that you have dissolution of society.

Q: One point you make in your book is that this highly social kind of behavior that we’ve evolved has allowed us to be part of the social conquest of earth, but it’s also had an unfortunate effect of endangering a lot of the world’s biodiversity. Does that make you pessimistic? If this is just part of the way we’ve evolved, is there going to be any way out of it?

That’s a very big question. In other words, did the pathway that led us to advanced social behavior and conquest make it inevitable that we will destroy most of what we’ve conquered? That is the question of questions.

I’m optimistic. I think that we can pass from conquerors to stewards. We have the intellectual and moral capacity to do it, but I’ve also felt very strongly that we needed a much better understanding of who we are and where we came from. We need answers to those questions in order to get our bearings toward a successful long-term future, that means a future for ourselves, our species and for the rest of life.

I realize that sounds a little bit like it’s coming from a pulpit but basically that’s what I’ve had in my mind. In writing The Social Conquest of Earth, I very much had in mind that need for self-understanding, and I thought we were very far short, and we remain very far short, of self-understanding. We have a kind of resistance toward honest self-understanding as a species and I think that resistance is due in part to our genetic history. And now, can we overcome it? I think so.”

E. O. Wilson, American biologist, researcher in sociobiology and biodiversity, theorist, naturalist and author, interviewed by Carl Zimmer, What Does E.O. Wilson Mean By a “Social Conquest of the Earth”?, Smithsonian.com, March 22, 2012

See also:

Edward O. Wilson “The Social Conquest of Earth”, Fora.tv video, 20 Apr 2012
☞ Richard Dawkins, The descent of Edward Wilson. “A new book on evolution by a great biologist makes a slew of mistakes”, Prospect, May 24, 2012
The Original Colonists, The New York Times, May 11, 2012:
“Mythmaking could never discover the origin and meaning of humanity” — and contemporary philosophy is also irrelevant, having “long ago abandoned the foundational questions about human existence.” The proper approach to answering these deep questions is the application of the methods of science, including archaeology, neuroscience and evolutionary biology. Also, we should study insects.”
Sam Harris on the ‘selfish gene’ and moral behavior
Anthropocene: “the recent age of man”. Mapping Human Influence on Planet Earth, Lapidarium notes
Human Nature. Sapolsky, Maté, Wilkinson, Gilligan, discuss on human behavior and the nature vs. nurture debate
On the Origins of the Arts, Sociobiologist E.O. Wilson on the evolution of culture, Harvard Magazine, May-June, 2013
Anthropology tag on Lapidarium notes
Apr 25th, Wed

Waking Life animated film focuses on the nature of dreams, consciousness, and existentialism



Waking Life is an American animated film (rotoscoped from live-action footage), directed by Richard Linklater and released in 2001. The entire film was shot using digital video, and then a team of artists using computers drew stylized lines and colors over each frame.

The film focuses on the nature of dreams, consciousness, and existentialism. The title is a reference to philosopher George Santayana's maxim: “Sanity is a madness put to good uses; waking life is a dream controlled.”

Waking Life is about an unnamed young man in a persistent dream-like state that eventually progresses to lucidity. He initially observes and later participates in philosophical discussions of issues such as reality, free will, the relationship of the subject with others, and the meaning of life. Along the way the film touches on other topics including existentialism, situationist politics, posthumanity, the film theory of André Bazin, and lucid dreaming itself. By the end, the protagonist feels trapped by his perpetual dream, broken up only by unending false awakenings. His final conversation with a dream character reveals that reality may be only a single instant which the individual consciousness interprets falsely as time (and, thus, life) until a level of understanding is achieved that may allow the individual to break free from the illusion.

Ethan Hawke and Julie Delpy reprise their characters from Before Sunrise in one scene. (Wiki)

Eamonn Healy speaks about telescopic evolution and the future of humanity

We won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). (…) The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially).

So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today’s rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.

Ray Kurzweil, American author, scientist, inventor and futurist, The Law of Accelerating Returns, KurzweilAI, March 7, 2001.
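To make the arithmetic behind those figures concrete, here is a rough back-of-the-envelope version. It assumes only what the passage states, that the rate of progress doubles every decade (with today's rate normalized to 1), and it ignores the further growth in the acceleration rate that Kurzweil also folds in, so it recovers the order of magnitude rather than his exact numbers:

```latex
% Progress over the coming century, measured in "years of progress at today's rate",
% when the rate is r(t) = 2^{t/10} (t in years, r(0) = 1):
\int_{0}^{100} 2^{t/10}\,dt \;=\; \frac{10}{\ln 2}\bigl(2^{10}-1\bigr) \;\approx\; 1.5\times10^{4}\ \text{years}

% Run backwards, the century just ended (rate halving each decade into the past):
\int_{0}^{100} 2^{-t/10}\,dt \;=\; \frac{10}{\ln 2}\bigl(1-2^{-10}\bigr) \;\approx\; 14\ \text{years}
```

Both values land in the same ballpark as the roughly 20,000 years and 25 years quoted above.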

"If we’re looking at the highlights of human development, you have to look at the evolution of the organism and then at the development of its interaction with the environment. Evolution of the organism will begin with the evolution of life perceived through the hominid coming to the evolution of mankind. Neanderthal and Cro-Magnon man. Now, interestingly, what you’re looking at here are three strings: biological, anthropological — development of the cities — and cultural, which is human expression.

Now, what you’ve seen here is the evolution of populations, not so much the evolution of individuals. And in addition, if you look at the time scales that are involved here — two billion years for life, six million years for the hominid, 100,000 years for mankind as we know it — you’re beginning to see the telescoping nature of the evolutionary paradigm. And then when you get to agricultural, when you get to scientific revolution and industrial revolution, you’re looking at 10,000 years, 400 years, 150 years. You’re seeing a further telescoping of this evolutionary time. What that means is that as we go through the new evolution, it’s gonna telescope to the point we should be able to see it manifest itself within our lifetime, within this generation.

The new evolution stems from information, and it stems from two types of information: digital and analog. The digital is artificial intelligence. The analog results from molecular biology, the cloning of the organism. And you knit the two together with neurobiology. Before on the old evolutionary paradigm, one would die and the other would grow and dominate. But under the new paradigm, they would exist as a mutually supportive, noncompetitive grouping. Okay, independent from the external.

And what is interesting here is that evolution now becomes an individually centered process, emanating from the needs and desires of the individual, and not an external process, a passive process where the individual is just at the whim of the collective. So, you produce a neo-human, okay, with a new individuality and a new consciousness. But that’s only the beginning of the evolutionary cycle because as the next cycle proceeds, the input is now this new intelligence. As intelligence piles on intelligence, as ability piles on ability, the speed changes. Until what? Until we reach a crescendo that could be imagined as an enormous instantaneous fulfillment of human and neo-human potential. It could be something totally different. It could be the amplification of the individual, the multiplication of individual existences. Parallel existences now with the individual no longer restricted by time and space.

And the manifestations of this neo-human-type evolution, manifestations could be dramatically counter-intuitive. That’s the interesting part. The old evolution is cold. It’s sterile. It’s efficient, okay? And its manifestations are those social adaptations. We’re talking about parasitism, dominance, morality, okay? Uh, war, predation, these would be subject to de-emphasis. These will be subject to de-evolution. The new evolutionary paradigm will give us the human traits of truth, of loyalty, of justice, of freedom. These will be the manifestations of the new evolution. And that is what we would hope to see from this. That would be nice.”

Eamonn Healy, professor of chemistry at St. Edward’s University in Austin, Texas, where his research focuses on the design of structure-activity probes to elucidate enzymatic activity. He appears in Richard Linklater's 2001 film Waking Life discussing concepts similar to a technological singularity and explaining “telescopic evolution.” Eamonn Healy speaks about telescopic evolution and the future of humanity, from Brandon Sergent; transcript.

See also:

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

Mar 18th, Sun

Are We “Meant” to Have Language and Music? How Language and Music Mimicked Nature and Transformed Ape to Man


"We’re fish out of water, living in radically unnatural environments and behaving ridiculously for a great ape. So, if one were interested in figuring out which things are fundamentally part of what it is to be human, then those million crazy things we do these days would not be on the list. (…)

At the top of the list of things we do that we’re supposed to be doing, and that are at the core of what it is to be human rather than some other sort of animal, are language and music. Language is the pinnacle of usefulness, and was key to our domination of the Earth (and the Moon). And music is arguably the pinnacle of the arts. Language and music are fantastically complex, and we’re brilliantly capable at absorbing them, and from a young age. That’s how we know we’re meant to be doing them, i.e., how we know we evolved brains for engaging in language and music.

But what if this gets language and music all wrong? What if we’re not, in fact, meant to have language and music? What if our endless yapping and music-filled hours each day are deeply unnatural behaviors for our species? (…)

I believe that language and music are, indeed, not part of our core—that we never evolved by natural selection to engage in them. The reason we have such a head for language and music is not that we evolved for them, but, rather, that language and music evolved—culturally evolved over millennia—for us. Our brains aren’t shaped for these pinnacles of humankind. Rather, these pinnacles of humankind are shaped to be good for our brains.

But how on Earth can one argue for such a view? If language and music have shaped themselves to be good for non-linguistic and amusical brains, then what would their shapes have to be?

They’d have to possess the auditory structure of…nature. That is, we have auditory systems which have evolved to be brilliantly capable at processing the sounds from nature, and language and music would need to mimic those sorts of sounds in order to harness—to “nature-harness,” as I call it—our brain.

And language and music do nature-harness. (…) The two most important classes of auditory stimuli for humans are (i) events among objects (most commonly solid objects), and (ii) events among humans (i.e., human behavior). And, in my research I have shown that the signature sounds in these two auditory domains drive the sounds we humans use in (i) speech and (ii) music, respectively.

For example, the principal source of modulation of pitch in the natural world comes from the Doppler shift, where objects moving toward you have a high pitch and objects moving away have a low pitch; from these pitch modulations a listener can hear an object’s direction of movement relative to his or her position. In the book I provide a battery of converging evidence that melody in music has culturally evolved to sound like the (often exaggerations of) Doppler shifts of a person moving in one’s midst. Consider first that a mover’s pitch will modulate within a fixed range, the top and bottom pitches occurring when the mover is headed, respectively, toward and away from you. Do melodies confine themselves to fixed ranges? They tend to, and tessitura is the musical term to refer to this range. In the book I run through a variety of specific predictions.

Here’s one. If melody is “trying” to sound like the Doppler shifts of a mover—and thereby convey to the auditory system the trajectory of a fictional mover—then a faster mover will have a greater difference between its top and bottom pitch. Does faster music tend to have a wider tessitura? That is, does music with a faster tempo—more beats, or footsteps, per second—tend to have a wider tessitura? Notice that the performer of faster tempo music would ideally like the tessitura to narrow, not widen! But what we found is that, indeed, music having a greater tempo tends to have a wider tessitura, just what one would expect if the meaning of melody is the direction of a mover in your midst.
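For readers who want the physics behind that prediction spelled out, the standard Doppler relation links a mover's speed to the width of its pitch range (this is a textbook formula used here purely as an illustration; Changizi's own analysis in the book is more detailed):

```latex
% Frequency heard from a source of pitch f_0 moving at speed v, where \theta is the
% angle between the mover's heading and the line toward the listener (c = speed of sound):
f_{\text{heard}} = \frac{f_0}{1 - (v/c)\cos\theta}

% Pitch range swept between "coming straight at you" (\theta = 0) and
% "heading straight away" (\theta = \pi):
\Delta f = f_0\!\left(\frac{1}{1 - v/c} - \frac{1}{1 + v/c}\right) \approx 2 f_0\,\frac{v}{c} \quad (v \ll c)
```

A faster mover therefore sweeps a wider pitch range, which is why, if melody mimics a mover's Doppler shifts, faster-tempo music should tend toward a wider tessitura.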

The preliminary conclusion of the research is that human speech sounds like solid-object events, and music sounds like human behavior!

That’s just what we expect if we were never meant to do language and music. Language and music have the fingerprints of being unnatural (i.e., of not having their origins via natural selection)…and the giveaway is, ironically, that their shapes are natural (i.e., have the structure of natural auditory events).

We also find this for another core capability that we know we’re not “meant” to do: reading. Writing was invented much too recently for us to have specialized reading mechanisms in the brain (although there are new hints of early writing as old as 30,000 years), and yet reading has the hallmarks of instinct. As I have argued in my research and in my second book, The Vision Revolution, writing slides so well into our brain because it got shaped by cultural evolution to look “like nature,” and, specifically, to have the signature contour-combinations found in natural scenes (which consist mostly of opaque objects strewn about).

My research suggests that language and music aren’t any more part of our biological identity than reading is. Counterintuitively, then, we aren’t “supposed” to be speaking and listening to music. They aren’t part of our “core” after all.

Or, at least, they aren’t part of the core of Homo sapiens as the species originally appeared. But, it seems reasonable to insist that, whether or not language and music are part of our natural biological history, they are indeed at the core of what we take to be centrally human now. Being human today is quite a different thing than being the original Homo sapiens.

So, what is it to be human? Unlike Homo sapiens, we’re grown in a radically different petri dish. Our habitat is filled with cultural artifacts—the two heavyweights being language and music—designed to harness our brains’ ancient capabilities and transform them into new ones.

Humans are more than Homo sapiens. Humans are Homo sapiens who have been nature-harnessed into an altogether novel creature, one designed in part via natural selection, but also in part via cultural evolution.

Mark Changizi, an evolutionary neurobiologist, Are We “Meant” to Have Language and Music?, Discover Magazine, March 15th, 2012. (Illustration: Harnessed)

See also:

Mark Changizi, Music Sounds Like Moving People, Science 2.0, Jan 10, 2010.
☞ Mark Changizi, How To Put Art And Brain Together
☞ Mark Changizi, How we read
Mark Changizi on brain’s perception of the world
A brief history of writing, Lapidarium notes
Mark Changizi on Humans, Version 3.0.

Jan 21st, Sat

'Human beings are learning machines,' says philosopher (nature vs. nurture)


"The point is that in scientific writing (…) suggest a very inflexible view of human nature, that we are determined by our biology. From my perspective the most interesting thing about the human species is our plasticity, our flexibility. (…)

It is striking in general that human beings mistake the cultural for the natural; you see it in many domains. Take moral values. We assume we have moral instincts: we just know that certain things are right and certain things are wrong. When we encounter people whose values differ from ours we think they must be corrupted or in some sense morally deformed. But this is clearly an instance where we mistake our deeply inculcated preferences for natural law. (…)

Q: At what point with morality does biology stop and culture begin?

One important innate contribution to morality is emotions. An aggressive response to an attack is not learned, it is biological. The question is how emotions that are designed to protect each of us as individuals get extended into generalised rules that spread within a group. One factor may be imitation. Human beings are great imitative learners. Rules that spread in a family can be calibrated across a whole village, leading to conformity in the group and a genuine system of morality.

Nativists will say that morality can emerge without instruction. But with innate domains, there isn’t much need for instruction, whereas in the moral domain, instruction is extensive. Kids learn through incessant correction. Between the ages of 2 and 10, parents correct their children’s behaviour every 8 minutes or so of waking life. In due course, our little monsters become little angels, more or less. This gives us reason to think morality is learned.

Q: One of the strongest arguments for innateness comes from linguists such as Noam Chomsky, who argue that humans are born with the basic rules of grammar already in place. But you disagree with them?

Chomsky singularly deserves credit for giving rise to the new cognitive sciences of the mind. He was instrumental in helping us think about the mind as a kind of machine. He has made some very compelling arguments to explain why everybody with an intact brain speaks grammatically even though children are not explicitly taught the rules of grammar.

But over the past 10 years we have started to see powerful evidence that children might learn language statistically, by unconsciously tabulating patterns in the sentences they hear and using these to generalise to new cases. Children might learn language effortlessly not because they possess innate grammatical rules, but because statistical learning is something we all do incessantly and automatically. The brain is designed to pick up on patterns of all kinds.
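As a concrete illustration of what "unconsciously tabulating patterns in the sentences they hear" could mean, here is a minimal sketch in Python. It is a toy model written for this note, not a model proposed in the interview: the learner counts word-to-word transitions in heard sentences and then scores new word sequences by how familiar their transitions are.

```python
from collections import defaultdict

def learn_bigrams(sentences):
    """Tabulate how often each word is followed by each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def familiarity(sentence, counts):
    """Fraction of the sentence's word-to-word transitions heard before."""
    words = sentence.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    seen = sum(1 for prev, nxt in pairs if counts[prev][nxt] > 0)
    return seen / len(pairs)

heard = ["the dog chased the cat", "the cat saw the dog", "a dog saw a cat"]
counts = learn_bigrams(heard)
print(familiarity("the dog saw the cat", counts))   # 1.0: every transition heard before
print(familiarity("cat the saw dog the", counts))   # 0.0: word order never encountered
```

Real statistical-learning accounts use far richer statistics than bigrams, but the sketch shows how grammatical-sounding word orders can come to be preferred without any explicit rule being taught.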

Q: How hard has it been to put this alternative view on the table, given how Chomskyan thought has dominated the debate in recent years?

Chomsky’s views about language are so deeply ingrained among academics that those who take statistical learning seriously are subject to a kind of ridicule. There is very little tolerance for dissent. This has been somewhat limiting, but there is a new generation of linguists who are taking the alternative very seriously, and it will probably become a very dominant position in the next generation.

Q: You describe yourself as an “unabashed empiricist” who favours nurture over nature. How did you come to this position, given that on many issues the evidence is still not definitive either way?

Actually I think the debate has been settled. You only have to stroll down the street to see that human beings are learning machines. Sure, for any given capacity the debate over biology versus culture will take time to resolve. But if you compare us with other species, our degree of variation is just so extraordinary and so obvious that we know prior to doing any science that human beings are special in this regard, and that a tremendous amount of what we do is as a result of learning. So empiricism should be the default position. The rest is just working out the details of how all this learning takes place.

Q: What are the implications of an empirical understanding of human nature for the way we go about our lives? How should it affect the way we behave?

In general, we need to cultivate a respect for difference. We need to appreciate that people with different values to us are not simply evil or ignorant, and that just like us they are products of socialisation. This should lead to an increase in international understanding and respect. We also need to understand that group differences in performance are not necessarily biologically fixed. For example, when we see women performing less well than men in mathematics, we should not assume that this is because of a difference in biology.

Q: How much has cognitive science contributed to our understanding of what it is to be human, traditionally a philosophical question?

Cognitive science is in the business of settling long-running philosophical debates on human nature, innate knowledge and other issues. The fact that these theories have been churning about for a couple of millennia without any consensus is evidence that philosophical methods are better at posing questions than answering them. Philosophy tells us what is possible, and science tells us what is true.

Cognitive science has transformed philosophy. At the beginning of the 20th century, philosophers changed their methodology quite dramatically by adopting logic. There has been an equally important revolution in 21st-century philosophy in that philosophers are turning to the empirical sciences and to some extent conducting experimental work themselves to settle old questions. As a philosopher, I hardly go a week without conducting an experiment.

My whole working day has changed because of the infusion of science.”

Jesse Prinz is a distinguished professor of philosophy at the City University of New York, specialising in the philosophy of psychology. He is a pioneer in experimental philosophy, using findings from the cognitive sciences, anthropology and other fields to develop empiricist theories of how the mind works. He is the author of The Emotional Construction of Morals (Oxford University Press, 2007), Gut Reactions (OUP, 2004), Furnishing the Mind (MIT Press, 2002) and Beyond Human Nature: How culture and experience make us who we are. 'Human beings are learning machines,' says philosopher, New Scientist, Jan 20, 2012. (Illustration: Fritz Kahn, British Library)

See also:

Jesse Prinz: Morality is a Culturally Conditioned Response
Human Nature. Sapolsky, Maté, Wilkinson, Gilligan, discuss on human behavior and the nature vs. nurture debate

Dec 17th, Sat

Infinite Stupidity. Social evolution may have sculpted us not to be innovators and creators as much as to be copiers


A review of some big events

"Obviously one of the big events in our history was the origin of our planet, about 4.5 billion years ago. And what’s fascinating is that about 3.8 billion years ago, only about seven or eight hundred million years after the origin of our planet, life arose. That life was simple replicators, things that could make copies of themselves. And we think that life was a little bit like the bacteria we see on earth today. It would be the ancestors of the bacteria we see on earth today.

That life ruled the world for 2 billion years, and then about 1.5 billion years ago, a new kind of life emerged. These were the eukaryotic cells. They were a little bit different kind of cell from bacteria. And actually the kind of cells we are made of. And again, these organisms that were eukaryotes were single-celled, so even 1.5 billion years ago, we still just had single-celled organisms on earth. But it was a new kind of life.

It was another 500 million years before we had anything like a multicellular organism, and it was another 500 million years after that before we had anything really very interesting. So, about 500 million years ago, the plants and the animals started to evolve. And I think everybody would agree that this was a major event in the history of the world, because, for the first time, we had complex organisms.

After about 500 million years ago, things like the plants evolved, the fish evolved, lizards and snakes, dinosaurs, birds, and eventually mammals. And then it was really just six or seven million years ago, within the mammals, that the lineage that we now call the hominins arose. And they would be direct ancestors of us. And then, within that lineage that arose about six or seven million years ago, it was only about 200,000 years ago that humans finally evolved.

Idea of idea evolution

And so, this is really just 99.99 percent of the way through the history of this planet, humans finally arose. But in that 0.01 percent of life on earth, we’ve utterly changed the planet. And the reason is that, with the arrival of humans 200,000 years ago, a new kind of evolution was created. The old genetical evolution that had ruled for 3.8 billion years now had a competitor, and that new kind of evolution was ideas.

It was a true form of evolution, because now ideas could arise, and they could jump from mind to mind, without genes having to change. So, populations of humans could adapt at the level of ideas. Ideas could accumulate. We call this cumulative cultural adaptation. And so, cultural complexity could emerge and arise orders and orders of magnitude faster than genetic evolution.

Now, I think most of us take that utterly for granted, but it has completely rewritten the way life evolves on this planet because, with the arrival of our species, everything changed. Now, a single species, using its idea evolution, that could proceed apace independently of genes, was able to adapt to nearly every environment on earth, and spread around the world where no other species had done that. All other species are limited to places on earth that their genes adapt them to. But we were able to adapt at the level of our cultures to every place on earth. (…)

If we go back in our lineage 2 million years or so, there was a species known as Homo erectus. Homo erectus is an upright ape that lived on the African savannah. It could make tools, but they were very limited tools, and those tools, the archaeological record tells us, didn’t change for about 1.5 million years. That is, until about the time they went extinct. That is, they made the same tools over and over and over again, without any real changes to them.

If we move forward in time a little bit, it’s not even clear that our very close cousins that we know are related to us 99.5 or 99.6 percent in the sequences of their genes, the Neanderthals, it’s not even clear that they had what we call idea evolution. Sure enough, their tools that they made were more complex than our tools. But the 300,000 or so years that they spent in Europe, their toolkit barely changed. So there’s very little evolution going on.

So there’s something really very special about this new species, humans, that arose and invented this new kind of evolution, based on ideas. And so it’s useful for us to ask, what is it about humans that distinguishes them? It must have been a tiny genetic difference between us and the Neanderthals because, as I said, we’re so closely related to them genetically, a tiny genetic difference that had a vast cultural potential.

That difference is something that anthropologists and archaeologists call social learning. It’s a very difficult concept to define, but when we talk about it, all of us humans know what it means. And it seems to be the case that only humans have the capacity to learn complex new or novel behaviors, simply by watching and imitating others. And there seems to be a second component to it, which is that we seem to be able to get inside the minds of other people who are doing things in front of us, and understand why it is they’re doing those things. These two things together, we call social learning.

Many people respond that, oh, of course the other animals can do social learning, because we know that the chimpanzees can imitate each other, and we see all sorts of learning in animals like dolphins and the other monkeys, and so on. But the key point about social learning is that this minor difference between us and the other species forms an unbridgeable gap between us and them. Because, whereas all of the other animals can pick up the odd behavior by having their attention called to something, only humans seem to be able to select, among a range of alternatives, the best one, and then to build on that alternative, and to adapt it, and to improve upon it. And so, our cultures cumulatively adapt, whereas all other animals seem to do the same thing over and over and over again.

Even though other animals can learn, and they can even learn in social situations, only humans seem to be able to put these things together and do real social learning. And that has led to this idea evolution. What’s a tiny difference between us genetically has opened up an unbridgeable gap, because only humans have been able to achieve this cumulative cultural adaptation. (…)

I’m interested in this because I think this capacity for social learning, which we associate with our intelligence, has actually sculpted us in ways that we would have never anticipated. And I want to talk about two of those ways that I think it has sculpted us. One of the ways has to do with our creativity, and the other has to do with the nature of our intelligence as social animals.

One of the first things to be aware of when talking about social learning is that it plays the same role within our societies, acting on ideas, as natural selection plays within populations of genes. Natural selection is a way of sorting among a range of genetic alternatives, and finding the best one. Social learning is a way of sifting among a range of alternative options or ideas, and choosing the best one of those. And so, we see a direct comparison between social learning driving idea evolution, by selecting the best ideas — we copy people that we think are successful, we copy good ideas, and we try to improve upon them — and natural selection, driving genetic evolution within societies, or within populations.

I think this analogy needs to be taken very seriously, because just as natural selection has acted on genetic populations, and sculpted them, we’ll see how social learning has acted on human populations and sculpted them.

What do I mean by “sculpted them”? Well, I mean that it’s changed the way we are. And here’s one reason why. If we think that humans have evolved as social learners, we might be surprised to find out that being social learners has made us less intelligent than we might like to think we are. And here’s the reason why.

If I’m living in a population of people, and I can observe those people, and see what they’re doing, seeing what innovations they’re coming up with, I can choose among the best of those ideas, without having to go through the process of innovation myself. So, for example, if I’m trying to make a better spear, I really have no idea how to make that better spear. But if I notice that somebody else in my society has made a very good spear, I can simply copy him without having to understand why.

What this means is that social learning may have set up a situation in humans where, over the last 200,000 years or so, we have been selected to be very, very good at copying other people, rather than innovating on our own. We like to think we’re a highly inventive, innovative species. But social learning means that most of us can make use of what other people do, and not have to invest the time and energy in innovation ourselves.

Now, why wouldn’t we want to do that? Why wouldn’t we want to innovate on our own? Well, innovation is difficult. It takes time. It takes energy. Most of the things we try to do, we get wrong. And so, if we can survey, if we can sift among a range of alternatives of people in our population, and choose the best one that’s going at any particular moment, we don’t have to pay the costs of innovation, the time and energy ourselves. And so, we may have had strong selection in our past to be followers, to be copiers, rather than innovators.

This gives us a whole new slant on what it means to be human, and I think, in many ways, it might fit with some things that we realize are true about ourselves when we really look inside ourselves. We can all think of things that have made a difference in the history of life. The first hand axe, the first spear, the first bow and arrow, and so on. And we can ask ourselves, how many of us have had an idea that would have changed humanity? And I think most of us would say, well, that sets the bar rather high. I haven’t had an idea that would change humanity. So let’s lower the bar a little bit and say, how many of us have had an idea that maybe just influenced others around us, something that others would want to copy? And I think even then, very few of us can say there have been very many things we’ve invented that others would want to copy.

This says to us that social evolution may have sculpted us not to be innovators and creators as much as to be copiers, because this extremely efficient process that social learning allows us to do, of sifting among a range of alternatives, means that most of us can get by drawing on the inventions of others.

The formation of social groups

Now, why do I talk about this? It sounds like it could be a somewhat dry subject, that maybe most of us are copiers or followers rather than innovators. And what we want to do is imagine that our history over the last 200,000 years has been a history of slowly and slowly and slowly living in larger and larger and larger groups.

Early on in our history, it’s thought that most of us lived in bands of maybe five to 25 people, and that bands formed bands of bands that we might call tribes. And maybe tribes were 150 people or so. And then tribes gave way to chiefdoms that might have been thousands of people. And chiefdoms eventually gave way to nation-states that might have been tens of thousands or even hundreds of thousands, or millions, of people. And so, our evolutionary history has been one of living in larger and larger and larger social groups.

What I want to suggest is that that evolutionary history will have selected for less and less and less innovation in individuals, because a little bit of innovation goes a long way. If we imagine that there’s some small probability that someone is a creator or an innovator, and the rest of us are followers, we can see that one or two people in a band is enough for the rest of us to copy, and so we can get on fine. And, because social learning is so efficient and so rapid, we don’t need all to be innovators. We can copy the best innovations, and all of us benefit from those.

But now let’s move to a slightly larger social group. Do we need more innovators in a larger social group? Well, no. The answer is, we probably don’t. We probably don’t need as many as we need in a band. Because in a small band, we need a few innovators to get by. We have to have enough new ideas coming along. But in a larger group, a small number of people will do. We don’t have to scale it up. We don’t have to have 50 innovators where we had five in the band, if we move up to a tribe. We can still get by with those three or four or five innovators, because all of us in that larger social group can take advantage of their innovations.

Language is the way we exchange ideas

And here we can see a very prominent role for language. Language is the way we exchange ideas. And our eyes allow us to see innovations and language allows us to exchange ideas. And language can operate in a larger society, just as efficiently as it can operate in a small society. It can jump across that society in an instant.

You can see where I’m going. As our societies get larger and larger, there’s no need, in fact, there’s even less of a need for any one of us to be an innovator, whereas there is a great advantage for most of us to be copiers, or followers. And so, a real worry is that our capacity for social learning, which is responsible for all of our cumulative cultural adaptation, all of the things we see around us in our everyday lives, has actually promoted a species that isn’t so good at innovation. It allows us to reflect on ourselves a little bit and say, maybe we’re not as creative and as imaginative and as innovative as we thought we were, but extraordinarily good at copying and following.

If we apply this to our everyday lives and we ask ourselves, do we know the answers to the most important questions in our lives? Should you buy a particular house? What mortgage product should you have? Should you buy a particular car? Who should you marry? What sort of job should you take? What kind of activities should you do? What kind of holidays should you take? We don’t know the answers to most of those things. And if we really were the deeply intelligent and imaginative and innovative species that we thought we were, we might know the answers to those things.

And if we ask ourselves how it is we come across the answers, or acquire the answers to many of those questions, most of us realize that we do what everybody else is doing. This herd instinct, I think, might be an extremely fundamental part of our psychology that was perhaps an unexpected and unintended, you might say, byproduct of our capacity for social learning, that we’re very, very good at being followers rather than leaders. A small number of leaders or innovators or creative people is enough for our societies to get by.

Now, the reason this might be interesting is that, as the world becomes more and more connected, as the Internet connects us and wires us all up, we can see that the long-term consequences of this is that humanity is moving in a direction where we need fewer and fewer and fewer innovative people, because now an innovation that you have somewhere on one corner of the earth can instantly travel to another corner of the earth, in a way that it would have never been possible to do 10 years ago, 50 years ago, 500 years ago, and so on. And so, we might see that there has been this tendency for our psychology and our humanity to be less and less innovative, at a time when, in fact, we may need to be more and more innovative, if we’re going to be able to survive the vast numbers of people on this earth.

That’s one consequence of social learning, that it has sculpted us to be very shrewd and intelligent at copying, but perhaps less shrewd at innovation and creativity than we’d like to think. Few of us are as creative as we’d like to think we are. I think that’s been one perhaps unexpected consequence of social learning.

Another side of social learning I've been thinking about - it's a bit abstract, but I think it's a fascinating one - goes back again to this analogy between natural selection, acting on genetic variation, and social learning, acting on variation in ideas. And any evolutionary process like that has to have both a sorting mechanism, natural selection, and what you might call a generative mechanism, a mechanism that can create variety.

We all know what that mechanism is in genes. We call it mutation, and we know that from parents to offspring, genes can change, genes can mutate. And that creates the variety that natural selection acts on. And one of the most remarkable stories of nature is that natural selection, acting on this mindlessly-generated genetic variation, is able to find the best solution among many, and successively add those solutions, one on top of the other. And through this extraordinarily simple and mindless process, create things of unimaginable complexity. Things like our cells, eyes and brains and hearts, and livers, and so on. Things of unimaginable complexity, that we don’t even understand and none of us could design. But they were designed by natural selection.
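To make the mechanism concrete, here is a minimal sketch in Python (mine, not Pagel's) of exactly this kind of process: blind, random variation plus a sorting step that keeps whichever variant happens to score best. The target string, the mutation rate and the population size are purely illustrative stand-ins for "a better spear" or "a working eye".

    import random

    TARGET = "a better spear"                      # illustrative stand-in for any design problem
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def fitness(candidate):
        # How many positions already match the target design.
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate, rate=0.05):
        # Undirected, random change: each character may flip to a random letter.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    best = "".join(random.choice(ALPHABET) for _ in TARGET)    # start from pure noise
    generations = 0
    while fitness(best) < len(TARGET):
        generations += 1
        variants = [mutate(best) for _ in range(100)]          # the mindless generative step
        best = max(variants + [best], key=fitness)             # the sorting step keeps the best

    print(f"Reached '{best}' after {generations} rounds of blind variation and selection.")

Nothing in the loop knows where it is going; all of the apparent design comes from the sorting step, which is the role the analogy assigns to natural selection for genes and to social learning for ideas.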

Where do ideas come from?

Now let’s take this analogy of a mindless process - the parallel between social learning driving evolution at the idea level and natural selection driving evolution at the genetic level - and ask what it means for the generative mechanism in our brains.

Well, where do ideas come from? For social learning to be a sorting process that has varieties to act on, we have to have a variety of ideas. And where do those new ideas come from?

The idea that I’ve been thinking about, and that I think is worth contemplating about our own minds, is this: what is the generative mechanism? If we do have any creativity at all and we are innovative in some ways, what’s the nature of that generative mechanism for creating new ideas?

This is a question that’s been asked for decades. What is the nature of the creative process? Where do ideas come from? And let’s go back to genetic evolution and remember that, there, the generative mechanism is random mutation.

Now, what do we think the generative mechanism is for idea evolution? Do we think it’s random mutation of some sort, of ideas? Well, all of us think that it’s better than that. All of us think that somehow we can come up with good ideas in our minds. And whereas natural selection has to act on random variation, social learning must be acting on directed variation. We know what direction we’re going.

But, we can go back to our earlier discussion of social learning, and ask the question, well, if you were designing a new hand axe, or a new spear, or a new bow and a new arrow, would you really know how to make a spear fly better? Would you really know how to make a bow a better bow? Would you really know how to shape an arrowhead so that it penetrated its prey better? And I think most of us realize that we probably don’t know the answers to those questions. And that suggests to us that maybe our own creative process rests on a generative mechanism that isn’t very much better than random itself.

And I want to go further, and suggest that our mechanism for generating ideas maybe couldn’t even be much better than random itself. And this really gives us a different view of ourselves as intelligent organisms. Rather than thinking that we know the answers to everything, could it be the case that the mechanism that our brain uses for coming up with new ideas is a little bit like the mechanism that our genes use for coming up with new genetic variants, which is to randomly mutate ideas that we have, or to randomly mutate genes that we have?

Now, it sounds incredible. It sounds insane. It sounds mad. Because we think of ourselves as so intelligent. But when we really ask ourselves about the nature of any evolutionary process, we have to ask ourselves whether it could be any better than random, because in fact, random might be the best strategy.

Genes could never possibly know how to mutate themselves, because they could never anticipate the direction the world was going. No gene knows that we’re having global warming at the moment. No gene knew 200,000 years ago that humans were going to evolve culture. Well, the best strategy for any exploratory mechanism, when we don’t know the nature of the processes we’re exploring, is to throw out random attempts at understanding that field or that space we’re trying to explore.

And I want to suggest that the creative process inside our brains, which relies on social learning, that creative process itself never could have possibly anticipated where we were going as human beings. It couldn’t have anticipated 200,000 years ago that, you know, a mere 200,000 years later, we’d have space shuttles and iPods and microwave ovens.

What I want to suggest is that any process of evolution that relies on exploring an unknown space, such as genes, or such as our neurons exploring the unknown space in our brains and trying to create connections, or such as our brains trying to come up with new ideas that explore the space of alternatives that will lead us to what we call creativity in our social world, might be very close to random.

We know they’re random in the genetic case. We think they’re random in the case of neurons exploring connections in our brain. And I want to suggest that our own creative process might be pretty close to random itself. And that our brains might be whirring around at a subconscious level, creating ideas over and over and over again, and part of our subconscious mind is testing those ideas. And the ones that leak into our consciousness might feel like they’re well-formed, but they might have sorted through literally a random array of ideas before they got to our consciousness.

Karl Popper famously said the way we differ from other animals is that our hypotheses die in our stead; rather than going out and actually having to try out things, and maybe dying as a result, we can test out ideas in our minds. But what I want to suggest is that the generative process itself might be pretty close to random.

Putting these two things together has lots of implications for where we’re going as societies. As I say, as our societies get bigger, and rely more and more on the Internet, fewer and fewer of us have to be very good at these creative and imaginative processes. And so, humanity might be moving towards becoming more docile, more oriented towards following, copying others, prone to fads, prone to going down blind alleys, because part of our evolutionary history that we could have never anticipated was leading us towards making use of the small number of other innovations that people come up with, rather than having to produce them ourselves.

The interesting thing with Facebook is that, with 500 to 800 million of us connected around the world, it sort of devalues information and devalues knowledge. And this isn’t the comment of some reactionary who doesn’t like Facebook, but it’s rather the comment of someone who realizes that knowledge and new ideas are extraordinarily hard to come by. And as we’re more and more connected to each other, there’s more and more to copy. We realize the value in copying, and so that’s what we do.

And we seek out that information in cheaper and cheaper ways. We go up on Google, we go up on Facebook, see who’s doing what to whom. We go up on Google and find out the answers to things. And what that’s telling us is that knowledge and new ideas are cheap. And it’s playing into a set of predispositions that we have been selected to have anyway, to be copiers and to be followers. But at no time in history has it been easier to do that than now. And Facebook is encouraging that.

And then, as corporations grow … and we can see corporations as sort of microcosms of societies … as corporations grow and acquire the ability to acquire other corporations, a similar thing is happening: rather than wanting to spend the time and the energy to create new ideas, they want to simply acquire other companies, so that they can have their new ideas. And that just tells us again how precious these ideas are, and the lengths to which people will go to acquire those ideas.

A tiny number of ideas can go a long way, as we’ve seen. And the Internet makes that more and more likely. What’s happening is that we might, in fact, be at a time in our history where we’re being domesticated by these great big societal things, such as Facebook and the Internet. We’re being domesticated by them, because fewer and fewer and fewer of us have to be innovators to get by. And so, in the cold calculus of evolution by natural selection, copiers are probably doing better now, relative to innovators, than at any time in history. Because innovation is extraordinarily hard. My worry is that we could be moving in that direction, towards becoming more and more sort of docile copiers.

But, these ideas, I think, are received with incredulity, because humans like to think of themselves as highly shrewd and intelligent and innovative people. But I think what we have to realize is that it’s even possible that, as I say, the generative mechanisms we have for coming up with new ideas are no better than random.

And a really fascinating idea itself is to consider that even the great people in history whom we associate with great ideas might be no more than we expect by chance. I’ll explain that. Einstein was once asked about his intelligence and he said, “I’m no more intelligent than the next guy. I’m just more curious.” Now, we can grant Einstein that little indulgence, because we think he was a pretty clever guy.

What does curiosity mean?

But let’s take him at his word and say, what does curiosity mean? Well, maybe curiosity means trying out all sorts of ideas in your mind. Maybe curiosity is a passion for trying out ideas. Maybe Einstein’s ideas were just as random as everybody else’s, but he kept persisting at them.

And if we say that everybody has some tiny probability of being the next Einstein, and we look at a billion people, there will be somebody who just by chance is the next Einstein. And so, we might even wonder if the people in our history and in our lives that we say are the great innovators really are more innovative, or are just lucky.
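The "next Einstein by chance" point is really just a statement about large numbers, and a back-of-the-envelope calculation shows why. The per-person probability below is a made-up, illustrative figure, not anything claimed in the talk:

    # If each person independently has some tiny chance p of being "the next Einstein",
    # how likely is it that at least one shows up among N people?  (p is hypothetical.)
    p = 1e-8                            # assumed one-in-a-hundred-million chance per person
    N = 1_000_000_000                   # a billion people
    expected = p * N                    # expected number of such people
    at_least_one = 1 - (1 - p) ** N     # probability that at least one exists
    print(f"Expected count: {expected:.0f}")        # 10
    print(f"P(at least one): {at_least_one:.5f}")   # about 0.99995

Even with a vanishingly small per-person probability, a large enough population makes the appearance of someone who looks like a singular genius close to a statistical certainty, which is all the argument needs.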

Now, the evolutionary argument is that our populations have always supported a small number of truly innovative people, and they’re somehow different from the rest of us. But it might even be the case that that small number of innovators just got lucky. And this is something that I think very few people will accept. They’ll receive it with incredulity. But I like to think of it as what I call social learning and, maybe, the possibility that we are infinitely stupid.”

Mark Pagel, Professor of Evolutionary Biology, Reading University, England and The Santa Fe Institute, Infinite Stupidity, Edge, Dec 16, 2011 (Illustration by John S. Dykes)

See also:

☞ Mark Pagel: How language transformed humanity



Biologist Mark Pagel shares an intriguing theory about why humans evolved our complex system of language. He suggests that language is a piece of “social technology” that allowed early human tribes to access a powerful new tool: cooperation. Mark Pagel: How language transformed humanity, TED.com, July 2011

The Kaleidoscopic Discovery Engine. ‘All scientific discoveries are in principle ‘multiples’’
Neal Gabler on The Elusive Big Idea - ‘We are living in a post ideas world where bold ideas are almost passé’

Nov
25th
Fri
permalink

Sue Savage-Rumbaugh on Human Language—Human Consciousness. A personal narrative arises through the vehicle of language


                                        Jamie Marie Waelchli, Thought Map No. 8

Human language, coupled with human maternal care, enables the consciousness to bifurcate very early and extensively. Without the self-reflective properties inherent in a reflexive agent- recipient language, and without the objectification of the human infant — a very different kind of humanity would arise.

Human consciousness, as constructed by human language, becomes the vehicle through which the self-reflective human mind envisions time. Language enables the viewer to reflect upon the actions of the doer (and the actions of one’s internal body), while projecting forward and backward — other possible bodily actions — into imagined space/time. Thus the projected and imagined space/time increasingly becomes the conscious world and reality of the viewer who imagines or remembers actions mapped onto that projected plan. The body thus becomes a physical entity progressing through the imaged world of the viewer. As the body progresses through this imaged world, the viewer also constructs a way to mark progress from one imagined event to another. Having once marked this imagined time into units, the conscious viewer begins to order the anticipated actions of the body into a linear progression of events.

A personal narrative then arises through the vehicle of language. Indeed a personal narrative is required, expected and placed upon every human being, by the very nature of human language. This personal narrative becomes organized around the anticipated bodily changes that it is imagined will take place from birth to old age. The power of the bifurcated mind, through linguistically encoded expectancies, shapes and molds all of human behavior. When these capacities are jointly executed by other similar minds — the substrate of human culture is manufactured.

Human culture, because it rides upon a manufactured space/time self-reflective substrate, is unique. Though it shares some properties with animal culture, it is not merely a natural Darwinian extension of animal culture. It is based on constructed time/space, constructed mental relationships, constructed moral responsibilities, and constructed personal narratives — and individuals, must, at all times, justify their actions toward another on the basis of their co-constructed expectancies.

Human Consciousness seems to burst upon the evolutionary scene in something of an explosion between 40,000 and 90,000 years ago. Trading emerges, art emerges, and symboling ability emerges with a kind of intensity not noted for any previous time in the archeological record. (…)

Humans came with a propensity to alter the world around them wherever they went. We were into object manipulation in all aspects of our existence, and wherever we went we altered the landscape. We did not accept the natural world as we found it — we set about refashioning our worlds according to our own needs and desires. From the simple act of intentionally setting fires to eliminate underbrush, to the exploration of outer space, humanity manifested the view that it was here to control its own destiny, by changing the world around it, as well as by individuals’ changing their own appearances.

We put on masks and masqueraded about the world, seeking to make the world conform to our own desires, in a way no other species emulated. In brief, the kind of language that emerged between 40,000 and 90,000 years ago, riding upon the human anatomical form, changed us forever, and we began to pass that change along to future generations.

While Kanzi and family are bonobos, the kind of language they have acquired — even if they have not manifested all major components yet — is human language as you and I speak it and know it. Therefore, although their biology remains that of apes, their consciousness has begun to change as a function of the language, the marks it leaves on their minds and the epigenetic marks it leaves on the next generation. (Epigenetic: chemical markers which become attached to segments of genes during the lifetime of an individual are passed along to future generations, affecting which genes will be expressed in succeeding generations.) They explore art, they explore music, they explore creative linguistic negotiation, they have an autobiographical past and they think about the future. They don’t do all these things with human-like proficiency at this point, but they attempt them if given opportunity. Apes not so reared do not attempt to do these things.

What kind of power exists within the kind of language we humans have perfected? Does it have the power to change biology across time, if it impacts the biological form upon conception? Science has now become aware of the power of initial conditions, through chaos theory, the work of Mandelbrot with fractal geometric forms, and the work of Wolfram and the patterns that can be produced by digital reiterations of simple and only slightly different starting conditions. Within the fertilized egg lie the initial starting conditions of every human.
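The sensitivity to initial conditions being gestured at here can be illustrated with any simple iterated rule. The logistic map below is a stand-in chosen for brevity (it is not one of the examples named in the essay): two starting values that differ by one part in a million, run through the same simple rule, quickly stop resembling each other.

    # Logistic map in its chaotic regime: x -> r * x * (1 - x).
    # Two nearly identical initial conditions diverge completely within a few dozen steps.
    r = 3.9
    x_a, x_b = 0.500000, 0.500001
    for step in range(50):
        x_a = r * x_a * (1 - x_a)
        x_b = r * x_b * (1 - x_b)
    print(f"After 50 iterations: {x_a:.6f} vs {x_b:.6f}")

The rule never changes; only the starting conditions do, which is the sense in which the "initial starting conditions" set in the fertilized egg can matter so much downstream.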

We also now realize that epigenetic markers from parental experience can set these initial starting conditions, determining such things as the order, timing, and patterning of gene expression profiles in the developing organism. Thus while the precise experience and learning of the parents is not passed along, the effects of those experiences, in the form of epigenetic markers that have the power to affect the developmental plan of the next generation during the extraordinarily sensitive conditions of embryonic development, are transmitted. Since language is the most powerful experience encountered by the human being and since those individuals who fail to acquire human language are inevitably excluded from (or somehow set apart in) the human community, it is reasonable to surmise that language will, in some form, transmit itself through epigenetic mechanisms.

When a human being enters into a group of apes and begins to participate in the rearing of offspring, different epigenetic markers have the potential to become activated. We already know, for example, that in human beings, expectancies or beliefs can affect gene activity. The most potent of the epigenetic markers would most probably arise from the major difference between human and ape infants. Human infants do not cling, ape infants do. When ape infants are carried like human infants, they begin to develop eye/hand coordination from birth. This sets the developmental trajectory of the ape infant in a decidedly human direction — that of manipulating the world around it. Human mothers, unlike ape mothers, also communicate their intentions linguistically to the infant. Once an intention is communicated linguistically, it can be negotiated, so there arises an intrinsic motivation to tune into and understand such communications on the part of the ape infant. The ‘debate’ in ape language, which has centered around whether they have it or they don’t, has missed the point. This debate has ignored the key rearing variables that differ dramatically across the studies. Apart from Kanzi and family, all other apes in these studies are left alone at night and drilled on associative pairings during the day.”

Sue Savage-Rumbaugh is a primatologist best known for her work with two bonobos, Kanzi and Panbanisha, investigating their use of “Great Ape language” using lexigrams and computer-based keyboards. Until recently she was based at Georgia State University’s Language Research Center in Atlanta.

To read full essay click Human Language—Human Consciousness, National Humanities Center, Jan 2nd, 2011

See also:

John Shotter on encounters with ‘Other’ - from inner mental representation to dialogical social practices
Do thoughts have a language of their own? The language of thought hypothesis, Lapidarium notes

Nov
11th
Fri
permalink

The Genographic Project ☞ A Landmark Study of the Human Journey 


                                       (Click image to explore Atlas of Human Journey)

Human Migration, Population Genetics, Maps, DNA.

"Where do you really come from? And how did you get to where you live today? DNA studies suggest that all humans today descend from a group of African ancestors who—about 60,000 years ago—began a remarkable journey.

The Genographic Project is seeking to chart new knowledge about the migratory history of the human species by using sophisticated laboratory and computer analysis of DNA contributed by hundreds of thousands of people from around the world. In this unprecedented, real-time research effort, the Genographic Project is closing the gaps in what science knows today about humankind’s ancient migration stories.

The Genographic Project is a multi-year research initiative led by National Geographic Explorer-in-Residence Dr. Spencer Wells. Dr. Wells and a team of renowned international scientists and IBM researchers are using cutting-edge genetic and computational technologies to analyze historical patterns in DNA from participants around the world to better understand our human genetic roots.”


                                       (Click image to explore Globe of Human History)

The Genographic Project - Human Migration, Population Genetics, Maps, DNA, National Geographic

The Genographic Project - Introduction

     

See also:

Evolution of Language tested with genetic analysis

Sep
20th
Tue
permalink

Human Nature. Sapolsky, Maté, Wilkinson, Gilligan discuss human behavior and the nature vs. nurture debate

In this part of Peter Joseph’s documentary Zeitgeist: Moving Forward, "The discussion turns to human behavior and the nature vs. nurture debate. This portion begins with a small clip of Robert Sapolsky summing up the nature vs. nurture debate, which he essentially refers to as a "false dichotomy." After which he states that "it is virtually impossible to understand how biology works outside the context of environment."

The film then goes on to describe that it is neither nature nor nurture alone that shapes human behavior, but that both influence it. The interviewed pundits state that even with genetic predispositions to diseases, the expression and manifestation of disease is largely determined by environmental stressors. Disease, criminal activity and addictions are also placed in the same light. One study discussed showed that newly born babies are more likely to die if they are not touched. Another study mentioned claimed to show that stressed women were more likely to have children with addiction disorders. A reference is made to the unborn children who were in utero during the Dutch famine of 1944. The “Dutch Famine Birth Cohort Study” is mentioned to have shown that obesity and other health complications became common problems later in life due to the prolonged starvation of the mothers during pregnancy.

Comparisons are made by sociologists of criminals in different parts of the world, and of how different cultures with different values can often have more peaceful inhabitants. An Anabaptist sect called the Hutterites is mentioned as never having reported a homicide in any of their societies. The overall conclusion is that social environment and cultural conditioning play a large part in shaping human behavior.”

Zeitgeist Moving Forward I Human Nature

Dr. Gabor Maté: “Nothing is genetically programmed. There are very rare diseases, a small handful, extremely sparsely represented in the population, that are truly genetically determined. Most complex conditions might have a predisposition that has a genetic component. But a predisposition is not the same as a predetermination. The whole search for the source of diseases in the genome was doomed to failure before anybody even thought of it, because most diseases are not genetically predetermined. Heart disease, cancer, strokes, rheumatoid conditions, autoimmune conditions in general, mental health conditions, addictions, none of them are genetically determined. (…)

That’s an epigenetic effect. “Epi” means on top of, so that the epigenetic influence is what happens environmentally to either activate or deactivate certain genes. (…)

So, the genetic argument is simply a cop-out which allows us to ignore the social and economic and political factors that, in fact, underlie many troublesome behaviors. (…)

If we wish to understand what then makes some people susceptible, we actually have to look at the life experience. The old idea, old but still broadly held, that addictions are due to some genetic cause is simply scientifically untenable. What is actually the case is that certain life experiences make people susceptible. Life experiences that not only shape the person’s personality and psychological needs but also their very brains in certain ways. And that process begins in utero.

It has been shown, for example, that if you stress mothers during pregnancy their children are more likely to have traits that predispose them to addictions, and that’s because development is shaped by the psychological and social environment. So the biology of human beings is very much affected by and programmed by the life experiences beginning in utero.”

Dr. Robert Sapolsky: "Environment does not begin at birth. Environment begins as soon as you have an environment. As soon as you are a fetus, you are subject to whatever information is coming through mom’s circulation: hormones, levels of nutrients. (…) Be a Dutch Hunger Winter fetus and half a century later, everything else being equal, you are more likely to have high blood pressure, obesity or metabolic syndrome. That is environment coming in a very unexpected place. (…)”

GM: “The point about human development and specifically human brain development is that it occurs mostly under the impact of the environment and mostly after birth. (…)

The concept of Neural Darwinism simply means that the circuits that get the appropriate input from the environment will develop optimally and the ones that don’t will either not develop optimally or perhaps not at all. (…)

There is a significant way in which early experiences shape adult behavior, even and especially early experiences for which there is no recall memory. It turns out that there are two kinds of memory: there is explicit memory, which is recall; this is when you can call back facts, details, episodes, circumstances. But the structure in the brain called the hippocampus, which encodes recall memory, doesn’t even begin to develop fully until a year and a half, and it is not fully developed until much later, which is why hardly anybody has any recall memory prior to 18 months.

But there is another kind of memory which is called implicit memory which is, in fact, an emotional memory where the emotional impact and the interpretation the child makes of those emotional experiences are ingrained in the brain in the form of nerve circuits ready to fire without specific recall.  So to give you a clear example, people who are adopted have a lifelong sense of rejection very often. They can’t recall the adoption. They can’t recall the separation of the birth mother because there’s nothing there to recall with. But the emotional memory of separation and rejection is deeply embedded in their brains. Hence, they are much more likely to experience a sense of rejection and a great emotional upset when they perceive themselves as being rejected by other people. That’s not unique to people who are adopted but it is particularly strong in them because of this function of implicit memory. (…)

The great British child psychiatrist, D.W. Winnicott, said that fundamentally, two things can go wrong in childhood. One is when things happen that shouldn’t happen; the other is when things that should happen don’t. (…)

The Buddha argued that everything depends on everything else. He says, ‘The one contains the many and the many contains the one.’ You can’t understand anything in isolation from its environment: the leaf contains the sun, the sky and the earth, obviously. This has now been shown to be true all around, and specifically when it comes to human development. The modern scientific term for it is the ‘bio-psycho-social’ nature of human development, which says that the biology of human beings depends very much on their interaction with the social and psychological environment.

And specifically, the psychiatrist and researcher Daniel Siegel at the University of California, Los Angeles (UCLA) has coined the phrase “Interpersonal Neurobiology”, which means to say that the way our nervous system functions depends very much on our personal relationships: in the first place with the parenting caregivers, in the second place with other important attachment figures in our lives, and in the third place with our entire culture. So you can’t separate the neurological functioning of a human being from the environment in which he or she grew up and continues to exist in. And this is true throughout the life cycle. It’s particularly true when you are dependent and helpless, when your brain is developing, but it’s true even in adults and even at the end of life. (…)”

Dr. James Gilligan: "Violence is not universal. It is not symmetrically distributed throughout the human race. There is a huge variation in the amount of violence in different societies. There are some societies that have virtually no violence. There are others that destroy themselves. Some of the Anabaptist religious groups are complete, strict pacifists, like the Amish, the Mennonites, the Hutterites, and among some of these groups, the Hutterites, there are no recorded cases of homicide.

During our major wars, like World War II, where people were being drafted, they would refuse to serve in the military. They would go to prison rather than serve in the military. In the Kibbutzim in Israel, the level of violence is so low that the criminal courts there will often send violent offenders - people who have committed crimes - to live on the Kibbutzim in order to learn how to live a non-violent life. Because that’s the way people live there.

RS: So, we are amply shaped by society. Our societies, in the broader sense, including our theological, our metaphysical, our linguistic influences, etc, our societies help shape us as to whether or not we think life is basically about sin or about beauty; whether the afterlife will carry a price for how we live our lives or if it’s irrelevant. (…)

So, this brings us to an almost impossible juncture, which is to try to make sense, from the perspective of science, of what the nature of human nature is. You know, on a certain level the nature of our nature is not to be particularly constrained by our nature. We come up with more social variability than any species out there. More systems of belief, of styles, of family structures, of ways of raising children. The capacity for variety that we have is extraordinary. (…)

GM: In a society which is predicated on competition and really, very often, the ruthless exploitation of one human being by another, the profiteering off of other people’s problems and very often the creation of problems for the purpose of profiteering, the ruling ideology will very often justify that behavior by appeals to some fundamental and unalterable human nature. So the myth in our society is that people are competitive by nature and that they are individualistic and that they’re selfish. The real reality is quite the opposite. We have certain human needs. The only way that you can talk about human nature concretely is by recognizing that there are certain human needs. We have a human need for companionship and for close contact, to be loved, to be attached to, to be accepted, to be seen, to be received for who we are. If those needs are met, we develop into people who are compassionate and cooperative and who have empathy for other people.

So the opposite, which we often see in our society, is in fact a distortion of human nature, precisely because so few people have their needs met. So, yes, you can talk about human nature, but only in the sense of basic human needs that are instinctively evoked, or, I should say, certain human needs that lead to certain traits if they are met and a different set of traits if they are denied.”

— Zeitgeist: Moving Forward - full transcript

Robert Sapolsky - American scientist and author. He is currently professor of Biological Sciences, and Professor of Neurology and Neurological Sciences and, by courtesy, Neurosurgery, at Stanford University.

Gabor Maté - Hungarian-born Canadian physician who specializes in the study and treatment of addiction and is also widely recognized for his unique perspective on Attention Deficit Disorder.

Richard Wilkinson - British researcher in social inequalities in health and the social determinants of health. He is Professor Emeritus of social epidemiology at the University of Nottingham.

James Gilligan - American psychiatrist and author, best known for his series of books entitled Violence, where he draws on 25 years of work in the American prison system to describe the motivation and causes behind violent behaviour. He now lectures at the Department of Psychiatry, New York University.

See also:

Zeitgeist: Moving Forward by Peter Joseph, 2011 (full documentary) (transcript)

Sep
19th
Mon
permalink

Steven Pinker on the History and decline of Violence

 

                                          Raphael, The Judgment of Solomon (1518)

"Drawing on the work of the archaeologist Lawrence Keeley, Steven Pinker recently concluded that the chance of our ancient hunter-gatherer ancestors meeting a bloody end was somewhere between 15% and 60%. In the 20th century, which included two world wars and the mass killers Stalin and Hitler, the likelihood of a European or American dying a violent death was less than 1%.

Pinker shows that, with notable exceptions, the long-term trend for murder and violence has been going down since humans first developed agriculture 10,000 years ago. And it has dropped steeply since the Middle Ages. It may come as a surprise to fans of Inspector Morse but Oxford in the 1300s, Pinker tells us, was 110 times more murderous than it is today. With a nod to the German sociologist Norbert Elias, Pinker calls this movement away from killing the “civilising process”.”

Andrew Anthony, journalist, author, Steven Pinker: the optimistic voice of science, The Observer, 18 Sept 2011

"In sixteenth-century Paris, a popular form of entertainment was cat-burning, in which a cat was hoisted in a sling on a stage and slowly lowered into a fire. According to historian Norman Davies, "The spectators, including kings and queens, shrieked with laughter as the animals, howling with pain, were singed, roasted, and finally carbonized." Today, such sadism would be unthinkable in most of the world. This change in sensibilities is just one example of perhaps the most important and most underappreciated trend in the human saga: Violence has been in decline over long stretches of history, and today we are probably living in the most peaceful moment of our species’ time on earth.

In the decade of Darfur and Iraq, and shortly after the century of Stalin, Hitler, and Mao, the claim that violence has been diminishing may seem somewhere between hallucinatory and obscene. Yet recent studies that seek to quantify the historical ebb and flow of violence point to exactly that conclusion.

Some of the evidence has been under our nose all along. Conventional history has long shown that, in many ways, we have been getting kinder and gentler. Cruelty as entertainment, human sacrifice to indulge superstition, slavery as a labor-saving device, conquest as the mission statement of government, genocide as a means of acquiring real estate, torture and mutilation as routine punishment, the death penalty for misdemeanors and differences of opinion, assassination as the mechanism of political succession, rape as the spoils of war, pogroms as outlets for frustration, homicide as the major form of conflict resolution—all were unexceptionable features of life for most of human history. But, today, they are rare to nonexistent in the West, far less common elsewhere than they used to be, concealed when they do occur, and widely condemned when they are brought to light. (…)

The decline of violence is a fractal phenomenon, visible at the scale of millennia, centuries, decades, and years. It applies over several orders of magnitude of violence, from genocide to war to rioting to homicide to the treatment of children and animals. And it appears to be a worldwide trend, though not a homogeneous one. The leading edge has been in Western societies, especially England and Holland, and there seems to have been a tipping point at the onset of the Age of Reason in the early seventeenth century.

At the widest-angle view, one can see a whopping difference across the millennia that separate us from our pre-state ancestors. Contra leftist anthropologists who celebrate the noble savage, quantitative body-counts—such as the proportion of prehistoric skeletons with axemarks and embedded arrowheads or the proportion of men in a contemporary foraging tribe who die at the hands of other men—suggest that pre-state societies were far more violent than our own.

It is true that raids and battles killed a tiny percentage of the numbers that die in modern warfare. But, in tribal violence, the clashes are more frequent, the percentage of men in the population who fight is greater, and the rates of death per battle are higher. According to anthropologists like Lawrence Keeley, Stephen LeBlanc, Phillip Walker, and Bruce Knauft, these factors combine to yield population-wide rates of death in tribal warfare that dwarf those of modern times. If the wars of the twentieth century had killed the same proportion of the population that die in the wars of a typical tribal society, there would have been two billion deaths, not 100 million.

Political correctness from the other end of the ideological spectrum has also distorted many people’s conception of violence in early civilizations—namely, those featured in the Bible. This supposed source of moral values contains many celebrations of genocide, in which the Hebrews, egged on by God, slaughter every last resident of an invaded city. The Bible also prescribes death by stoning as the penalty for a long list of nonviolent infractions, including idolatry, blasphemy, homosexuality, adultery, disrespecting one’s parents, and picking up sticks on the Sabbath. The Hebrews, of course, were no more murderous than other tribes; one also finds frequent boasts of torture and genocide in the early histories of the Hindus, Christians, Muslims, and Chinese.

At the century scale, it is hard to find quantitative studies of deaths in warfare spanning medieval and modern times. Several historians have suggested that there has been an increase in the number of recorded wars across the centuries to the present, but, as political scientist James Payne has noted, this may show only that “the Associated Press is a more comprehensive source of information about battles around the world than were sixteenth-century monks.” Social histories of the West provide evidence of numerous barbaric practices that became obsolete in the last five centuries, such as slavery, amputation, blinding, branding, flaying, disembowelment, burning at the stake, breaking on the wheel, and so on. Meanwhile, for another kind of violence—homicide—the data are abundant and striking.

The criminologist Manuel Eisner has assembled hundreds of homicide estimates from Western European localities that kept records at some point between 1200 and the mid-1990s. In every country he analyzed, murder rates declined steeply—for example, from 24 homicides per 100,000 Englishmen in the fourteenth century to 0.6 per 100,000 by the early 1960s.

On the scale of decades, comprehensive data again paint a shockingly happy picture: Global violence has fallen steadily since the middle of the twentieth century. According to the Human Security Brief 2006, the number of battle deaths in interstate wars has declined from more than 65,000 per year in the 1950s to less than 2,000 per year in this decade. In Western Europe and the Americas, the second half of the century saw a steep decline in the number of wars, military coups, and deadly ethnic riots.

Zooming in by a further power of ten exposes yet another reduction. After the cold war, every part of the world saw a steep drop-off in state-based conflicts, and those that do occur are more likely to end in negotiated settlements rather than being fought to the bitter end. Meanwhile, according to political scientist Barbara Harff, between 1989 and 2005 the number of campaigns of mass killing of civilians decreased by 90 percent.

The decline of killing and cruelty poses several challenges to our ability to make sense of the world. To begin with, how could so many people be so wrong about something so important? Partly, it’s because of a cognitive illusion: We estimate the probability of an event from how easy it is to recall examples. Scenes of carnage are more likely to be relayed to our living rooms and burned into our memories than footage of people dying of old age. Partly, it’s an intellectual culture that is loath to admit that there could be anything good about the institutions of civilization and Western society. Partly, it’s the incentive structure of the activism and opinion markets: No one ever attracted followers and donations by announcing that things keep getting better. And part of the explanation lies in the phenomenon itself. The decline of violent behavior has been paralleled by a decline in attitudes that tolerate or glorify violence, and often the attitudes are in the lead. As deplorable as they are, the abuses at Abu Ghraib and the lethal injections of a few murderers in Texas are mild by the standards of atrocities in human history. But, from a contemporary vantage point, we see them as signs of how low our behavior can sink, not of how high our standards have risen.

The other major challenge posed by the decline of violence is how to explain it. A force that pushes in the same direction across many epochs, continents, and scales of social organization mocks our standard tools of causal explanation. The usual suspects—guns, drugs, the press, American culture—aren’t nearly up to the job. Nor could it possibly be explained by evolution in the biologist’s sense: Even if the meek could inherit the earth, natural selection could not favor the genes for meekness quickly enough. In any case, human nature has not changed so much as to have lost its taste for violence. Social psychologists find that at least 80 percent of people have fantasized about killing someone they don’t like. And modern humans still take pleasure in viewing violence, if we are to judge by the popularity of murder mysteries, Shakespearean dramas, Mel Gibson movies, video games, and hockey.

What has changed, of course, is people’s willingness to act on these fantasies. The sociologist Norbert Elias suggested that European modernity accelerated a “civilizing process” marked by increases in self-control, long-term planning, and sensitivity to the thoughts and feelings of others. These are precisely the functions that today’s cognitive neuroscientists attribute to the prefrontal cortex. But this only raises the question of why humans have increasingly exercised that part of their brains. No one knows why our behavior has come under the control of the better angels of our nature, but there are four plausible suggestions.

The first is that Thomas Hobbes got it right. Life in a state of nature is nasty, brutish, and short, not because of a primal thirst for blood but because of the inescapable logic of anarchy. Any beings with a modicum of self-interest may be tempted to invade their neighbors to steal their resources. The resulting fear of attack will tempt the neighbors to strike first in preemptive self-defense, which will in turn tempt the first group to strike against them preemptively, and so on. This danger can be defused by a policy of deterrence—don’t strike first, retaliate if struck—but, to guarantee its credibility, parties must avenge all insults and settle all scores, leading to cycles of bloody vendetta. These tragedies can be averted by a state with a monopoly on violence, because it can inflict disinterested penalties that eliminate the incentives for aggression, thereby defusing anxieties about preemptive attack and obviating the need to maintain a hair-trigger propensity for retaliation. Indeed, Eisner and Elias attribute the decline in European homicide to the transition from knightly warrior societies to the centralized governments of early modernity. And, today, violence continues to fester in zones of anarchy, such as frontier regions, failed states, collapsed empires, and territories contested by mafias, gangs, and other dealers of contraband.

Payne suggests another possibility: that the critical variable in the indulgence of violence is an overarching sense that life is cheap. When pain and early death are everyday features of one’s own life, one feels fewer compunctions about inflicting them on others. As technology and economic efficiency lengthen and improve our lives, we place a higher value on life in general.

A third theory, championed by Robert Wright, invokes the logic of non-zero-sum games: scenarios in which two agents can each come out ahead if they cooperate, such as trading goods, dividing up labor, or sharing the peace dividend that comes from laying down their arms. As people acquire know-how that they can share cheaply with others and develop technologies that allow them to spread their goods and ideas over larger territories at lower cost, their incentive to cooperate steadily increases, because other people become more valuable alive than dead.

Then there is the scenario sketched by philosopher Peter Singer. Evolution, he suggests, bequeathed people a small kernel of empathy, which by default they apply only within a narrow circle of friends and relations. Over the millennia, people’s moral circles have expanded to encompass larger and larger polities: the clan, the tribe, the nation, both sexes, other races, and even animals. The circle may have been pushed outward by expanding networks of reciprocity, à la Wright, but it might also be inflated by the inexorable logic of the golden rule: The more one knows and thinks about other living things, the harder it is to privilege one’s own interests over theirs. The empathy escalator may also be powered by cosmopolitanism, in which journalism, memoir, and realistic fiction make the inner lives of other people, and the contingent nature of one’s own station, more palpable—the feeling that “there but for fortune go I”.

Whatever its causes, the decline of violence has profound implications. It is not a license for complacency: We enjoy the peace we find today because people in past generations were appalled by the violence in their time and worked to end it, and so we should work to end the appalling violence in our time. Nor is it necessarily grounds for optimism about the immediate future, since the world has never before had national leaders who combine pre-modern sensibilities with modern weapons.

But the phenomenon does force us to rethink our understanding of violence. Man’s inhumanity to man has long been a subject for moralization. With the knowledge that something has driven it dramatically down, we can also treat it as a matter of cause and effect. Instead of asking, “Why is there war?” we might ask, “Why is there peace?” From the likelihood that states will commit genocide to the way that people treat cats, we must have been doing something right. And it would be nice to know what, exactly, it is.”

Steven Pinker, Canadian-American experimental psychologist, cognitive scientist, linguist and popular science author, Harvard College Professor, A History Of Violence, Edge, March 27, 2007 (First published in The New Republic, 3.19.2007)
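Wright’s non-zero-sum logic, mentioned in the third theory above, is easy to make concrete with a toy payoff table. A minimal sketch in Python (all payoff numbers are invented for illustration and are not taken from Wright or Pinker):

```python
# Payoffs (invented) to players (A, B) for each pair of choices.
# The game is non-zero-sum: mutual trade creates value instead of
# merely redistributing a fixed pie.
payoffs = {
    ("trade", "trade"): (3, 3),   # both gain from exchange and division of labour
    ("trade", "alone"): (0, 1),   # A prepares goods that nobody buys
    ("alone", "trade"): (1, 0),
    ("alone", "alone"): (1, 1),   # each side scrapes by on its own
}

# Joint totals show the pie itself grows when both sides cooperate,
# which is why other people become "more valuable alive than dead".
for choices, (a, b) in payoffs.items():
    print(choices, "->", a + b)
```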

Steven Pinker on the myth of violence

Steven Pinker charts the decline of violence from Biblical times to the present, and argues that, though it may seem illogical and even obscene, given Iraq and Darfur, we are living in the most peaceful time in our species’ existence.

Steven Pinker on the myth of violence, TED.com, Mar 2007

See also:

☞ Steven Pinker, A History of Violence Edge Master Class 2011, Edge, Sept 27, 2011
The Psychology of Violence - a fascinating look at a violent act and a modern rethink of the psychology of shame and honour in preventing it
☞ David Runciman, The Better Angels of Our Nature by Steven Pinker - review, The Guardian, Sept 22, 2011
Violence tag on Lapidarium notes

Aug
5th
Fri
permalink

Denis Dutton: A Darwinian theory of beauty

"There are many differences among the arts, but there are also universal, cross-cultural aesthetic pleasures and values. How can we explain this universality? (…) The experience of beauty is one component in a whole series of Darwinian adaptations. (…)

It’s women who actually push history forward. Darwin himself, by the way, had no doubts that the peacock’s tail was beautiful in the eyes of the peahen. He actually used that word. (…) We can say that the experience of beauty is one of the ways that evolution has of arousing and sustaining interest or fascination, even obsession, in order to encourage us toward making the most adaptive decisions for survival and reproduction. Beauty is nature’s way of acting at a distance, so to speak. I mean, you can’t expect to eat an adaptively beneficial landscape. It would hardly do to eat your baby or your lover. So evolution’s trick is to make them beautiful, to have them exert a kind of magnetism to give you the pleasure of simply looking at them.

Consider briefly an important source of aesthetic pleasure, the magnetic pull of beautiful landscapes. People in very different cultures all over the world tend to like a particular kind of landscape, a landscape that just happens to be similar to the Pleistocene savannas where we evolved. (…)

It’s a kind of Hudson River school landscape featuring open spaces of low grasses interspersed with copses of trees. The trees, by the way, are often preferred if they fork near the ground, that is to say, if they’re trees you could scramble up if you were in a tight fix. The landscape shows the presence of water directly in view, or evidence of water in a bluish distance, indications of animal or bird life as well as diverse greenery and finally — get this — a path or a road, perhaps a riverbank or a shoreline, that extends into the distance, almost inviting you to follow it. This landscape type is regarded as beautiful, even by people in countries that don’t have it. The ideal savanna landscape is one of the clearest examples where human beings everywhere find beauty in similar visual experience.

Artistic beauty

But, someone might argue, that’s natural beauty. How about artistic beauty? Isn’t that exhaustively cultural? No, I don’t think it is. And once again, I’d like to look back to prehistory to say something about it. It is widely assumed that the earliest human artworks are the stupendously skillful cave paintings that we all know from Lascaux and Chauvet. Chauvet caves are about 32,000 years old, along with a few small, realistic sculptures of women and animals from the same period. But artistic and decorative skills are actually much older than that.

Beautiful shell necklaces that look like something you’d see at an arts and crafts fair, as well as ochre body paint, have been found from around 100,000 years ago. But the most intriguing prehistoric artifacts are older even than this. I have in mind the so-called Acheulian hand axes. The oldest stone tools are choppers from the Olduvai Gorge in East Africa. They go back about two and a half million years. These crude tools were around for thousands of centuries, until around 1.4 million years ago when Homo erectus started shaping single, thin stone blades, sometimes rounded ovals, but often in, what are to our eyes, an arresting, symmetrical pointed leaf or teardrop form.

These Acheulian hand axes — they’re named after St. Acheul in France, where finds were made in the 19th century — have been unearthed in their thousands, scattered across Asia, Europe and Africa, almost everywhere Homo erectus and Homo ergaster roamed. Now, the sheer numbers of these hand axes show that they can’t have been made for butchering animals. And the plot really thickens when you realize that, unlike other Pleistocene tools, the hand axes often exhibit no evidence of wear on their delicate blade edges. And some, in any event, are too big to use for butchery. Their symmetry, their attractive materials and, above all, their meticulous workmanship are simply quite beautiful to our eyes, even today.

So what were these ancient — I mean, they’re ancient, they’re foreign, but they’re at the same time somehow familiar. What were these artifacts for? The best available answer is that they were literally the earliest known works of art, practical tools transformed into captivating aesthetic objects, contemplated both for their elegant shape and their virtuoso craftsmanship. Hand axes mark an evolutionary advance in human history — tools fashioned to function as what Darwinians call fitness signals — that is to say, displays that are performances like the peacock’s tail, except that, unlike hair and feathers, the hand axes are consciously cleverly crafted. Competently made hand axes indicated desirable personal qualities — intelligence, fine motor control, planning ability, conscientiousness and sometimes access to rare materials. Over tens of thousands of generations, such skills increased the status of those who displayed them and gained a reproductive advantage over the less capable. You know, it’s an old line, but it has been shown to work — “Why don’t you come up to my cave, so I can show you my hand axes.”

Except, of course, what’s interesting about this is that we can’t be sure how that idea was conveyed, because the Homo erectus that made these objects did not have language. It’s hard to grasp, but it’s an incredible fact. This object was made by a hominid ancestor — Homo erectus or Homo ergaster — between 50,000 and 100,000 years before language. Stretching over a million years, the hand axe tradition is the longest artistic tradition in human and proto-human history. By the end of the hand axe epic, Homo sapiens — as they were then called, finally — were doubtless finding new ways to amuse and amaze each other by, who knows, telling jokes, storytelling, dancing, or hairstyling.

Yes, hairstyling — I insist on that. For us moderns, virtuoso technique is used to create imaginary worlds in fiction and in movies, to express intense emotions with music, painting and dance. But still, one fundamental trait of the ancestral personality persists in our aesthetic cravings: the beauty we find in skilled performances. From Lascaux to the Louvre to Carnegie Hall, human beings have a permanent innate taste for virtuoso displays in the arts. We find beauty in something done well. So the next time you pass a jewelry shop window displaying a beautifully cut teardrop-shaped stone, don’t be so sure it’s just your culture telling you that that sparkling jewel is beautiful. Your distant ancestors loved that shape and found beauty in the skill needed to make it, even before they could put their love into words.

Is beauty in the eye of the beholder? No, it’s deep in our minds. It’s a gift, handed down from the intelligent skills and rich emotional lives of our most ancient ancestors. Our powerful reaction to images, to the expression of emotion in art, to the beauty of music, to the night sky, will be with us and our descendants for as long as the human race exists.”

Denis Dutton (1944-2010), academic, web entrepreneur and professor of philosophy at the University of Canterbury in Christchurch, New Zealand, Denis Dutton: A Darwinian theory of beauty, TED.com, Feb 2010 (transcript)

See also:

The Science of Art. A Neurological Theory of Aesthetic Experience
Beauty is in the medial orbitofrontal cortex of the beholder, study finds

permalink

Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’


                                          “Brain (Left)” and “Brain (Right)” ©Don Stewart

"Simply put, our brain is inherently well suited for some tasks, but ill suited for others. Unfortunately, the brain’s weaknesses include recognizing which tasks are which, so for the most part we remain ignorantly blissful of the extent to which our lives are governed by the brain’s bugs.”

"Like a parent that carefully filters the information her child is exposed to, the brain edits and censors much of the information it feeds to the conscious mind. In the same fashion that your brain likely edited out the extra "the" from the previous sentence, we are generally blissfully unaware of the arbitrary and irrational factors that govern our decisions and behaviors."

Dean Buonomano, Brain Bugs. How the brain’s flaws shape our lives, W.W. Norton, 2011

Memory errors

"One type of memory error that we make, a memory bug, is really a product of the fact that in human memory, there’s no distinct process or distinction between storage and retrieval.

So when a computer or a DVD writes something down, it has one laser that’s used to store the memory, and it has another laser to retrieve the memory, and those are very distinct processes.

Now, in human memory, the distinction between storage and retrieval is not very clear. (…)

This should be seen as a consequence of the fact that memory is written down as we experience it. It’s being continuously updated. And the synapses that undergo changes in strength - so as you alluded to earlier, one of the ways the brain writes down information is by making new synapses, making new connections or strengthening new ones or weakening old ones.

And that process uses these synapses that get strengthened, but the retrieval also uses those same synapses. So that can strengthen that pathway. (…)

The perception of time

When we think of the perception of time, most people think of the subjective sense of time: How long have they been listening to this program, how long are they stuck in traffic? And the brain seems to have multiple different mechanisms. One thing that we’ve learned about how the brain tells time is that, unlike the clocks on our wrists that can be used to tell a few milliseconds or months and years, the brain has very fundamentally different mechanisms for telling very short periods of time and very long periods of time.

And that’s a consequence of the evolutionary process, that it came up with redundant solutions and different solutions depending upon the adaptive needs of different animals.

And it turns out that we don’t seem to have a very precise clock. Time is very much distorted when we are anticipating what’s about to happen, when we’re nervous, when we’re stressed and when we have high-adrenaline moments. Our internal clock is not that accurate. (…)

We are living in a time and place we didn’t evolve to live in

"And humans suffer some of the same consequences of living in a time and place we didn’t evolve to live in. (…) And by peering into the brain, we can learn a lot about whywe are good at some things and why we are not very good at others."

"The brain is an incomprehensibly complex biological computer, responsible for every action we have taken and every decision, thought, and feeling we’ve ever had. This is probably a concept that most people do not find comforting. Indeed, the fact that the mind emerges from the brain is something not all brains have come to accept. But our reticence to acknowledge that our humanity derives solely from the physical brain should not come as a surprise. The brain was not designed to understand itself anymore than a calculator was designed to surf the Web.Dean Buonomano, Brain Bugs. How the brain’s flaws shape our lives, W.W. Norton, 2011

Our neuro-operating system, if you will, the set of rules we’re endowed with in our genes that provide instructions on how to build the brain, what it should come preloaded with, the innate biases we should have, and most animals have innate biases to fear predators with big sharp teeth and to fear poisonous spiders and poisonous snakes. Because those innate fears increase survival. And you don’t want to have to learn to fear snakes because you might not have a second chance. So we still carry that genetic baggage within our neuro-operating system.

It’s safe to say that it’s outdated. We currently live in a world in which, in the United States, probably few people a year die or suffer severe consequences due to snake bites. But every year 44,000 people die of car accidents. So in the same way that evolution did not prepare, say, skunks to cope with the dangers of automobiles, evolution did not prepare humans to face the dangers of many of the things that surround us in our modern life, including automobiles, or an excess of food, for example: the problems we deal with due to obesity and too much cholesterol are all things that now have very dramatic effects on our lives, and we weren’t prepared for those things by the evolutionary process. (…)

A lot of our decisions are the product of two different systems interacting within the brain. And very loosely speaking, you can think of one of these as the automatic system, which is very unconscious and associative and emotional; and people can think of this as intuition. And then we have the reflective system, which is effortful, requires knowledge and careful deliberation. And people can get a quick feel for these two systems in operation with the following examples. The old trick question: what do cows drink? The part of your brain that just thought of milk was the automatic system. And then the reflective system comes in and says wait a minute. That’s wrong. The answer is water.

Similarly, if I asked: I’m going to throw four coins up in the air, what’s the probability that two of them will be heads and two of them will be tails? Now, the part of the brain that’s just thinking, well, it sounds like it should be 50 percent, because I said half the coins tails, half the coins heads. That’s again basically the automatic system. It would take the reflective system, some serious reflection, to work out the math and come up with an answer of six-sixteenths.

Now, in most cases we reach happy balances between both of these systems. And clearly, when we are understanding each other’s speech and making rapid decisions, the automatic system provides great balance as to what the proper answer is. But when we need to engage our reflective system and ask questions, such as the probability question that I just asked, sometimes we are misled because we trust the automatic system too much and the reflective system doesn’t really get through in some situations. And this can lead us astray, for example in the case of the temporal discounting situation where I asked if you want $100 today or $120 in the future. So the automatic system, which is biased by immediate gratification, might get the edge in that situation.”

Dean Buonomano, professor in the departments of neurobiology and psychology at the University of California at Los Angeles and an investigator at UCLA’s Brain Research Institute, 'Brain Bugs': Cognitive Flaws That 'Shape Our Lives', NPR, July 14, 2011
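A quick check of the six-sixteenths answer in the coin example above; a minimal sketch in Python (not from the source), simply enumerating the sixteen equally likely outcomes:

```python
from itertools import product
from math import comb

# All 2**4 = 16 equally likely outcomes of four fair coin flips
outcomes = list(product("HT", repeat=4))
two_heads = [o for o in outcomes if o.count("H") == 2]

print(f"{len(two_heads)}/{len(outcomes)}")   # 6/16
print(comb(4, 2) / 2**4)                     # 0.375, the reflective system's answer
```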

See also:

A risk-perception: What You Don’t Know Can Kill You
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Iain McGilchrist on The Divided Brain and the Making of the Western World
Daniel Kahneman on the riddle of experience vs. memory
Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
Mind & Brain tag on Lapidarium

Jul
31st
Sun
permalink

In an era of global interconnectedness, what is the nature of cross-cultural exchange?

                       

"If you look at the world through a multicultural lens, you realize that that whole idea of exploration is a 19th century concept that has no meaning any more. I think that anthropology actually began in a beautiful way, which was that by studying another culture, albeit the exotic other, you could learn something about your common humanity and about humanity in general. Then it was very quickly co-opted by the ideology of its time and the anthropological lens was used to rationalize distinctions of class and race. Culture came to be seen as a set of frozen moments in time in some imagined evolutionary progression that of course inevitably placed Victorian Europe at the apex and sloped down to the so-called primitives of the world. That idea is now completely irrelevant, but that is not to say that there can be no explorations of spirit and of culture. Rather, all of life is an exploration of new paradigms of thought.

Q: So exploration becomes an intellectual rather than a physical phenomenon?

One of the most exciting explorations of our time has come from the realm of genetics. We have literally proven to be true what the philosophers always hoped, which is that we are all connected, all brothers and sisters. Not in the spirit of some hippie cliché, but quite literally: we are cut from the same genetic cloth. We always say Americans are so culturally myopic. Actually, all peoples are. The names of many Indian tribes translate to “the people—” the implication being that everybody else is a savage. And that ethnocentric point of view was what almost all cultures celebrated throughout history. But if you now accept that all populations are descended from this handful of people that walked out of Africa 65,000 years ago, you have to also accept that they all fundamentally share the same raw intellectual capacity, the same genius. How that genius is expressed is simply a matter of choice and cultural orientation. Suddenly you see there is no progression of culture; there is a series of options. It’s not that other peoples are simply failed attempts at being us, or that they’ve missed the train of history by not being like us. No, they are unique answers to a fundamental question—what does it mean to be human? When we stop thinking of ourselves as the paragon of humanity’s potential, it also liberates us from this conceit that we’re on a train to disaster. You realize what we are is just one option, rooted in a relatively shallow past of just 300 years of industrial activity, and that these other peoples offer not some road map to where we should go but a suggestion that there are other ways of living. Those kind of intellectual revelations that are the outcome of intellectual exploration are every bit as valid as discovering a new continent. Exploring how we’re going to all live on this planet—that’s one we all need to be a part of.

Q: We now have instant access to images and voices from around the world. How has that changed the nature of cross-cultural encounters?

Whatever our notion of culture may have been when societies lived as isolates is long gone, and we’re moving towards a world where the issue isn’t modern versus traditional, but just the rights of free people to choose the components of their lives. How can we find a way that people can have access to the benefits of modernity, without that engagement demanding the death of their culture? For one thing, when people lose the conditions and roots of their traditions, it is simply geopolitically unstable. My objection to “the world is flat” theory is that it implies that the world we’re all melting down to is our world. And that’s just not true. It’s going to be a more interconnected world, and it’s going to be a world that will be the consequence of all our own imaginings, but I would not want it to be, and it won’t be, just everybody melting down to being like us. (…)

Q: You’ve written about zombies, witch-doctors, and religion the world over. Where do you see magic in the contemporary world?

It’s not magic so much as metaphor. (…) In the Andes of Peru, a kid really does believe that a mountain is an acting spirit that will direct his destiny. That doesn’t mean he’s living in some la-la-land or fantasy; it means he has a solid sense of the earth being actually what we know it to be—a source of life, a source of food. The most important consequence is not whether the belief is true or not, but how it affects how people treat that mountain. If you think it’s divine you’re not going to blow it up.

Q: You’re advocating a kind of pragmatist environmentalist philosophy.

Exactly. People like to say indigenous people are closer to the earth, like some Rousseauvian ideal. That misses the whole point. I was raised to believe that the forests of British Columbia existed to be cut. That was the foundation of the ideology of what we called scientific forestry. Which was a total construct. It was a slogan; it wasn’t science. Yet we didn’t believe the earth had any resonance to us beyond board-feet cellulose. That is different from a kid from a tribe who had to go into those forests and confront animal spirits to bring back the wisdom to the potlatch. And again, it doesn’t matter whether that forest is the den of demons or just cellulose, that’s not the interesting question. It’s how the belief system affects the ecological footprint of people. So when I talk about magic, it’s really about metaphor. Metaphor is not air, not fluff. People always ask me, do I believe zombies are real? It’s like asking me do I believe Jesus Christ is real. Do I believe there was a guy who walked on water? I probably don’t. Do I believe that this phenomenon of Jesus Christ influences behavior on a daily basis? Of course I do! I can’t tell you if there are zombies walking around Haiti but I can tell you that the idea of zombies influences behavior in Haiti. We cling to our rationality. We liberated ourselves from a certain trap and we’re reluctant to go anywhere back towards it. It’s one of the reasons we are all so confused. If you think of the pace of change that people are expected to absorb and deal with, it’s pretty daunting.”

Wade Davis, Canadian anthropologist, ethnobotanist, "If You Think Something Is Divine, You’re Not Going To Blow It Up", The European, 07.03.2011 (Illustration source)

See also:

Zygmunt Bauman: Europe’s task consists of passing on to all the art of everyone learning from everyone

Jul
29th
Fri
permalink

The evolution of generosity. The human impulse to be kind to unknown individuals is not the biological aberration it might seem

         

"The extraordinary success of Homo sapiens is a result of four things: intelligence, language, an ability to manipulate objects dexterously in order to make tools, and co-operation. (…) Why are humans so willing to collaborate with unrelated strangers, even to the point of risking being cheated by people whose characters they cannot possibly know?

Evidence from economic games played in the laboratory for real money suggests humans are both trusting of those they have no reason to expect they will ever see again, and surprisingly unwilling to cheat them—and that these phenomena are deeply ingrained in the species’s psychology. (…)

Something extraordinary, such as a need for extreme collaboration prompted by the emergence of warfare that uses weapons, has happened in recent human evolution to promote the emergence of an instinct for unconditional generosity. (…)

It is possible to isolate features of interest and examine how they evolve in computer simulations. To this end Dr Delton and Dr Krasnow designed software agents that were able to meet up and interact in a computer’s processor.

The agents’ interactions mimicked those of economic games in the real world, though the currency was arbitrary “fitness units” rather than dollars. This meant that agents which successfully collaborated built up fitness over the period of their collaboration. Those that cheated on the first encounter got a one-off allocation of fitness, but would never be trusted in the future. Each agent had an inbuilt and heritable level of trustworthiness (ie, the likelihood that it would cheat at the first opportunity) and, in each encounter it had, it was assigned a level of likelihood (detectable by the other agent) that it would be back for further interactions.

After a certain amount of time the agents reproduced in proportion to their accumulated fitness; the old generation died, and the young took over. The process was then repeated for 10,000 generations (equivalent to about 200,000 years of human history, or the entire period for which Homo sapiens has existed), to see what level of collaboration would emerge.

The upshot was that, as the researchers predicted, generosity pays—or, rather, the cost of early selfishness is greater than the cost of trust. This is because the likelihood that an encounter will be one-off, and thus worth cheating on, is just that: a likelihood, rather than a certainty. This fact was reflected in the way the likelihood values were created in the model. They were drawn from a probability distribution, so the actual future encounter rate was only indicated, not precisely determined by them.

For most plausible sets of costs, benefits and chances of future encounters the simulation found that it pays to be trusting, even though you will sometimes be cheated. Which, if you think about it, makes perfect sense. Previous attempts to study the evolution of trust using games have been arranged to make it clear to the participants whether their encounter was a one-off, and drawn their conclusions accordingly. That, though, is hardly realistic. In the real world, although you might guess, based on the circumstances, whether or not you will meet someone again, you cannot know for sure. Moreover, in the ancient world of hunter-gatherers, limited movement meant a second encounter would be much more likely than it is in the populous, modern urban world.

No need, then, for special mechanisms to explain generosity. An open hand to the stranger makes evolutionary as well as moral sense. Except, of course, that those two senses are probably, biologically speaking, the same thing.”

The evolution of generosity: Welcome, stranger, The Economist, Jul 30th 2011
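The Economist describes the Delton and Krasnow model only in outline. Below is a minimal sketch of that kind of agent-based simulation in Python; the structure (heritable trustworthiness, uncertain repeat encounters, fitness-proportional reproduction) follows the description above, but every numeric parameter is invented for illustration, not taken from the study:

```python
import random

# Invented parameters; the article gives none.
POP_SIZE = 200
GENERATIONS = 500          # the original ran 10,000 generations
ENCOUNTERS = 20            # encounters per agent per generation (roughly)
CHEAT_PAYOFF = 3.0         # one-off gain from cheating at the first opportunity
COOP_PAYOFF = 1.0          # gain per future interaction if the agent cooperates
MEAN_REPEATS = 5.0         # mean of the (uncertain) number of future interactions
MUTATION_SD = 0.05

def new_agent(parent_trust=None):
    """'Trustworthiness' = probability of cooperating at the first opportunity."""
    if parent_trust is None:
        trust = random.random()
    else:
        trust = min(1.0, max(0.0, parent_trust + random.gauss(0.0, MUTATION_SD)))
    return {"trust": trust, "fitness": 0.0}

def encounter(a, b):
    # The true number of future interactions is drawn from a distribution,
    # so it is only indicated, never known for certain, at decision time.
    repeats = random.expovariate(1.0 / MEAN_REPEATS)
    for agent in (a, b):
        if random.random() < agent["trust"]:
            agent["fitness"] += COOP_PAYOFF * repeats   # collect the future stream
        else:
            agent["fitness"] += CHEAT_PAYOFF            # one-off gain, nothing after

population = [new_agent() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    for _ in range(POP_SIZE * ENCOUNTERS // 2):
        a, b = random.sample(population, 2)
        encounter(a, b)
    # Reproduction in proportion to accumulated fitness; the old generation dies.
    weights = [max(agent["fitness"], 1e-9) for agent in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [new_agent(p["trust"]) for p in parents]
    if generation % 100 == 0:
        mean_trust = sum(agent["trust"] for agent in population) / POP_SIZE
        print(f"generation {generation}: mean trustworthiness {mean_trust:.2f}")
```

With these invented payoffs the expected return to cooperating (5.0 fitness units per encounter, on average) exceeds the one-off cheat payoff (3.0), so mean trustworthiness should drift upward over the generations, which is the article's point: the cost of early selfishness is greater than the cost of trust, even though trust is sometimes betrayed.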

Jul
5th
Tue
permalink

Tendency Toward Egalitarianism May Have Helped Humans Survive

                  
"Darwinian-minded analysts argue that Homo sapiens have an innate distaste for hierarchical extremes, the legacy of our long nomadic prehistory as tightly knit bands living by veldt-ready team-building rules: the belief in fairness and reciprocity, a capacity for empathy and impulse control, and a willingness to work cooperatively in ways that even our smartest primate kin cannot match. As Michael Tomasello of the Max Planck Institute for Evolutionary Anthropology has pointed out, you will never see two chimpanzees carrying a log together. The advent of agriculture and settled life may have thrown a few feudal monkeys and monarchs into the mix, but evolutionary theorists say our basic egalitarian leanings remain.

Studies have found that the thirst for fairness runs deep. As Ernst Fehr of the University of Zurich and his colleagues reported in the journal Nature, by the age of 6 or 7, children are zealously devoted to the equitable partitioning of goods, and they will choose to punish those who try to grab more than their arithmetically proper share of Smarties and jelly beans even when that means the punishers must sacrifice their own portion of treats.

In follow-up research with older children and adolescents that has yet to be published, Dr. Fehr and his colleagues have found a more nuanced understanding of fairness, an acknowledgment that some degree of inequality can make sense: The kid who studies every night deserves a better grade than the slacker. Nevertheless, said Dr. Fehr, there are limits to teenage tolerance. “ ‘One for me, two for you’ may not be too bad,” Dr. Fehr said. “But ‘one for me, five for you’ would not be accepted.”

A sense of fairness is both cerebral and visceral, cortical and limbic. In the journal PLoS Biology, Katarina Gospic of the Karolinska Institute’s Osher Center in Stockholm and her colleagues analyzed brain scans of 35 subjects as they played the famed Ultimatum game, in which participants bargain over how to divide up a fixed sum of money. Immediately upon hearing an opponent propose a split of 80 percent me, 20 percent you, scanned subjects showed a burst of activity in the amygdala, the ancient seat of outrage and aggression, followed by the arousal of higher cortical domains associated with introspection, conflict resolution and upholding rules; and 40 percent of the time they angrily rejected the deal as unfair.

That first swift limbic kick proved key. When given a mild anti-anxiety drug that suppressed the amygdala response, subjects still said they viewed an 80-20 split as unjust, but their willingness to reject it outright dropped in half. “This indicates that the act of treating people fairly and implementing justice in society has evolutionary roots,“ Dr. Gospic said. “It increases our survival.”

David Sloan Wilson, an evolutionary theorist at the State University of New York at Binghamton, sees the onset of humanity’s cooperative, fair-and-square spirit as one of the major transitions in the history of life on earth, moments when individual organisms or selection units band together and stake their future fitness on each other. A larger bacterial cell engulfs a smaller bacterial cell to form the first complex eukaryotic cell. Single cells merge into multicellular organisms of specialized parts. Ants and bees become hive-minded superorganisms and push all other insects aside.

“A major transition occurs when you have mechanisms for suppressing fitness differences and establishing equality within groups, so that it is no longer possible to succeed at the expense of your group,” Dr. Wilson said. “It’s a rare event, and it’s hard to get started, but when it does you can quickly dominate the earth.” Human evolution, he said, “clearly falls into this paradigm.”

Our rise to global dominance began, paradoxically enough, when we set rigid dominance hierarchies aside. “In a typical primate group, the toughest individuals can have their way and dominate everybody else in the group,” said Dr. Wilson. “Chimps are very smart, but their intelligence is predicated on distrust.”

Our ancestors had to learn to trust their neighbors, and the seeds of our mutuality can be seen in our simplest gestures, like the willingness to point out a hidden object to another, as even toddlers will do. Early humans also needed ways to control would-be bullies, and our exceptional pitching skills — which researchers speculate originally arose to help us ward off predators — probably helped. “We can throw much better than any other primate,” Dr. Wilson said, “and once we could throw things at a distance, all of a sudden the alpha male is vulnerable to being dispatched with stones. Stoning might have been one of our first adaptations.”

Low hierarchy does not mean no hierarchy. Through ethnographic and cross-cultural studies, researchers have concluded that the basic template for human social groups is moderately but not unerringly egalitarian. They have found gradients of wealth and power among even the most nomadic groups, but such gradients tend to be mild. In a recent analysis of five hunter-gatherer populations, Eric Alden Smith of the University of Washington and his colleagues found the average degree of income inequality to be roughly half that seen in the United States, and close to the wealth distribution of Denmark.

Interestingly, another recent study found that when Americans were given the chance to construct their version of the optimal wealth gradient for America, both Republicans and Democrats came up with a chart that looked like Sweden’s. There’s no need to insult the meat in the land of lutefisk.”

— Natalie Angier, science journalist, Thirst for Fairness May Have Helped Us Survive, The New York Times, July 4, 2011 (Illustration source)
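For readers unfamiliar with the Ultimatum game used in the Gospic study quoted above, here is a minimal sketch of its rules in Python; the 80-20 split comes from the study, but the responder's 30-percent fairness threshold is invented for illustration:

```python
def ultimatum_round(total, proposer_keeps, responder_accepts):
    """One round: the proposer offers a split of `total`; if the responder
    rejects the offer, neither player gets anything."""
    offer = total - proposer_keeps
    if responder_accepts(offer, total):
        return proposer_keeps, offer
    return 0, 0

# A simple responder who rejects any offer below 30% of the pot (invented
# threshold). In the study above, 40 percent of subjects rejected 80-20 splits.
def fairness_threshold(offer, total):
    return offer >= 0.3 * total

print(ultimatum_round(100, 80, fairness_threshold))  # (0, 0): the 80-20 split is refused
print(ultimatum_round(100, 60, fairness_threshold))  # (60, 40): accepted
```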