Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso


Nov 1st, Fri

Bill Gates: ‘If you think connectivity is the key thing, that’s great. I don’t. The world is not flat and PCs are not, in the hierarchy of human needs’


The internet is not going to save the world, whatever Mark Zuckerberg and Silicon Valley’s tech billionaires believe. (…) But eradicating disease just might.

Bill Gates describes himself as a technocrat. But he does not believe that technology will save the world. Or, to be more precise, he does not believe it can solve a tangle of entrenched and interrelated problems that afflict humanity’s most vulnerable: the spread of diseases in the developing world and the poverty, lack of opportunity and despair they engender. “I certainly love the IT thing,” he says. “But when we want to improve lives, you’ve got to deal with more basic things like child survival, child nutrition.”

These days, it seems that every West Coast billionaire has a vision for how technology can make the world a better place. A central part of this new consensus is that the internet is an inevitable force for social and economic improvement; that connectivity is a social good in itself. It was a view that recently led Mark Zuckerberg to outline a plan for getting the world’s unconnected 5 billion people online, an effort the Facebook boss called “one of the greatest challenges of our generation”. But asked whether giving the planet an internet connection is more important than finding a vaccination for malaria, the co-founder of Microsoft and world’s second-richest man does not hide his irritation: “As a priority? It’s a joke.”

Then, slipping back into the sarcasm that often breaks through when he is at his most engaged, he adds: “Take this malaria vaccine, [this] weird thing that I’m thinking of. Hmm, which is more important, connectivity or malaria vaccine? If you think connectivity is the key thing, that’s great. I don’t.” (…)

Gates says now. “The world is not flat and PCs are not, in the hierarchy of human needs, in the first five rungs.” (…)

To Diamandis’s argument that there is more good to be done in the world by building new industries than by giving away money, meanwhile, he has a brisk retort: “Industries are only valuable to the degree they meet human needs. There’s not some – at least in my psyche – this notion of, oh, we need new industries. We need children not to die, we need people to have an opportunity to get a good education.” (…)

Gates describes himself as a natural optimist. But he admits that the fight with the US government seriously challenged his belief that the best outcome would always prevail. With a typically generalising sweep across history, he declares that governments have “worked pretty well on balance in playing their role to improve the human condition” and that in the US since 1776, “the government’s played an absolutely central role and something wonderful has happened”. But that doesn’t settle his unease.

“The closer you get to it and see how the sausage is made, the more you go, oh my God! These guys don’t even actually know the budget. It makes you think: can complex, technocratically deep things – like running a healthcare system properly in the US in terms of impact and cost – can that get done? It hangs in the balance.”

It isn’t just governments that may be unequal to the task. On this analysis, the democratic process in most countries is also straining to cope with the problems thrown up by the modern world, placing responsibilities on voters that they can hardly be expected to fulfil. “The idea that all these people are going to vote and have an opinion about subjects that are increasingly complex – where what seems, you might think … the easy answer [is] not the real answer. It’s a very interesting problem. Do democracies faced with these current problems do these things well?”

An exclusive interview with Bill Gates, The Financial Times, Nov 1, 2013.

Aug 10th, Fri

God and the Ivory Tower. What we don’t understand about religion just might kill us

                   
(Illustration: Medieval miniature painting of the Siege of Antioch (1490). The Crusades were a series of military campaigns fought mainly between Christian Europe and Muslims. Shown here is a battle scene from the First Crusade.)

"The era of world struggle between the great secular ideological -isms that began with the French Revolution and lasted through the Cold War (republicanism, anarchism, socialism, fascism, communism, liberalism) is passing on to a religious stage. Across the Middle East and North Africa, religious movements are gaining social and political ground, with election victories by avowedly Islamic parties in Turkey, Palestine, Egypt, Tunisia, and Morocco. As Israel’s National Security Council chief, Gen. Yaakov Amidror (a religious man himself), told me on the eve of Tunisia’s elections last October, “We expect Islamist parties to soon dominate all governments in the region, from Afghanistan to Morocco, except for Israel.”

On a global scale, Protestant evangelical churches (together with Pentecostalists) continue to proliferate, especially in Latin America, but also keep pace with the expansion of fundamentalist Islam in southern Africa and eastern and southern Asia. In Russia, a clear majority of the population remains religious despite decades of forcibly imposed atheism. Even in China, where the government’s commission on atheism has the Sisyphean job of making that country religion-free, religious agitation is on the rise. And in the United States, a majority says it wants less religion in politics, but an equal majority still will not vote for an atheist as president.

But if reams of social scientific analysis have been produced on religion’s less celestial cousins — from the nature of perception and speech to how we rationalize and shop — faith is not a matter that rigorous science has taken seriously. To be sure, social scientists have long studied how religious practices correlate with a wide range of economic, social, and political issues. Yet, for nearly a century after Harvard University psychologist William James’s 1902 masterwork, The Varieties of Religious Experience, there was little serious investigation of the psychological structure or neurological and biological underpinnings of religious belief that determine how religion actually causes behavior. And that’s a problem if science aims to produce knowledge that improves the human condition, including a lessening of cultural conflict and war.

Religion molds a nation in which it thrives, sometimes producing solidarity and sacred causes so powerful that citizens are willing to kill or die for a common good (as when Judea’s Jews around the time of Christ persisted in rebellion unto political annihilation in the face of the Roman Empire’s overwhelming military might). But religion can also hinder a society’s ability to work out differences with others, especially if those others don’t understand what religion is all about. That’s the mess we find ourselves in today, not only among different groups of Americans in the so-called culture wars, but between secular and Judeo-Christian America and many Muslim countries.

Time and again, countries go to war without understanding the transcendent drives and dreams of adversaries who see a very different world. Yet we needn’t fly blindly into the storm.

Science can help us understand religion and the sacred just as it can help us understand the genome or the structure of the universe. This, in turn, can make policy better informed.

Fortunately, the last few years show progress in scientific studies of religion and the sacred, though headwinds remain strong. Across history and cultures, religion has often knit communities together under the rule of sentient, but immaterial deities — that is, spiritual beings whose description is logically contradictory and empirically unfalsifiable. Cross-cultural studies pioneered by anthropologist Pascal Boyer show that these miraculous features — talking bushes, horses that leap into the sky — make lasting impressions on people and thereby increase the likelihood that they will be passed down to the next generation. Implausibility also facilitates cultural transmission in a more subtle manner — fostering adaptability of religious beliefs by opening the door to multiple interpretations (as with metaphors or weekly sermons).

And the greater the investment in outlandishness, the better. This is because adherence to apparently absurd beliefs means incurring costs — surviving without electricity, for example, if you are Amish — which help identify members who are committed to the survival of a group and cannot be lured away. The ease of identifying true believers, in turn, builds trust and galvanizes group solidarity for common defense.

To test this hypothesis, anthropologist Richard Sosis and his colleagues studied 200 communes founded in the United States in the 19th century. If shared religious beliefs really did foster loyalty, they reasoned, then communes formed out of religious conviction should survive longer than those motivated by secular ideologies such as socialism. Their findings were striking: Just 6 percent of the secular communes were still functioning 20 years after their founding, compared with 39 percent of the religious communes.

It is not difficult to see why groups formed for purely rational reasons can be more vulnerable to collapse: Background conditions change, and it might make sense to abandon one group in favor of another. Interestingly, recent research echoes the findings of 14th-century historian Ibn Khaldun, who argued that long-term differences among North African Muslim dynasties with comparable military might “have their origin in religion … [and] group feeling [wherein] mutual cooperation and support flourish.” The more religious societies, he argued, endured the longest.

For this reason, even ostensibly secular countries and transnational movements usually contain important quasi-religious rituals and beliefs. Think of sacred songs and ceremonies, or postulations that “providence” or “nature” bestows equality and inalienable rights (though, for about 99.9 percent of our species’ existence, slavery and oppression of minorities were more standard fare). These sacred values act as moral imperatives that inspire nonrational sacrifices in cooperative endeavors such as war.

Insurgents, revolutionaries, and terrorists all make use of this logic, generating outsized commitment that allows them to resist and often prevail against materially stronger foes. Consider the American revolutionaries who defied the greatest empire of their age by pledging “our Lives, our Fortunes and our sacred Honor” for the cause of “liberty or death.” Surely they were aware of how unlikely they were to succeed, given the vast disparities in material resources, manpower, and training. As Osama Hamdan, the ranking Hamas politburo member for external affairs, put it to me in Damascus, Syria, “George Washington was fighting the strongest military in the world, beyond all reason. That’s what we’re doing. Exactly.”

But the same logic that makes religious and sacred beliefs more likely to endure can make them impervious to compromise. Based on interviews, experiments, and surveys with Palestinians, Israelis, Indonesians, Indians, Afghans, and Iranians, my research with psychologists Jeremy Ginges, Douglas Medin, and others demonstrates that offering people material incentives (large amounts of money, guarantees for a life free of political violence) to compromise sacred values can backfire, increasing stated willingness to use violence. Such backfire effects occur both for convictions with clear religious investment (Jerusalem, sharia law) and for those that are at least initially nonreligious (Iran’s right to a nuclear capability, Palestinian refugees’ right of return).

According to a 2010 study, for example, most Iranians think there is nothing sacred about their government’s nuclear program. But for a sizable minority — 13 percent of the population — the quest for a nuclear capability (more focused on energy than weapons) had, through religious rhetoric, become a sacred subject. This group, which tends to be close to the regime, now believes a nuclear program is bound up with national identity and with Islam itself. As a result, offering material rewards or punishments to abandon the program only increases anger and support for it.

Although this sacralization of initially secular issues confounds standard “business-like” negotiation tactics, my work with political scientist Robert Axelrod interviewing political leaders in the Middle East and elsewhere indicates that strong symbolic gestures (sincere apologies, demonstrating respect for the other’s values) generate surprising flexibility, even among militants, and may enable subsequent material negotiations. Thus, we find that Palestinian leaders and their supporting populations are generally willing to accept Israeli offers of economic improvement only after issues of recognition are addressed. Even purely symbolic statements accompanied by no material action, such as “we recognize your suffering” or “we respect your rights in Jerusalem,” diminish support for violence, including suicide terrorism. This is particularly promising because symbolic gestures tied to religious notions that are open to interpretation might potentially be reframed without compromising their absolute “truth.” For example, Jerusalem might be reconceived less as a place than a portal to heaven, where earthly access to the portal suffices.

If these things are worth knowing, why do scientists still shun religion?

Part of the reason is that most scientists are staunchly nonreligious. If you look at the prestigious U.S. National Academy of Sciences or Britain’s Royal Society, well over 90 percent of members are non-religious. That may help explain why some of the bestselling books by scientists about religion aren’t about the science of religion as much as the reasons that it’s no longer necessary to believe. “New Atheists” have aggressively sought to discredit religion as the chief cause of much human misery, militating for its demise. They contend that science has now answered questions about humans’ origins and place in the world that only religion sought to answer in the days before evolutionary science, and that humankind no longer needs the broken crutch of faith.

But the idea that we can simply argue away religion has little factual support. Although a recent study by psychologists Will Gervais and Ara Norenzayan indicates that people are less prone to think religiously when they think analytically, other studies suggest that seemingly contrary evidence rarely undermines religious belief, especially among groups welded by ritualized sacrifice in the face of outside threats. Norenzayan and others also find that belief in gods and miracles intensifies when people are primed with awareness of death or when facing danger, as in wartime.

Moreover, the chief complaint against religion — that it is history’s prime instigator of intergroup conflict — does not withstand scrutiny. Religious issues motivate only a small minority of recorded wars. The Encyclopedia of Wars surveyed 1,763 violent conflicts across history; only 123 (7 percent) were religious. A BBC-sponsored “God and War” audit, which evaluated major conflicts over 3,500 years and rated them on a 0-to-5 scale for religious motivation (Punic Wars = 0, Crusades = 5), found that more than 60 percent had no religious motivation. Less than 7 percent earned a rating greater than 3. There was little religious motivation for the internecine Russian and Chinese conflicts or the world wars responsible for history’s most lethal century of international bloodshed.

Indeed, inclusive concepts such as “humanity” arguably emerged with the rise of universal religions. Sociologist Rodney Stark reveals that early Christianity became the Roman Empire’s majority religion not through conquest, but through a social process grounded in trust. Repeated acts of altruism, such as caring for non-Christians during epidemics, facilitated the expansion of social networks that were invested in the religion. Likewise, studies by behavioral economist Joseph Henrich and colleagues on contemporary foragers, farmers, and herders show that professing a world religion is correlated with greater fairness toward passing strangers. This research helps explain what’s going on in sub-Saharan Africa, where Islam is spreading rapidly. In Rwanda, for example, people began converting to Islam in droves after Muslims systematically risked their lives to protect Christians and animists from genocide when few others cared.

Although surprisingly few wars are started by religions, once they start, religion — and the values it imposes — can play a critical role. When competing interests are framed in terms of religious and sacred values, conflict may persist for decades, even centuries. Disputes over otherwise mundane phenomena then become existential struggles, as when land becomes “Holy Land.” Secular issues become sacralized and nonnegotiable, regardless of material rewards or punishments. In a multiyear study, our research group found that Palestinian adolescents who perceived strong threats to their communities and were highly involved in religious ritual were most likely to see political issues, like the right of refugees to return to homes in Israel, as absolute moral imperatives. These individuals were thus opposed to compromise, regardless of the costs. It turns out there may be a neurological component to such behavior: Our work with Gregory Berns and his neuroeconomics team suggests that such values are processed in the brain as duties rather than utilitarian calculations; neuroimaging reveals that violations of sacred values trigger emotional responses consistent with sentiments of moral outrage.

Historical and experimental studies suggest that the more antagonistic a group’s neighborhood, the more tightly that group will cling to its sacred values and rituals. The result is enhanced solidarity, but also increased potential for conflict toward other groups. Investigation of 60 small-scale societies reveals that groups that experience the highest rates of conflict (warfare) endure the costliest rites (genital mutilation, scarification, etc.). Likewise, research in India, Mexico, Britain, Russia, and Indonesia indicates that greater participation in religious ritual in large-scale societies is associated with greater parochial altruism — that is, willingness to sacrifice for one’s own group, such as Muslims or Christians, but not for outsiders — and, in relevant contexts, support for suicide attacks. This dynamic is behind the paradoxical reality that the world finds itself in today: Modern global multiculturalism is increasingly challenged by fundamentalist movements aimed at reviving group loyalty through greater ritual commitments to ideological purity.

So why does it matter that we have moved past the -isms and into an era of greater religiosity? In an age where religious and sacred causes are resurgent, there is urgent need for scientific effort to understand them. Now that humankind has acquired through science the power to destroy itself with nuclear weapons, we cannot afford to let science ignore religion and the sacred, or let scientists simply try to reason them away. Policymakers should leverage scientific understanding of what makes religion so potent a force for both cooperation and conflict, to help increase the one and lessen the other.

Scott Atran, American and French anthropologist at France’s National Center for Scientific Research, the University of Michigan, John Jay College, and ARTIS Research who has studied violence and interviewed terrorists, God and the Ivory Tower, Foreign Policy, Aug 6, 2012.

See also:

Scott Atran on Why War Is Never Really Rational
‘We’ vs ‘Others’: Russell Jacoby on why we should fear our neighbors more than strangers
The Psychology of Violence (a modern rethink of the psychology of shame and honour in preventing it), Lapidarium notes
Religion tag on Lapidarium notes

Apr 15th, Sun

How liberal and conservative brains are wired differently. Liberals and conservatives don’t just vote differently, they think differently

           

"There’s now a large body of evidence showing that those who opt for the political left and those who opt for the political right tend to process information in divergent ways and to differ on any number of psychological traits.

Perhaps most important, liberals consistently score higher on a personality measure called “openness to experience,” one of the “Big Five” personality traits, which are easily assessed through standard questionnaires. That means liberals tend to be the kind of people who want to try new things, including new music, books, restaurants and vacation spots — and new ideas.

“Open people everywhere tend to have more liberal values,” said psychologist Robert McCrae, who conducted voluminous studies on personality while at the National Institute on Aging at the National Institutes of Health.

Conservatives, in contrast, tend to be less open — less exploratory, less in need of change — and more “conscientious,” a trait that indicates they appreciate order and structure in their lives. This gels nicely with the standard definition of conservatism as resistance to change — in the famous words of William F. Buckley Jr., a desire to stand “athwart history, yelling ‘Stop!’ ” (…)

We see the consequences of liberal openness and conservative conscientiousness everywhere — and especially in the political battle over facts. (…)

Compare this with a different irrationality: refusing to admit that humans are a product of evolution, a chief point of denial for the religious right. In a recent poll, just 43 percent of tea party adherents accepted the established science here. Yet unlike the vaccine issue, this denial is anything but new and trendy; it is well over 100 years old. The state of Tennessee is even hearkening back to the days of the Scopes “Monkey” Trial, more than 85 years ago. It just passed a bill that will weaken the teaching of evolution.

Such are some of the probable consequences of openness, or the lack thereof. (…)

Now consider another related trait implicated in our divide over reality: the “need for cognitive closure.” This describes discomfort with uncertainty and a desire to resolve it into a firm belief. Someone with a high need for closure tends to seize on a piece of information that dispels doubt or ambiguity, and then freeze, refusing to consider new information. Those who have this trait can also be expected to spend less time processing information than those who are driven by different motivations, such as achieving accuracy.

A number of studies show that conservatives tend to have a greater need for closure than do liberals, which is precisely what you would expect in light of the strong relationship between liberalism and openness. “The finding is very robust,” explained Arie Kruglanski, a University of Maryland psychologist who has pioneered research in this area and worked to develop a scale for measuring the need for closure.

The trait is assessed based on responses to survey statements such as “I dislike questions which could be answered in many different ways” and “In most social conflicts, I can easily see which side is right and which is wrong.” (…)

Anti-evolutionists have been found to score higher on the need for closure. And in the global-warming debate, tea party followers not only strongly deny the science but also tend to say that they “do not need any more information” about the issue.

I’m not saying that liberals have a monopoly on truth. Of course not. They aren’t always right; but when they’re wrong, they are wrong differently.

When you combine key psychological traits with divergent streams of information from the left and the right, you get a world where there is no truth that we all agree upon. We wield different facts, and hold them close, because we truly experience things differently. (…)”

Chris Mooney, science and political journalist, author of four books, including the New York Times bestselling The Republican War on Science and the forthcoming The Republican Brain: The Science of Why They Deny Science and Reality (April 2012), Liberals and conservatives don’t just vote differently. They think differently, The Washington Post, April 13, 2012. (Illustration: Koren Shadmi for The Washington Post)

See also:

Political science: why rejecting expertise has become a campaign strategy, Lapidarium notes
Cognitive and Social Consequences of the Need for Cognitive Closure, European Review of Social Psychology
☞ Antonio Chirumbolo, The relationship between need for cognitive closure and political orientation: the mediating role of authoritarianism, Department of Social and Developmental Psychology, University of Rome ‘La Sapienza’
Paul Nurse, Stamp out anti-science in US politics, New Scientist, 14 Sept 2011
☞ Chris Mooney, Why Republicans Deny Science: The Quest for a Scientific Explanation, The Huffington Post, Jan 11, 2012
☞ John Allen Paulos, Why Don’t Americans Elect Scientists?, NYTimes, Feb 13, 2012.
Study: Conservatives’ Trust in Science Has Fallen Dramatically Since Mid-1970s, American Sociological Association, March 29, 2012.
Why people believe in strange things, Lapidarium notes

Jan 13th, Fri

Risk perception: What You Don’t Know Can Kill You

"Humans have a perplexing 
tendency to fear rare threats such as shark attacks while blithely 
ignoring far greater risks like 
unsafe sex and an unhealthy diet. Those illusions are not just 
silly—they make the world a more dangerous place. (…)

We like to think that humans are supremely logical, making decisions on the basis of hard data and not on whim. For a good part of the 19th and 20th centuries, economists and social scientists assumed this was true too. The public, they believed, would make rational decisions if only it had the right pie chart or statistical table. But in the late 1960s and early 1970s, that vision of homo economicus—a person who acts in his or her best interest when given accurate information—was kneecapped by researchers investigating the emerging field of risk perception. What they found, and what they have continued teasing out since the early 1970s, is that humans have a hell of a time accurately gauging risk. Not only do we have two different systems—logic and instinct, or the head and the gut—that sometimes give us conflicting advice, but we are also at the mercy of deep-seated emotional associations and mental shortcuts. (…)

Our hardwired gut reactions developed in a world full of hungry beasts and warring clans, where they served important functions. Letting the amygdala (part of the brain’s emotional core) take over at the first sign of danger, milliseconds before the neocortex (the thinking part of the brain) was aware a spear was headed for our chest, was probably a very useful adaptation. Even today those nano-pauses and gut responses save us from getting flattened by buses or dropping a brick on our toes. But in a world where risks are presented in parts-per-billion statistics or as clicks on a Geiger counter, our amygdala is out of its depth.

A risk-perception apparatus permanently tuned for avoiding mountain lions makes it unlikely that we will ever run screaming from a plate of fatty mac ’n’ cheese. “People are likely to react with little fear to certain types of objectively dangerous risk that evolution has not prepared them for, such as guns, hamburgers, automobiles, smoking, and unsafe sex, even when they recognize the threat at a cognitive level,” says Carnegie Mellon University researcher George Loewenstein, whose seminal 2001 paper, “Risk as Feelings,” debunked theories that decision making in the face of risk or uncertainty relies largely on reason. “Types of stimuli that people are evolutionarily prepared to fear, such as caged spiders, snakes, or heights, evoke a visceral response even when, at a cognitive level, they are recognized to be harmless,” he says. Even Charles Darwin failed to break the amygdala’s iron grip on risk perception. As an experiment, he placed his face up against the puff adder enclosure at the London Zoo and tried to keep himself from flinching when the snake struck the plate glass. He failed.

The result is that we focus on the one-in-a-million bogeyman while virtually ignoring the true risks that inhabit our world. News coverage of a shark attack can clear beaches all over the country, even though sharks kill a grand total of about one American annually, on average. That is less than the death count from cattle, which gore or stomp 20 Americans per year. Drowning, on the other hand, takes 3,400 lives a year, without a single frenzied call for mandatory life vests to stop the carnage. A whole industry has boomed around conquering the fear of flying, but while we down beta-blockers in coach, praying not to be one of the 48 average annual airline casualties, we typically give little thought to driving to the grocery store, even though there are more than 30,000 automobile fatalities each year.

In short, our risk perception is often at direct odds with reality. All those people bidding up the cost of iodide? They would have been better off spending $10 on a radon testing kit. The colorless, odorless, radioactive gas, which forms as a by-product of natural uranium decay in rocks, builds up in homes, causing lung cancer. According to the Environmental Protection Agency, radon exposure kills 21,000 Americans annually.

David Ropeik, a consultant in risk communication and the author of How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts, has dubbed this disconnect the perception gap. “Even perfect information perfectly provided that addresses people’s concerns will not convince everyone that vaccines don’t cause autism, or that global warming is real, or that fluoride in the drinking water is not a Commie plot,” he says. “Risk communication can’t totally close the perception gap, the difference between our fears and the facts.”

In the early 1970s, psychologists Daniel Kahneman, now at Princeton University, and Amos Tversky, who passed away in 1996, began investigating the way people make decisions, identifying a number of biases and mental shortcuts, or heuristics, on which the brain relies to make choices. Later, Paul Slovic and his colleagues Baruch Fischhoff, now a professor of social sciences at Carnegie Mellon University, and psychologist Sarah Lichtenstein began investigating how these leaps of logic come into play when people face risk. They developed a tool, called the psychometric paradigm, that describes all the little tricks our brain uses when staring down a bear or deciding to finish the 18th hole in a lightning storm.

Many of our personal biases are unsurprising. For instance, the optimism bias gives us a rosier view of the future than current facts might suggest. We assume we will be richer 10 years from now, so it is fine to blow our savings on a boat—we’ll pay it off then. Confirmation bias leads us to prefer information that backs up our current opinions and feelings and to discount information contradictory to those opinions. We also have tendencies to conform our opinions to those of the groups we identify with, to fear man-made risks more than we fear natural ones, and to believe that events causing dread—the technical term for risks that could result in particularly painful or gruesome deaths, like plane crashes and radiation burns—are inherently more risky than other events.

But it is heuristics—the subtle mental strategies that often give rise to such biases—that do much of the heavy lifting in risk perception. The “availability” heuristic says that the easier a scenario is to conjure, the more common it must be. It is easy to imagine a tornado ripping through a house; that is a scene we see every spring on the news, and all the time on reality TV and in movies. Now try imagining someone dying of heart disease. You probably cannot conjure many breaking-news images for that one, and the drawn-out process of atherosclerosis will most likely never be the subject of a summer thriller. The effect? Twisters feel like an immediate threat, although we have only a 1-in-46,000 chance of being killed by a cataclysmic storm. Even a terrible tornado season like the one last spring typically yields fewer than 500 tornado fatalities. Heart disease, on the other hand, which eventually kills 1 in every 6 people in this country, and 800,000 annually, hardly even rates with our gut. (…)

Of all the mental rules of thumb and biases banging around in our brain, the most influential in assessing risk is the “affect” heuristic. Slovic calls affect a “faint whisper of emotion” that creeps into our decisions. Simply put, positive feelings associated with a choice tend to make us think it has more benefits. Negative correlations make us think an action is riskier. One study by Slovic showed that when people decide to start smoking despite years of exposure to antismoking campaigns, they hardly ever think about the risks. Instead, it’s all about the short-term “hedonic” pleasure. The good outweighs the bad, which they never fully expect to experience.

Our fixation on illusory threats at the expense of real ones influences more than just our personal lifestyle choices. Public policy and mass action are also at stake. The Office of National Drug Control Policy reports that prescription drug overdoses have killed more people than crack and heroin combined did in the 1970s and 1980s. Law enforcement and the media were obsessed with crack, yet it was only recently that prescription drug abuse merited even an after-school special.

Despite the many obviously irrational ways we behave, social scientists have only just begun to systematically document and understand this central aspect of our nature. In the 1960s and 1970s, many still clung to the homo economicus model. They argued that releasing detailed information about nuclear power and pesticides would convince the public that these industries were safe. But the information drop was an epic backfire and helped spawn opposition groups that exist to this day. Part of the resistance stemmed from a reasonable mistrust of industry spin. Horrific incidents like those at Love Canal and Three Mile Island did not help. Yet one of the biggest obstacles was that industry tried to frame risk purely in terms of data, without addressing the fear that is an instinctual reaction to their technologies.

The strategy persists even today. In the aftermath of Japan’s nuclear crisis, many nuclear-energy boosters were quick to cite a study commissioned by the Boston-based nonprofit Clean Air Task Force. The study showed that pollution from coal plants is responsible for 13,000 premature deaths and 20,000 heart attacks in the United States each year, while nuclear power has never been implicated in a single death in this country. True as that may be, numbers alone cannot explain away the cold dread caused by the specter of radiation. Just think of all those alarming images of workers clad in radiation suits waving Geiger counters over the anxious citizens of Japan. Seaweed, anyone? (…)

All that media created a sort of feedback loop. Because people were seeing so many sharks on television and reading about them, the “availability” heuristic was screaming at them that sharks were an imminent threat.

“Certainly anytime we have a situation like that where there’s such overwhelming media attention, it’s going to leave a memory in the population,” says George Burgess, curator of the International Shark Attack File at the Florida Museum of Natural History, who fielded 30 to 40 media calls a day that summer. “Perception problems have always been there with sharks, and there’s a continued media interest in vilifying them. It makes a situation where the risk perceptions of the populace have to be continually worked on to break down stereotypes. Anytime there’s a big shark event, you take a couple steps backward, which requires scientists and conservationists to get the real word out.”

Then again, getting out the real word comes with its own risks—like the risk of getting the real word wrong. Misinformation is especially toxic to risk perception because it can reinforce generalized confirmation biases and erode public trust in scientific data. As scientists studying the societal impact of the Chernobyl meltdown have learned, doubt is difficult to undo. In 2006, 20 years after reactor number 4 at the Chernobyl nuclear power plant was encased in cement, the World Health Organization (WHO) and the International Atomic Energy Agency released a report compiled by a panel of 100 scientists on the long-term health effects of the level 7 nuclear disaster and future risks for those exposed. Among the 600,000 recovery workers and local residents who received a significant dose of radiation, the WHO estimates that up to 4,000 of them, or 0.7 percent, will develop a fatal cancer related to Chernobyl. For the 5 million people living in less contaminated areas of Ukraine, Russia, and Belarus, radiation from the meltdown is expected to increase cancer rates less than 1 percent. (…)

During the year following the September 11 attacks, millions of Americans opted out of air travel and slipped behind the wheel instead. While they crisscrossed the country, listening to breathless news coverage of anthrax attacks, extremists, and Homeland Security, they faced a much more concrete risk. All those extra cars on the road increased traffic fatalities by nearly 1,600. Airlines, on the other hand, recorded no fatalities.

It is unlikely that our intellect can ever paper over our gut reactions to risk. But a fuller understanding of the science is beginning to percolate into society. Earlier this year, David Ropeik and others hosted a conference on risk in Washington, D.C., bringing together scientists, policy makers, and others to discuss how risk perception and communication impact society. “Risk perception is not emotion and reason, or facts and feelings. It’s both, inescapably, down at the very wiring of our brain,” says Ropeik. “We can’t undo this. What I heard at that meeting was people beginning to accept this and to realize that society needs to think more holistically about what risk means.”

Ropeik says policy makers need to stop issuing reams of statistics and start making policies that manipulate our risk perception system instead of trying to reason with it. Cass Sunstein, a Harvard law professor who is now the administrator of the White House Office of Information and Regulatory Affairs, suggests a few ways to do this in his book Nudge: Improving Decisions About Health, Wealth, and Happiness, published in 2008. He points to the organ donor crisis in which thousands of people die each year because others are too fearful or uncertain to donate organs. People tend to believe that doctors won’t work as hard to save them, or that they won’t be able to have an open-casket funeral (both false). And the gory mental images of organs being harvested from a body give a definite negative affect to the exchange. As a result, too few people focus on the lives that could be saved. Sunstein suggests—controversially—“mandated choice,” in which people must check “yes” or “no” to organ donation on their driver’s license application. Those with strong feelings can decline. Some lawmakers propose going one step further and presuming that people want to donate their organs unless they opt out.

In the end, Sunstein argues, by normalizing organ donation as a routine medical practice instead of a rare, important, and gruesome event, the policy would short-circuit our fear reactions and nudge us toward a positive societal goal. It is this type of policy that Ropeik is trying to get the administration to think about, and that is the next step in risk perception and risk communication. “Our risk perception is flawed enough to create harm,” he says, “but it’s something society can do something about.””

Jason Daley, What You Don’t Know Can Kill You, Discover Magazine, Oct 3, 2011. (Illustration: Steve Carroll, The Economist)
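
The perception gap quantified above is easy to make concrete using the article’s own numbers. The sketch below converts the quoted annual US death counts into rough per-capita odds; the death counts come from the text, while the ~310 million population figure (circa 2011) and the labeling of the tornado line (the article’s worst-case 500) are assumptions added for illustration.

```python
# Rough annual odds of dying in the US from the hazards named in the
# article. Death counts are taken from the text; the ~310M population
# (c. 2011) is an assumed round figure, not from the article.
US_POPULATION = 310_000_000

annual_deaths = {
    "shark attack": 1,
    "cattle (goring/stomping)": 20,
    "commercial aviation": 48,
    "tornado (terrible season)": 500,
    "drowning": 3_400,
    "radon-induced lung cancer": 21_000,
    "car crash": 30_000,
    "heart disease": 800_000,
}

# Sort from rarest to most common cause and print approximate odds.
for hazard, deaths in sorted(annual_deaths.items(), key=lambda kv: kv[1]):
    print(f"{hazard:>26}: about 1 in {US_POPULATION // deaths:,} per year")
```

Laid out this way, the gut’s ranking inverts: the shark risk works out to roughly 1 in 310 million per year and the car-crash risk to roughly 1 in 10,000, a spread of four orders of magnitude that the amygdala fails to register.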

See also:

Daniel Kahneman on thinking ‘Fast And Slow’: How We Aren’t Made For Making Decisions
Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
Daniel Kahneman: How cognitive illusions blind us to reason, The Observer, 30 October 2011 
Daniel Kahneman on the riddle of experience vs. memory
The irrational mind - David Brooks on the role of emotions in politics, policy, and life

Jan 6th, Fri

Why Do Languages Die? Urbanization, the state and the rise of nationalism

       

"The history of the world’s languages is largely a story of loss and decline. At around 8000 BC, linguists estimate that upwards of 20,000 languages may have been in existence. Today the number stands at 6,909 and is declining rapidly. By 2100, it is quite realistic to expect that half of these languages will be gone, their last speakers dead, their words perhaps recorded in a dusty archive somewhere, but more likely undocumented entirely. (…)

The problem with globalization in the latter sense is that it is the result, not a cause, of language decline. (…) It is only when the state adopts a trade language as official and, in a fit of linguistic nationalism, foists it upon its citizens, that trade languages become “killer languages.” (…)

Most importantly, what both of the above answers overlook is that speaking a global language or a language of trade does not necessitate the abandonment of one’s mother tongue. The average person on this planet speaks three or four languages. (…)

The truth is, most people don’t “give up” the languages they learn in their youth. (…) To wipe out a language, one has to enter the home and prevent the parents from speaking their native language to their children.

Given such a preposterous scenario, we return to our question — how could this possibly happen?

One good answer is urbanization. If a Gikuyu and a Giryama meet in Nairobi, they won’t likely speak each other’s mother tongue, but they very likely will speak one or both of the trade languages in Kenya — Swahili and English. Their kids may learn a smattering of words in the heritage languages from their parents, but by the third generation any vestiges of those languages in the family will likely be gone. In other cases, extremely rural communities are drawn to the relatively easier lifestyle in cities, until sometimes entire villages are abandoned. Nor is this a recent phenomenon.

The first case of massive language die-off was probably during the Agrarian (Neolithic) Revolution, when humanity first adopted farming, abandoned the nomadic lifestyle, and created permanent settlements. As the size of these communities grew, so did the language they spoke. But throughout most of history, and still in many areas of the world today, 500 or fewer speakers per language has been the norm. Like the people who spoke them, these languages were constantly in flux. No language could grow very large, because the community that spoke it could only grow so large itself before it fragmented. The language followed suit, soon becoming two languages. Permanent settlements changed all this, and soon larger and larger populations could stably speak the same language. (…)

"In primitive times every migration causes not only geographical but also intellectual separation of clans and tribes. Economic exchanges do not yet exist; there is no contact that could work against differentiation and the rise of new customs. The dialect of each tribe becomes more and more different from the one that its ancestors spoke when they were still living together. The splintering of dialects goes on without interruption. The descendants no longer understand one other.… A need for unification in language then arises from two sides. The beginnings of trade make understanding necessary between members of different tribes. But this need is satisfied when individual middlemen in trade achieve the necessary command of language.”

Ludwig von Mises, Nation, State, and Economy (Online edition, 1919; 1983), Ludwig von Mises Institute, p. 46–47.

Thus urbanization is an important factor in language death. To be sure, the wondrous features of cities that draw immigrants — greater economies of scale, decreased search costs, increased division of labor — are all made possible with capitalism, and so in this sense languages may die for economic reasons. But this is precisely the type of language death that shouldn’t concern us (unless you’re a linguist like me), because urbanization is really nothing more than the demonstrated preferences of millions of people who wish to take advantage of all the fantastic benefits that cities have to offer.

In short, these people make the conscious choice to leave an environment where network effects and sociological benefits exist for speaking their native language, and exchange it for a greater range of economic possibilities, but where no such social benefits for speaking the language exist. If this were the only cause of language death — or even just the biggest one — then there would be little more to say about it. (…)

Far too many well-intentioned individuals are too quick to substitute their valuations for those of the last speakers of indigenous languages this way. Were it up to them, these speakers would be resigned to misery and poverty and deprived of participation in the world’s advanced economies in order that their language might be passed on. To be sure, these speakers themselves often fall victim to the mistaken ideology that one language necessarily displaces or interferes with another.

Although the South African Department of Education is trying to develop teaching materials in the local African languages, for example, many parents are pushing back; they want their children taught only in English. In Dominica, the parents go even further and refuse to even speak the local language, Patwa, to their children.[1] Were they made aware of the falsity of this notion of language displacement, perhaps they would be less quick to stop speaking their language to their children. But the decision is ultimately theirs to make, and theirs alone.

Urbanization, however, is not the only cause of language death. There is another that, I’m sad to say, almost none of the linguists who work on endangered languages give much thought to, and that is the state. The state is the only entity capable of reaching into the home and forcibly altering the process of language socialization in an institutionalized way.

How? The traditional method was simply to kill or remove indigenous and minority populations, as was done as recently as 1923 in the United States in the last conflict of the Indian Wars. More recently this happens through indirect means — whether intentional or otherwise — the primary method of which has been compulsory state schooling.

There is no more pernicious assault on the cultural practices of minority populations than a standardized, Anglified, Englicized compulsory education. It is not just that children are forcibly removed from the socialization process in the home, required to speak an official language and punished (often corporally) for doing otherwise. It is not just that schools redefine success, away from those things valued by the community, and towards those things that make someone a better citizen of the state. No, the most significant impact of compulsory state education is that it ingrains in children the idea that their language and their culture is worthless, of no use in the modern classroom or society, and that it is something that merely serves to set them apart negatively from their peers, as an object of their vicious torment.

But these languages clearly do have value, if for no other reason than simply because people value them. Local and minority languages are valued by their speakers for all sorts of reasons, whether it be for use in the local community, communicating with one’s elders, a sense of heritage, the oral and literary traditions of that language, or something else entirely. Again, the praxeologist is not in a position to evaluate these beliefs. The praxeologist merely notes that free choice in language use and free choice in association, one not dictated by the edicts of the state, will best satisfy the demand of individuals, whether for minority languages or lingua francas. What people find useful, they will use.

By contrast, the state values none of these things. For the state, the goal is to bind individuals to itself, to an imagined homogeneous community of good citizens, rather than their local community. National ties trump local ones in the eyes of the state. Free choice in association is disregarded entirely. And so the state forces many indigenous people to become members of a foreign community, where they are a minority and their language is scorned, as in the case of boarding schools. Whereas at home, mastering the native language is an important part of functioning in the community and earning prestige, and thus something of value, at school it becomes a black mark and a detriment. Given the prisonlike way schools are run, and how they exhibit similar intense (and sometimes dangerous) pressures from one’s peers, minority-language-speaking children would be smart to disassociate themselves as quickly as possible from their cultural heritage.

Mises himself, though sometimes falling prey to common fallacies regarding language like linguistic determinism and ethnolinguistic isomorphism, was aware of this distinction between natural language decline and language death brought on by the state. (…)

This is precisely what the Bureau of Indian Affairs accomplished by coercing indigenous children into attending boarding schools. Those children were cut off from their culture and language — their nation — until they had effectively assimilated American ideologies regarding minority languages, namely, that English is good and all else is bad.

Nor is this the only way the state affects language. The very existence of a modern nation-state, and the ideology it encompasses, is antithetical to linguistic diversity. It is predicated on the idea of one state, one nation, one people. In Nation, State, and Economy, Mises points out that, prior to the rise of nationalism in the 17th and 18th centuries, the concept of a nation did not refer to a political unit like state or country as we think of it today.

A “nation” instead referred to a collection of individuals who share a common history, religion, cultural customs and — most importantly — language. Mises even went so far as to claim that “the essence of nationality lies in language.”[2] The “state” was a thing apart, referring to the nobility or princely state, not a community of people (hence Louis XIV’s famous quip, “L’état c’est moi.”).[3] In that era, a state might consist of many nations, and a nation might subsume many states.

The rise of nationalism changed all this. As Robert Lane Greene points out in his excellent book, You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity,

The old blurry linguistic borders became inconvenient for nationalists. To build nations strong enough to win themselves a state, the people of a would-be nation needed to be welded together with a clear sense of community. Speaking a minority dialect or refusing to assimilate to a standard wouldn’t do.[4]

Mises himself elaborated on this point. Despite his belief in the value of a liberal democracy, which would remain with him for the rest of his life, Mises realized early on that the imposition of democracy over multiple nations could only lead to hegemony and assimilation:

In polyglot territories, therefore, the introduction of a democratic constitution does not mean the same thing at all as introduction of democratic autonomy. Majority rule signifies something quite different here than in nationally uniform territories; here, for a part of the people, it is not popular rule but foreign rule. If national minorities oppose democratic arrangements, if, according to circumstances, they prefer princely absolutism, an authoritarian regime, or an oligarchic constitution, they do so because they well know that democracy means the same thing for them as subjugation under the rule of others.[5]

From the ideology of nationalism was also born the principle of irredentism, the policy of incorporating historically or ethnically related peoples into the larger umbrella of a single state, regardless of their linguistic differences. As Greene points out, for example,

By one estimate, just 2 or 3 percent of newly minted “Italians” spoke Italian at home when Italy was unified in the 1860s. Some Italian dialects were as different from one another as modern Italian is from modern Spanish.[6]

This in turn prompted the Italian statesman Massimo d’Azeglio (1798–1866) to say, “We have created Italy. Now we need to create Italians.” And so these Italian languages soon became yet another casualty of the nation-state.

Mises once presciently predicted that,

If [minority nations] do not want to remain politically without influence, then they must adapt their political thinking to that of their environment; they must give up their special national characteristics and their language.[7]

This is largely the story of the world’s languages. It is, as we have seen, the history of the state, a story of nationalistic furor, and of assimilation by force. Only when we abandon this socialist and utopian fantasy of one state, one nation, one people will this story begin to change.”

Danny Hieber, linguist working to document and revitalize the world’s endangered languages, Why Do Languages Die?, Ludwig von Mises Institute, Jan 4, 2012. (Illustration: The Evolution of the Armenian Alphabet)

[1] Amy L. Paugh, Playing With Languages: Children and Change in a Caribbean Village (2012), Berghahn Books.
[2] Ludwig von Mises, Human Action: A Treatise on Economics (Scholar’s Edition, 2010), Auburn, AL: Ludwig von Mises Institute, p. 37.
[3] “I am the state.”
[4] Robert Lane Greene, You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity (Kindle Edition, 2011), Delacorte Press, p. 132.
[5] Mises, Nation, State, and Economy, p. 77.
[6] Greene, You Are What You Speak, p. 141.
[7] Mises, Nation, State, and Economy, p. 77.

“Isn’t language loss a good thing, because fewer languages mean easier communication among the world’s people? Perhaps, but it’s a bad thing in other respects. Languages differ in structure and vocabulary, in how they express causation and feelings and personal responsibility, hence in how they shape our thoughts. There’s no single purpose “best” language; instead, different languages are better suited for different purposes.

For instance, it may not have been an accident that Plato and Aristotle wrote in Greek, while Kant wrote in German. The grammatical particles of those two languages, plus their ease in forming compound words, may have helped make them the preeminent languages of western philosophy.

Another example, familiar to all of us who studied Latin, is that highly inflected languages (ones in which word endings suffice to indicate sentence structure) can use variations of word order to convey nuances impossible with English. Our English word order is severely constrained by having to serve as the main clue to sentence structure. If English becomes a world language, that won’t be because English was necessarily the best language for diplomacy.”

— Jared Diamond, American scientist and author, currently Professor of Geography and Physiology at UCLA, The Third Chimpanzee: The Evolution & Future of the Human Animal, Hutchinson Radius, 1991.

See also:

☞ Lists of endangered languages, Wiki
☞ Salikoko S. Mufwene, How Languages Die (pdf), University of Chicago, 2006
☞ K. David Harrison, When Languages Die: The Extinction of the World’s Languages and the Erosion of Human Knowledge (pdf), Oxford University Press, 2007

"It is commonly agreed by linguists and anthropologists that the majority of languages spoken now around the globe will likely disappear within our lifetime. The phenomenon known as language death has started to accelerate as the world has grown smaller. "This extinction of languages, and the knowledge therein, has no parallel in human history. K. David Harrison’s book is the first to focus on the essential question, what is lost when a language dies? What forms of knowledge are embedded in a language’s structure and vocabulary? And how harmful is it to humanity that such knowledge is lost forever?"

☞ Nicholas Ostler on The Last Lingua Franca: English Until the Return of Babel, Lapidarium notes
☞ Henry Hitchings, What’s the language of the future?, Salon, Nov 6, 2011.

Sep
11th
Sun
permalink

Supercomputer predicts revolution: Forecasting large-scale human behavior using global news media tone in time and space


Figure 1. Global geocoded tone of all Summary of World Broadcasts content, 2005.

"Feeding a supercomputer with news stories could help predict major world events, according to US research.

While the analysis was carried out retrospectively, scientists say the same processes could be used to anticipate upcoming conflict. (…)

The study’s information was taken from a range of sources including the US government-run Open Source Centre and BBC Monitoring, both of which monitor local media output around the world.

News outlets which published online versions were also analysed, as was the New York Times' archive, going back to 1945.

In total, Mr Leetaru gathered more than 100 million articles.

Reports were analysed for two main types of information: mood (whether the article represented good news or bad news) and location (where events were happening and the location of other participants in the story).

Mood detection, or “automated sentiment mining”, searched for words such as “terrible”, “horrific” or “nice”.

Location, or “geocoding”, took mentions of specific places, such as “Cairo”, and converted them into coordinates that could be plotted on a map.

Analysis of story elements was used to create an interconnected web of 100 trillion relationships. (…)
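The two measurements just described (tone scoring and gazetteer geocoding) are simple to sketch. Below is a minimal, hypothetical illustration: the word lists, the two-city gazetteer, the scoring formula, and the whitespace tokenization are invented stand-ins, far cruder than the large sentiment dictionaries and full place-name gazetteer the study actually used.

```python
# Minimal sketch of lexicon-based tone scoring and gazetteer geocoding.
# All word lists and coordinates are illustrative stand-ins, not the
# dictionaries used in the actual study.

POSITIVE = {"nice", "good", "peaceful", "stable"}
NEGATIVE = {"terrible", "horrific", "crisis", "violence"}

# Tiny hypothetical gazetteer: place name -> (latitude, longitude).
GAZETTEER = {
    "cairo": (30.04, 31.24),
    "tunis": (36.81, 10.18),
}

def tone(article: str) -> float:
    """Tone score: (positive - negative) word count per 100 words."""
    words = article.lower().split()  # naive whitespace tokenization
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 100.0 * (pos - neg) / len(words)

def geocode(article: str) -> list[tuple[float, float]]:
    """Coordinates of every known place name the article mentions."""
    return [GAZETTEER[w] for w in article.lower().split() if w in GAZETTEER]

article = "Horrific violence reported in Cairo as crisis deepens"
print(tone(article))     # -37.5 (strongly negative)
print(geocode(article))  # [(30.04, 31.24)]
```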

The computer event analysis model appears to give forewarning of major events, based on deteriorating sentiment.

However, in the case of this study, its analysis is applied to things that have already happened.

According to Kalev Leetaru, such a system could easily be adapted to work in real time, giving an element of foresight. (…)

"It looks like a stock ticker in many regards and you know what direction it has been heading the last few minutes and you want to know where it is heading in the next few.

"It is very similar to what economic forecasting algorithms do.” (…)

"The next iteration is going to city level and beyond and looking at individual groups and how they interact.

"I liken it to weather forecasting. It’s never perfect, but we do better than random guessing."

Supercomputer predicts revolution, BBC News, 9 September 2011
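The stock-ticker analogy suggests one concrete mechanism: smooth a country's daily tone series and flag sustained drops. A minimal sketch, assuming a per-day average tone series; the window, threshold, and data are invented for illustration, and the study's actual method (standardizing tone against a decades-long baseline) is more elaborate.

```python
# Sketch of tone-trend monitoring: flag a day when the mean tone of the
# last `window` days has fallen more than `drop` below the mean of the
# `window` days before that. The series and threshold are invented.

from statistics import mean

def alerts(tone_by_day: list[float], window: int = 5, drop: float = 1.0) -> list[int]:
    """Indices of days that end a window of sharp tone deterioration."""
    flagged = []
    for i in range(2 * window, len(tone_by_day) + 1):
        previous = mean(tone_by_day[i - 2 * window:i - window])
        recent = mean(tone_by_day[i - window:i])
        if previous - recent > drop:
            flagged.append(i - 1)  # last day of the deteriorating window
    return flagged

# Invented daily tone for one country: stable, then deteriorating fast.
series = [0.1, 0.2, 0.0, 0.1, 0.2, 0.1, 0.0, -0.1, -1.5, -2.0, -2.4, -2.6, -2.8, -3.0]
print(alerts(series))  # [10, 11, 12, 13]
```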

Culturomics 2.0: Forecasting large-scale human behavior using global news media tone in time and space

"News is increasingly being produced and consumed online, supplanting print and broadcast to represent nearly half of the news monitored across the world today by Western intelligence agencies. Recent literature has suggested that computational analysis of large text archives can yield novel insights to the functioning of society, including predicting future economic events. Applying tone and geographic analysis to a 30–year worldwide news archive, global news tone is found to have forecasted the revolutions in Tunisia, Egypt, and Libya, including the removal of Egyptian President Mubarak, predicted the stability of Saudi Arabia (at least through May 2011), estimated Osama Bin Laden’s likely hiding place as a 200–kilometer radius in Northern Pakistan that includes Abbotabad, and offered a new look at the world’s cultural affiliations. Along the way, common assertions about the news, such as “news is becoming more negative” and “American news portrays a U.S.–centric view of the world” are found to have merit.”

The emerging field of “Culturomics” seeks to explore broad cultural trends through the computerized analysis of vast digital book archives, offering novel insights into the functioning of human society (Michel, et al., 2011). Yet, books represent the “digested history” of humanity, written with the benefit of hindsight. People take action based on the imperfect information available to them at the time, and the news media captures a snapshot of the real–time public information environment (Stierholz, 2008). News contains far more than just factual details: an array of cultural and contextual influences strongly impact how events are framed for an outlet’s audience, offering a window into national consciousness (Gerbner and Marvanyi, 1977). A growing body of work has shown that measuring the “tone” of this real–time consciousness can accurately forecast many broad social behaviors, ranging from box office sales (Mishne and Glance, 2006) to the stock market itself (Bollen, et al., 2011). (…)


Figure 2. Global geocoded tone of all Summary of World Broadcasts content, January 1979–April 2011 mentioning “bin Laden”

Most theories of civilizations feature some approximation of the degree of conflict or cooperation between each group. Figure 3 displays the average tone of all links between cities in each civilization, visualizing the overall “tone” of the relationship between each. Group 1, which roughly encompasses the Asiatic and Australian regions, has largely positive links to the rest of the world and is the only group with a positive connection to Group 4 (Middle East). Group 3 (Africa) has no positive links to any other civilization, while Group 2  (North and South America excluding Canada) has negative links to all but Group 1. As opposed to explicit measures of conflict or cooperation based on armed conflict or trade ties, this approach captures the latent view of conflict and cooperation as portrayed by the world’s news media.

Figure 3. Average tone of links between world “civilizations” according to SWB, 1979–2009.

Figure 4 shows the world civilizations according to the New York Times 1945–2005. It divides the world into five civilizations, but paints a very different picture of the world, with a far greater portion of the global landmass arrayed around the United States. Geographic affinity appears to play a far lesser role in this grouping, and the majority of the world is located in a single cluster with the United States. It is clear from comparing the SWB and NYT civilization maps that even within the news media there is no one “universal” set of civilizations, but that each country’s media system may portray the world very differently to its audience. By pooling all of these varied viewpoints together, SWB’s view of the world’s civilizations offers a “crowdsourced” aggregate view of civilization, but it too is likely subject to some innate Western bias.


Figure 4. World “civilizations” according to NYT, 1945–2005.

Monitoring first broadcast then print media over the last 70 years, nearly half of the annual output of Western intelligence global news monitoring is now derived from Internet–based news, standing testament to the Web’s disruptive power as a distribution medium. Pooling together the global tone of all news mentions of a country over time appears to accurately forecast its near–term stability, including predicting the revolutions in Egypt, Tunisia, and Libya, conflict in Serbia, and the stability of Saudi Arabia.

Location plays a critical role in news reporting, and “passively crowdsourcing” the media to find the locations most closely associated with Bin Laden prior to his capture finds a 200 km-wide swath of northern Pakistan as his most likely hiding place, an area which contains Abbottabad, the city he was ultimately captured in. Finally, the geographic clustering of the news, the way in which it frames localities together, offers new insights into how the world views itself and the “natural civilizations” of the news media.

While heavily biased and far from complete, the news media captures the only cross–national real–time record of human society available to researchers. The findings of this study suggest that Culturomics, which has thus far focused on the digested history of books, can yield intriguing new understandings of human society when applied to the real–time data of news. From forecasting impending conflict to offering insights on the locations of wanted fugitives, applying data mining approaches to the vast historical archive of the news media offers promise of new approaches to measuring and understanding human society on a global scale.”

Kalev Leetaru is Senior Research Scientist for Content Analysis at the Institute for Computing in the Humanities, Arts, and Social Science at the University of Illinois, Center Affiliate of the National Center for Supercomputing Applications, and Research Coordinator at the University of Illinois Cline Center for Democracy. His award-winning work centers on the application of high performance computing to grand challenge problems using news and open source intelligence. He holds three US patents and more than 50 University Invention Disclosures.

For the full research, see First Monday (University of Illinois at Chicago), Volume 16, Number 9, 5 September 2011

See also:

Culturomics: Quantitative Analysis of Culture Using Millions of Digitized Books

"Construct a corpus of digitized texts containing about 4% of all books ever printed, and then analyze that corpus using advanced software and the investigatory curiosity of thousands, and you get something called "Culturomics," a field in which cultural trends are represented quantitatively.

In this talk Erez Lieberman Aiden and Jean-Baptiste Michel — co-founders of the Cultural Observatory at Harvard and Visiting Faculty at Google — show how culturomics can provide insights about fields as diverse as lexicography, the evolution of grammar, collective memory, the adoption of technology, the pursuit of fame, censorship, and historical epidemiology.”

— E. Lieberman Aiden, Harvard Society of Fellows & Jean-Baptiste Michel, FQEB Fellow at Harvard, Culturomics: Quantitative Analysis of Culture Using Millions of Digitized Books, May 10, 2011
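At its core, the culturomic measurement the talk describes is a per-year relative frequency of a word or phrase across a dated corpus. A minimal sketch over a toy corpus, assuming one text per year; the real analyses run over millions of digitized books.

```python
# Sketch of the basic culturomics operation: the relative frequency of a
# term in each year's slice of a corpus. The corpus here is toy data.

from collections import Counter

corpus = {  # year -> text published that year (invented examples)
    1900: "the war began and the war spread",
    1950: "peace talks and trade and travel",
    2000: "the internet changed trade and travel and news",
}

def term_frequency(term: str, texts_by_year: dict[int, str]) -> dict[int, float]:
    """Occurrences of `term` per year, normalized by that year's word count."""
    freqs = {}
    for year, text in texts_by_year.items():
        words = text.lower().split()
        freqs[year] = Counter(words)[term] / len(words) if words else 0.0
    return freqs

print(term_frequency("war", corpus))  # {1900: 0.2857..., 1950: 0.0, 2000: 0.0}
```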

See also:

What we learned from 5 million books, TED.com, 2011 (video)

Sep
5th
Mon
permalink

Political science: why rejecting expertise has become a campaign strategy


"To be clear. I believe in evolution and trust scientists on global warming. Call me crazy." With that tweet, Jon Huntsman set himself apart from every other candidate in the Republican primary field. Despite his phrasing, Huntsman, who is barely registering in most polls, was clearly hoping that the public would believe most other candidates to be a bit loopy by contrast. (…)

Questions about evolution work in this manner on multiple levels. Obviously, on a scientific level, the evidence for evolution is extremely compelling. If you would rather defer to expertise than study it yourself, every scientific society out there that has voiced an opinion on evolution has supported the science and its place in the biology classroom. Finally, the US court system has determined that creationism and its milder cousin, intelligent design, are inherently religious and therefore cannot be taught as “science” in the public school system. (…)

On the climate side, a number of the candidates have never accepted the expertise of groups like the National Academies of Science; a few others have done so (Gingrich and Pawlenty have both supported policies to limit CO2 emissions) but have since disowned that position. Mitt Romney is no longer sure that the planet is warming at all. And Rick Perry has once more staked out the most extreme position, saying that it’s all just a fraudulent attempt to get grant money. “I think there are a substantial number of scientists who have manipulated data so that they will have dollars rolling into their projects,” he has said. “And I think we are seeing almost weekly, or even daily, scientists are coming forward and questioning the original idea that man-made global warming is what is causing the climate to change.”

Actually, the consensus about anthropogenic climate change doesn’t appear to be changing; the argument that scientists are in it for the money is transparently bogus.

So, what have we learned from this? With the exception of Huntsman, the candidates don’t know science, haven’t bothered to ask someone who does, and, in several cases, don’t even know anything about the settled policy issues (judicial precedent and investigation of claims about fraud). Why would we want these traits in a president?

Actually, some people do

However, the fact is that Huntsman is barely registering in most polls, and the leading candidates in the Republican party are successful in part precisely because they are voicing an opinion that runs counter to expertise. For many in the US, expertise has taken on a negative cultural value; experts are part of an elite that thinks it knows better than the average citizen. (This is accurate, for what it’s worth.) Very few object to that sort of expertise when it comes time to, say, put the space shuttle into orbit, but expertise can become a problem when the experts have reached a consensus that runs against cultural values.

And, for many in our society, scientific expertise has done just that. Abstinence-only sex education has been largely ineffective. Carbon emissions are creating a risk of climate change. Humanity originated via an evolutionary process. All of these findings have threatened various aspects of people’s cultural identity. By rejecting both the science and the expertise behind it, candidates can essentially send a signal that says, “I’m one of you, and I’m with you where it counts.”

This is not some purely partisan phenomenon. On other issues, rejection of scientific information tends to be associated with the political left—the need for animal research and the safety of genetically modified foods spring to mind. These positions, however, are anything but mainstream within the Democratic Party, so candidates have not felt compelled to pander to (or even discuss) them, in most cases. That’s created an awkward asymmetry, one where a single party has a monopoly on public rejection of scientific information and certain kinds of expertise.

For Jon Huntsman, that’s a problem. In an ABC News interview, he argued that the leading candidates’ stance would make them unelectable. "The minute that the Republican Party becomes the party—the anti-science party—we have a huge problem. We lose a whole lot of people who would otherwise allow us to win the election in 2012. When we take a position that isn’t willing to embrace evolution, when we take a position that basically runs counter to what 98 of 100 climate scientists have said, what the National Academy of Science—Sciences has said about what is causing climate change and man’s contribution to it, I think we find ourselves on the wrong side of science, and, therefore, in a losing position." (…)

My biggest concern is that, ultimately, Huntsman may be wrong. We’re in an environment where economic concerns will almost certainly dominate the election. And the campaigns will be covered by a press that cares more about the strategy of what a candidate said than its accuracy, a press that thinks it achieves balance by pretending there are two sides on every issue that merit serious consideration. In that environment, it’s entirely possible that the US electorate may not recognize or care much about the implications of a few scientific questions.

Besides, a candidate who rejects science can apparently use that position to attract the support of somewhere above a quarter of the electorate. That’s not a bad start for a presidential campaign.”

John Timmer, who holds a Bachelor of Arts in Biochemistry from Columbia University and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley, Political science: why rejecting expertise has become a campaign strategy (and why it scares me), ars technica, Sep 5, 2011

"The universe doesn’t care what you believe. The wonderful thing about science is that it doesn’t ask for your faith, it just ask for your eyes."



xkcd, Oct 2011

See also:

☞ Chris Mooney, How liberal and conservative brains are wired differently. Liberals and conservatives don’t just vote differently, they think differently
☞ Paul Nurse, Stamp out anti-science in US politics, New Scientist, 14 Sept 2011
☞ Chris Mooney, Why Republicans Deny Science: The Quest for a Scientific Explanation, The Huffington Post, Jan 11, 2012
☞ John Allen Paulos, Why Don’t Americans Elect Scientists?, NYTimes, Feb 13, 2012.
☞ Study: Conservatives’ Trust in Science Has Fallen Dramatically Since Mid-1970s, American Sociological Association, March 29, 2012.

"Trust in Science Has Also Declined Among People Who Frequently Attend Church (…) While trust in science remained stable among people who self-identified as moderates and liberals in the United States between 1974 and 2010, trust in science fell among self-identified conservatives by more than 25 percent during the same period, according to new research from Gordon Gauchat, a postdoctoral fellow at the University of North Carolina-Chapel Hill’s Cecil G. Sheps Center for Health Services Research.

“You can see this distrust in science among conservatives reflected in the current Republican primary campaign,” said Gauchat, whose study appears in the April issue of the American Sociological Review. “When people want to define themselves as conservatives relative to moderates and liberals, you often hear them raising questions about the validity of global warming and evolution and talking about how ‘intellectual elites’ and scientists don’t necessarily have the whole truth.” (…)

“It also provides evidence that, in the United States, there is a tension between religion and science in some contexts. This tension is evident in public controversies such as that over the teaching of evolution.”

As for why self-identified conservatives were much less likely to trust science in 2010 than they were in the mid-1970s. (…)”

Republican congressman, member of House science committee Paul Broun says evolution, Big Bang theory and embryology are ‘lies straight from the pit of hell’, The Guardian, 6 Oct 2012
Why people believe in strange things, Lapidarium notes

Jul
30th
Sat
permalink

Stewart Brand: ‘Look At the World Through the Eyes Of A Fool’


Q: Has society become too eager to discard things and ideas?

(…) I think we have become too shortsighted. Everything is moving faster, everybody is multitasking. Investments are made for short-term returns, democracies run on short-term election cycles. Speedy progress is great, but it is also chancy. When everything is moving fast, the future looks like it is next week. But what really counts is the future ten or hundred years from now. And we should also bear in mind that the history that matters is not only yesterday’s news but events from a decade or a century or a millennium ago. To balance that, we want to look at the long term: the last ten thousand years, the next ten thousand years. (…)

When NASA released the first photographs of the earth from space in the 1960s, people changed their frame of reference. We began to think differently about the earth, about our environment, about humanity. (…)

There had been many drawings of the earth from space, just like people made images of cities from above before we had hot-air balloons. But they were all wrong. Usually, images of the earth did not include any clouds, no weather, no climate. They also tended to neglect the shadow that much of the earth is usually in. From most angles, the earth appears as a crescent. Only when the sun is directly behind you would you see the whole planet brightly illuminated against the blackness of space. (…)

The question of framing

I think there is always the question of framing: How do we look at things? The first photos of the earth changed the frame. We began to talk more about “humans” and less about Germans or Americans. We started talking about the planet as a whole. That, in a way, gave us the ability to think about global problems like climate change. We did not have the idea of a global solution before. Climate change is a century-sized problem. Never before has humanity tried to tackle something on such a long temporal scale. Both the large scale and the long timeframe have to be taken seriously.

Q: Do you believe in something like a human identity?

In a way, the ideal breakthrough would be to discover alien life. That would give us a clear sense of our humanity. But even without that, we have done pretty well in stepping outside our usual frame of reference and looking at the planet and at the human race from the outside. That’s nice. I would prefer if we didn’t encounter alien intelligence for a while. (…)

Q: So we have to improve the extrapolations and predictions that we make based on present data sets?

We like to think that we are living in a very violent time, that the future looks dark. But the data says that violence has declined every millennium, every century, every decade. The reduction in cruelty is just astounding. So we should not focus too much on the violence that has marked the twentieth century. The interesting question is how we can continue that trend of decreasing violence into the future. What options are open to us to make the world more peaceful? Those are data-based questions. (…)

Q: When you started to publish the Whole Earth Catalogue in 1968, you said that you wanted to create a database so that “anyone on Earth can pick up a telephone and find out the complete information on anything.” Is that the idea of the internet, before the internet?

Right, I had forgotten about that quote. Isn’t it nice that I didn’t have to go through the work of collecting that information, it just happened organically. Some people say to me that I should revive the catalogue and my answer is: The internet is better than any catalogue or encyclopedia could ever be. (…)

I don’t think the form determines the triviality of information or the level of discussion. By having many more opportunities and much lower costs of online participation, we are in a position to really expand and improve those discourses. (…)

When Nicholas Negroponte said a few years ago that every child in the world needed a laptop computer, he was right. Many people were skeptical of his idea, but they have been proven wrong. When you give internet access to people in the developing world, they immediately start forming educational networks. They expand their horizons, children teach their parents how to read and write. (…)

Q: On the back cover of the 1974 Whole Earth Catalogue, it said something similar: “Stay hungry, stay foolish”. Why?

It proposes that a beginner’s mind is the way to look at new things. We need a combination of confidence and of curiosity. It is a form of deep-seated opportunism that goes to the core of our nature and is very optimistic. I haven’t been killed by my foolishness yet, so let’s keep going, let’s take chances. The phrase expresses that our knowledge is always incomplete, and that we have to be willing to act on imperfect knowledge. That allows you to open your mind and explore. It means putting aside the explanations provided by social constructs and ideologies.

I really enjoyed your interview with Wade Davis. He makes a persuasive case for allowing native peoples to keep their cultures intact. That’s the idea behind the Rosetta Project as well. Most Americans are limited by the fact that they only speak one language. Being multilingual is a first step to being more aware of different perspectives on the world. We should expand our cognitive reach. I think there are many ways to do that: Embrace the internet. Embrace science. Travel a lot. Learn about people who are unlike yourself. I spent much of my twenties with Native American tribes, for example. You miss a lot of important stuff if you only follow the beaten path. If you look at the world through the eyes of a fool, you will see more. But I probably hadn’t thought about all of this back in 1974. It was a very countercultural move.

Q: In politics, we often talk about policies that supposedly have no rational alternative. Is that a sign of the stifling effects of ideology?

Ideologies are stories we like to tell ourselves. That’s fine, as long as we remember that they are stories and not accurate representations of the world. When the story gets in the way of doing the right thing, there is something wrong with the story. Many ideologies involve the idea of evil: Evil people, evil institutions, et cetera. Marvin Minsky once said to me that the only real evil is the idea of evil. Once you let that go, the problems become manageable. The idea of pragmatism is that you go with the things that work and cast aside lovely and lofty theories. No theory can be coherent and comprehensive enough to provide a direct blueprint for practical actions. That’s the idea of foolishness again: You work with imperfect theories, but you don’t base your life on them.

Q: So “good” is defined in terms of a pragmatic assessment of “what works”?

Good is what creates more life and more options. That’s a useful frame. The opposite of that would not be evil, but less life and fewer options.”

Stewart Brand, American writer, best known as editor of the Whole Earth Catalog, "Look At the World Through the Eyes Of A Fool", The European, 30.05.2011

See also: Whole Earth Catalogue

Jul
26th
Tue
permalink

Minority rules: Scientists discover tipping point for the spread of ideas

“The same mathematics of networks that governs the interactions of molecules in a cell, neurons in a brain, and species in an ecosystem can be used to understand the complex interconnections between people, the emergence of group identity, and the paths along which information, norms, and behavior spread from person to person to person.” — James Fowler, political scientist at the University of California, San Diego

"Scientists at Rensselaer Polytechnic Institute have found that when just 10 percent of the population holds an unshakable belief, their belief will always be adopted by the majority of the society. The scientists, who are members of the Social Cognitive Networks Academic Research Center (SCNARC) at Rensselaer, used computational and analytical methods to discover the tipping point where a minority belief becomes the majority opinion. The finding has implications for the study and influence of societal interactions ranging from the spread of innovations to the movement of political ideals.

"When the number of committed opinion holders is below 10 percent, there is no visible progress in the spread of ideas. It would literally take the amount of time comparable to the age of the universe for this size group to reach the majority," said SCNARC Director Boleslaw Szymanski, the Claire and Roland Schmitt Distinguished Professor at Rensselaer. "Once that number grows above 10 percent, the idea spreads like flame."


In this visualization, we see the tipping point where minority opinion (shown in red) quickly becomes majority opinion. Over time, the minority opinion grows. Once the minority opinion reaches 10 percent of the population, the network quickly changes as the minority opinion takes over the original majority opinion (shown in green). Credit: SCNARC/Rensselaer Polytechnic Institute

As an example, the ongoing events in Tunisia and Egypt appear to exhibit a similar process, according to Szymanski. “In those countries, dictators who were in power for decades were suddenly overthrown in just a few weeks.”

The findings were published in the July 22, 2011, early online edition of the journal Physical Review E in an article titled “Social consensus through the influence of committed minorities.”

An important aspect of the finding is that the percent of committed opinion holders required to shift majority opinion does not change significantly regardless of the type of network in which the opinion holders are working. In other words, the percentage of committed opinion holders required to influence a society remains at approximately 10 percent, regardless of how or where that opinion starts and spreads in the society.

To reach their conclusion, the scientists developed computer models of various types of social networks. One of the networks had each person connect to every other person in the network. The second model included certain individuals who were connected to a large number of people, making them opinion hubs or leaders. The final model gave every person in the model roughly the same number of connections. The initial state of each of the models was a sea of traditional-view holders. Each of these individuals held a view, but were also, importantly, open minded to other views.

Once the networks were built, the scientists then “sprinkled” in some true believers throughout each of the networks. These people were completely set in their views and unflappable in modifying those beliefs. As those true believers began to converse with those who held the traditional belief system, the tides gradually and then very abruptly began to shift.

"In general, people do not like to have an unpopular opinion and are always seeking to try locally to come to consensus. We set up this dynamic in each of our models," said SCNARC Research Associate and corresponding paper author Sameet Sreenivasan. To accomplish this, each of the individuals in the models “talked” to each other about their opinion. If the listener held the same opinions as the speaker, it reinforced the listener’s belief. If the opinion was different, the listener considered it and moved on to talk to another person. If that person also held this new belief, the listener then adopted that belief.

"As agents of change start to convince more and more people, the situation begins to change," Sreenivasan said. “People begin to question their own views at first and then completely adopt the new view to spread it even further. If the true believers just influenced their neighbors, that wouldn’t change anything within the larger system, as we saw with percentages less than 10.”

The research has broad implications for understanding how opinion spreads. “There are clearly situations in which it helps to know how to efficiently spread some opinion or how to suppress a developing opinion,” said Associate Professor of Physics and co-author of the paper Gyorgy Korniss. “Some examples might be the need to quickly convince a town to move before a hurricane or spread new information on the prevention of disease in a rural village.”

Minority rules: Scientists discover tipping point for the spread of ideas, EurekaAlert, 25 July 2011
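The reported dynamics can be reproduced in miniature. The sketch below is a simplified committed-minority opinion model in the spirit of the paper's two-opinion "binary agreement" model: a fraction p of agents is permanently committed to opinion A, everyone else starts with B, and randomly paired speakers and listeners follow the agree-or-entertain rule described above. The fully connected network, agent count, step count, and seed are illustrative choices, not the paper's actual setup.

```python
# Sketch of committed-minority opinion dynamics (binary agreement model).
# A fraction `p` of agents never abandons opinion 'A'; all parameters are
# illustrative, not taken from the published study.

import random

def run(n: int = 500, p: float = 0.12, steps: int = 200_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    committed = set(range(int(p * n)))  # unshakable holders of opinion 'A'
    opinions = [{"A"} if i in committed else {"B"} for i in range(n)]
    for _ in range(steps):
        speaker, listener = rng.sample(range(n), 2)   # random pair
        word = rng.choice(sorted(opinions[speaker]))  # speaker voices one opinion
        if word in opinions[listener]:                # agreement: both collapse to it
            if speaker not in committed:
                opinions[speaker] = {word}
            if listener not in committed:
                opinions[listener] = {word}
        elif listener not in committed:               # disagreement: the listener
            opinions[listener] = opinions[listener] | {word}  # entertains both views
    return sum(o == {"A"} for o in opinions) / n      # fraction holding only 'A'

print(run(p=0.12))  # above the ~10% tipping point: 'A' typically sweeps
print(run(p=0.05))  # below it: 'B' remains the majority for a very long time
```

Running this for a range of p makes the sharp transition near ten percent visible: below it the committed minority barely dents the majority, above it consensus on A arrives quickly.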

See also:

The Story of Networks
☞ Manuel Castells, Network Theories of Power - video lecture, USCAnnenberg

Jul
15th
Fri
permalink

The Pursuit of Happiness. People who take part in their communities and governments are happier than those who don’t


"Today, economics, with its misapprehension that human beings are cost/benefit calculating machines, has come to dominate our politics and our lives. We’re left with an unnatural obsession with individualism, a single-minded focus on wealth over work, and an anti-government animus. (…)

Economists and leaders have begun to search for alternative ways to value the lives of individuals and evaluate the success of nations. Since many of the questions they’re raising are philosophical, voices from the past may be helpful.

The Greeks, for instance, were very interested in well being. Aristotle thought happiness was the goal of human activity. For him, true happiness was something more than simply “Eat, drink, and be merry,” or even the honor of high position. Real satisfaction didn’t depend on the pleasures of the senses or what others thought of you. You could find genuine happiness only in a life of virtue and just actions. President Kennedy alluded to Aristotle when he defined happiness as “the full use of one’s talents along the lines of excellence.”

For the Greeks, excellence could be manifest only in a city or a community. Since human beings were political animals, the best way to exercise virtue and justice was within the institutions of a great city (the polis). Only beasts and gods could live alone. A solitary person was not fully human. In fact, the Greek word “idiot” means a private person, someone who is not engaged in public life. It was only in a fair and just society that men and women could be fully human—and happy.

This is what the American Revolution was all about. Jefferson declared that the pursuit of happiness was an inalienable right, along with life and liberty. The story goes that Jefferson, on the advice of Benjamin Franklin, substituted the phrase “pursuit of happiness” for the word “property,” which was favored by George Mason. Franklin thought that “property” was too narrow a notion.

But what exactly did “happiness” mean to the colonists? It was a topic of lively discussion in pubs, public squares, broadsheets, and books. Was happiness individual prosperity, or something else?

Conservatives argue that the American Revolution exalted the individual. Certainly, the colonists didn’t want the British Crown telling them what to do. But the Revolution wasn’t just about getting the government out of people’s lives so the Founders could pursue their private desires.

George Washington and Thomas Jefferson had nice houses. They could have enjoyed contented private lives. But it was not just about their property. They believed that you attained happiness, not merely through the goods you accumulated, or in your private life, but through the good that you did in public. People were happy when they controlled their destiny, when their voice was heard, when they participated in public events, when the government did not do things to them, or even for them, but with them.

The American revolutionaries wanted to have their voice heard and to participate in government. After all, their slogan was not “No taxation”—which is such a popular rallying cry today—but “No taxation without representation.” Representation was critical to happiness. The Founders’ long recitation of grievances set out the numerous ways in which they couldn’t control their destiny. They were subject to England, while they wished to be citizens of America. As citizens, they were able to take control of their government and create a just state where the rule of law was respected, domestic tranquility assured, and defense maintained.

As political animals, human beings need a city, a nation, in which to flourish. People can develop their talents only in society. The good society nurtures many talents, and the political system makes that possible by what it rewards and encourages. (…)

This brings me to jobs. After the crash of 2009, banks have been saved, corporations are prospering, and people are still unemployed. My father would have seen something wrong with this picture. He believed having a good job was the key to happiness. “The root problem,” he said, “is in the fact of dependency and uselessness itself. Unemployment means having nothing to do, which means nothing to do with the rest of us. To be without work, to be without use to one’s fellow citizens, is to be in truth the invisible man of whom Ralph Ellison wrote.” (…) “I helped to build this city. I am a participant in its great public ventures. I count.”

It’s not only through our jobs but through participating in public life that we help build the city. In fact, research shows that people who take part in political activities such as voting, advocating for laws, and helping to make government work for themselves and their community are happier than those who don’t. (…)”

Kathleen Kennedy Townsend, a member of the Kennedy family, served as Maryland’s first woman lieutenant governor and now works in finance in Washington, The Pursuit of Happiness: What the Founders Meant—And Didn’t, The Atlantic, June 20, 2011 (Image Credit: Wikimedia Commons)

See also:

☞ Kathleen Kennedy Townsend, What Makes Life Worthwhile? GDP Won’t Tell You, The Atlantic, June 13, 2011
Get Politically Engaged, Get Happy?, Miller-McCune, Feb 14, 2011

Jul
3rd
Sun
permalink

George Lakoff on metaphors, explanatory journalism and the ‘Real Rationality’


Metaphor is a fundamental mechanism of mind, one that allows us to use what we know about our physical and social experience to provide understanding of countless other subjects. Because such metaphors structure our most basic understandings of our experience, they are “metaphors we live by”—metaphors that can shape our perceptions and actions without our ever noticing them. (…)

We are neural beings, (…) our brains take their input from the rest of our bodies. What our bodies are like and how they function in the world thus structures the very concepts we can use to think. We cannot think just anything – only what our embodied brains permit. (…)

The mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical.”

George Lakoff, cited in Daniel Lende, Brainy Trees, Metaphorical Forests: On Neuroscience, Embodiment, and Architecture, Neuroanthropology, Jan 10, 2012.

"For Lakoff, language is not a neutral system of communication, because it is always based on frames, conceptual metaphors, narratives, and emotions. Political thought and language is inherently moral and emotional. (…)

The way people really reason — Real Rationality — comes from new understandings of the brain, something that up-to-date marketers have already taken on board. Enlightenment reason, we now know, was a false theory of rationality.

“Most thought is unconscious. It doesn’t work by mathematical logic. You can’t reason directly about the world—because you can only conceptualize what your brain and body allow, and because ideas are structured using frames,” Lakoff says. “As Charles Fillmore has shown, all words are defined in terms of conceptual frames, not in terms of some putative objective, mind-free world.”

“People really reason using the logic of frames, metaphors, and narratives, and real decision making requires emotion, as Antonio Damasio showed in Descartes’ Error.” 

“A lot of reason does not serve self interest, but is rather about empathizing with and connecting to others.”

People Don’t Decide Using ‘Just the Facts’

Contemporary explanatory journalism, in particular, is prone to the false belief that if the facts are presented to people clearly enough, they will accept and act upon them, Lakoff says. “In the ‘marketplace of ideas’ theory, the best factually based logical argument will always win. But this doesn’t actually happen.”

“Journalists always wonder, ‘We’ve reported on all the arguments, why do people vote wrong?’” Lakoff says. “They’ve missed the main event.”

Many journalists think that “framing” a story or issue is “just about choices of words and manipulation,” and that one can report factually and neutrally without framing. But language itself isn’t neutral. If you study the way the brain processes language, Lakoff says, “every word is defined with respect to frames. You’re framing all the time.” Morality and emotion are already embedded in the way people think and the way people perceive certain words—and most of this processing happens unconsciously. “You can only learn things that fit in with what your brain will allow,” Lakoff says.

A recent example? The unhappy phrase “public option.”

“When you say public, it means ‘government’ to conservatives,” Lakoff explains. “When you say ‘option,’ it means two things: it’s not necessary, it’s just an ‘option,’ and secondly it’s a public policy term, a bureaucratic term. To conservatives, ‘public option’ means government bureaucracy, the very worst thing you could have named this. They could have called it the America Plan. They could have called it doctor-patient care.”

According to Lakoff, because of the conservative success in shaping public discourse through their elaborate communication system, the most commonly used words often have been given a conservative meaning. “Tax relief,” for example, suggests that taxation is an affliction to be relieved.

Don’t Repeat the Language Politicians Use: Decode It

Instead of simply adopting the language politicians use to frame an issue, Lakoff argues, journalists need to analyze the language political figures use and explain the moral content of particular words and arguments.

That means, for example, not just quoting a politician about whether a certain policy infringes or supports American “liberty,” but explaining what he or she means by “liberty,” how this conception of liberty fits into the politician’s overall moral outlook, and how it contrasts with other conceptions of liberty.

It also means spelling out the full implications of the metaphors politicians choose. In the recent coverage of health care reform, Lakoff says, one of the “hidden metaphors” that needed to be explored was whether politicians were talking about health care as a commodity or as a necessity and a right.

Back on the 2007 presidential campaign trail, Lakoff pointed out, Rudy Giuliani called Obama’s health care plans “socialist,” while he himself compared buying health care to buying a flat-screen TV set, using the metaphor of health care as a commodity, not a necessity. A few liberal bloggers were outraged, but several newspapers reported his use of the metaphor without comment or analysis, rather than exploring what it revealed about Giuliani’s worldview. (…)

A Dictionary of the Real Meanings of Words

What would a nonpartisan explanatory journalism be like? To make nonpartisan decoding easier, Lakoff thinks journalists should create an online dictionary of the different meanings of words—“not just a glossary, but a little Wikipedia-like website,” as he puts it. This site would have entries to explain the differences between the moral frameworks of conservatives and progressives, and what they each typically mean when they say words like “freedom.” Journalists across the country could link to the site whenever they sensed a contested word.

A project like this would generate plenty of resistance, Lakoff acknowledges. “What that says is most people don’t know what they think. That’s extremely scary…the public doesn’t want to be told, ‘You don’t know what you think.’” The fact is that about 98 percent of thought is unconscious.

But, he says, people are also grateful when they’re told what’s really going on, and why political figures reason as they do. He would like to see a weekly column in the New York Times and other newspapers decoding language and framing, and analyzing what can and cannot be said politically, and he’d also like to see cognitive science and the study of framing added to journalism school curricula.

Ditch Objectivity, Balance, and ‘The Center’

Lakoff has two further sets of advice for improving explanatory journalism. The first is to ditch journalism’s emphasis on balance. Global warming and evolution are real. Unscientific views are not needed for “balance.”

“The idea that truth is balanced, that objectivity is balanced, is just wrong,” Lakoff says. Objectivity is a valuable ideal when it means unbiased reporting, Lakoff argues. But too often, the need for objectivity means that journalists hide their own judgments of an issue behind “public opinion.” The journalistic tradition of “always having to get a quote from somebody else” when the truth is obvious is foolish, Lakoff says.

So is the naïve reporting of poll data, since poll results can change drastically depending on the language and the framing of the questions. The framing of the questions should be part of reporting on polls.

Finally, Lakoff’s research suggests that many Americans, perhaps 20 per cent, are “biconceptuals” who have both conservative and liberal moral systems in their brains, but apply them to different issues. In some cases they can switch from one ideological position to another, based on the way an issue is framed. These biconceptuals occupy the territory that’s usually labeled “centrist.” “There isn’t such a thing as ‘the center.’ There are just people who are conservative on some issues and liberal on others, with lots of variations occurring. Journalists accept the idea of a “center” with its own ideology, and that’s just not the case,” he says.

Journalists tell “stories.” Those stories are often narratives framed from a particular moral or political perspective. Journalists need to be more upfront about the moral and political underpinnings of the stories they write and the angles they choose.

Journalism Isn’t Neutral–It’s Based on Empathy

“Democracy is based on empathy, with people not just caring, but acting on that care —having social as well as personal responsibility…That’s a view that many journalists have. That’s the reason they become journalists rather than stockbrokers. They have a certain view of democracy. That’s why a lot of journalists are liberals. They actually care about how politics can hurt people, about the social causes of harm. That’s a really different view than the conservative view: if you get hurt and you haven’t taken personal responsibility, then you deserve to get hurt—as when you sign on to a mortgage you can’t pay. Investigative journalism is very much an ethical enterprise, and I think journalists have to ask themselves, ‘What is the ethics behind the enterprise?’ and not be ashamed of it.” Good investigative journalism uncovers real facts, but is done, and should be done, with a moral purpose.

To make a moral story look objective, “journalists tend to pin moral reactions on other people: ‘I’m going to find someone around here who thinks it’s outrageous’…This can make outrageous moral action into a matter of public opinion rather than ethics.”

In some ways, Lakoff’s suggestions were in line with the kind of journalism that one of our partners, the non-profit investigative journalism outlet ProPublica, already does. In its mission statement, ProPublica makes its commitment to “moral force” explicit. “Our work focuses exclusively on truly important stories, stories with ‘moral force,’” the statement reads. “We do this by producing journalism that shines a light on exploitation of the weak by the strong and on the failures of those with power to vindicate the trust placed in them.”

He emphasized the importance of doing follow-ups to investigative stories, rather than letting the public become jaded by a constant succession of outrages that flare on the front page and then disappear. Most of ProPublica’s investigations are ongoing and continually updated on its site.

‘Cognitive Explanation’: A Different Take on ProPublica’s Mission

But Lakoff also had some very nontraditional suggestions about what it would mean for ProPublica to embark on a different kind of explanatory journalism project. “There are two different forms of explanatory journalism. One is material explanation — the kind of investigative reporting now done at ProPublica: who got paid what by whom, what actions resulted in harm, and so on. All crucial,” he noted. “But equally crucial, and not done, is cognitive and communicative explanation.”

“Cognitive explanation depends on what conceptual system lies behind political positions on issues and how the working of people’s brains explains their political behavior. For example, since every word of political discourse evokes a frame and the moral system behind it, the superior conservative communication system reaches most Americans 24/7/365. The more one hears conservative language and not liberal language, the more the brains of those listening get changed. Conservative communication with an absence of liberal communication exerts political pressure on Democrats whose constituents hear conservative language all day every day. Explanatory journalism should be reporting on the causal effects of conservative framing and the conservative communicative superiority.”

“ProPublica seems not to be explicit about conflicting views of what constitutes ‘moral force.’ ProPublica does not seem to be covering the biggest story in the country, the split over what constitutes morality in public policy. Nor is it clear that ProPublica studies the details of framing that permeate public discourse. Instead, ProPublica assumes a view of “moral force” in deciding what to cover and how to cover it.

“For example, ProPublica has not covered the difference in moral reasoning behind the conservative and progressive views on tax policy, health care, global warming and energy policy, and so on for major issue after major issue.

“ProPublica also is not covering a major problem in policy-making — the assumption of classical views of rationality and the ways they have been scientifically disproved in the cognitive and brain sciences.

“ProPublica has not reported on the disparity between the conservative and liberal communication systems, nor has it covered the globalization of conservatism — the international exportation of American conservative strategists, framing, training, and communication networks.

“When ProPublica uncovers facts about organ transplants and nursing qualifications, that’s fine. But where is ProPublica on the reasons for the schisms in our politics? Explanatory journalism demands another level of understanding.

“ProPublica, for all its many virtues, has room for improvement, in much the same way as journalism in general — especially in explanatory journalism. Cognitive and communicative explanation must be added to material explanation.”

What Works In the Brain: Narrative & Metaphor

As for creating Explanatory Journalism that resonates with the way people process information, Lakoff suggested two familiar tools: narrative and metaphor.

The trick to finding the right metaphors for complicated systems, he said, is to figure out what metaphors the experts themselves use in the way they think. “Complex policy is usually understood metaphorically by people in the field,” Lakoff says. What’s crucial is learning how to distinguish the useful frames from the distorting or overly-simplistic ones.

As for explaining policy, Lakoff says, “the problem with this is that policy is made in a way that is not understandable…Communication is always seen as last, as the tail on the dog, whereas if you have a policy that people don’t understand, you’re going to lose. What’s the point of trying to get support for a major health care reform if no one understands it?”

One of the central problems with policy, Lakoff says, is that policy-makers tend to take their moral positions so much for granted that the policies they develop seem to them like the “merely practical” things to do.

Journalists need to restore the real context of policy, Lakoff says, by trying “to get people in the government and policy-makers in the think tanks to understand and talk about what the moral basis of their policy is, and to do this in terms that are understandable.”

George Lakoff, American cognitive linguist and professor of linguistics at the University of California, Berkeley, interviewed by Lois Beckett in Explain yourself: George Lakoff, cognitive linguist, explainer.net, 31 January 2011 (Illustration source)

See also:

Professor George Lakoff: Reason is 98% Subconscious Metaphor in Frames & Cultural Narratives
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks, Lapidarium notes
☞ Metaphor tag on Lapidarium notes

Apr
14th
Thu
permalink

The irrational mind - David Brooks on the role of emotions in politics, policy, and life

New York Times columnist David Brooks said that scientists who study the mind, rather than theologians or philosophers, are yielding the most interesting answers to questions of what constitutes character, ethics, and virtue.

“Why do the most socially attuned people on earth — the people I cover [politicians] — make the most dehumanized decisions?” (…)

“We’ve inherited a view of ourselves that we’re divided selves, that we have reason over here and emotion over here,” Brooks said. “We value things we can quantify … and we tend not to devalue, but to be inarticulate about the rest.”

What scientific research is showing, however, is that the irrational self can hardly be suppressed. Only by embracing that perhaps-unquantifiable need for community, relationships, and other unconscious desires, he told the crowd, can people make headway in pressing problems ranging from education to foreign wars. (…)

Most thinking is unconscious, he said, and studies are showing that emotions, or unconscious responses, are most likely the foundation of the reasoning people use to make decisions.

“Emotions assign values to things,” Brooks said. “If you don’t have that valuation system, then your decision-making landscape is hopelessly flat.”

Furthermore, he added, research suggests that humans are “deeply interdependent creatures” who learn and even forge our personalities from the people we surround ourselves with.

By taking advances in our understanding of human nature into account, Brooks said, politicians and other leaders could benefit from an entirely new range of skills that have little to do with academic expertise or traditional intelligence. He cited research that the ability to learn from others, to monitor biases and shortcomings in our own thinking, to pick out larger patterns from a jumble of information, and to maximize self-restraint, among other traits, play a larger role in our success in work and in life than they’re commonly given credit for. (…)

“People start businesses that should have never been started,” said Harvard Business School professor Max Bazerman. “We fight wars because we want to get the bad guys. … We elect George W. Bush because we’d like to have a beer with him.”

And if policies fail to address social problems, then politicians’ preference for hard data over human experience probably isn’t to blame, said David Kennedy, professor of law and director of the Institute for Global Law and Policy at Harvard Law School. Even if politicians embrace a broader idea of human nature, the process of policymaking will remain conflicted and confusing.

“I don’t see the policy process as the best place where people figure out how to solve a problem,” he said.”

— Katie Koch writing about David Brooks in Learning to love the irrational mind, Harvard Gazette, April 13, 2011.

See also:

☞ Dan Ariely, Your irrational mind
☞ Risk perception: What You Don’t Know Can Kill You

Apr
6th
Wed
permalink

Scott Atran on Why War Is Never Really Rational

Jacques-Louis David, The Intervention of the Sabine Women (1799)

"The art of war," Adam Smith wrote in The Wealth of Nations, “is certainly the noblest of all arts.” In every culture, war is considered society’s most noble endeavor (recent threat of nuclear war and mass annihilation has made a slight dent in this universal passion), although what is considered good and noble in one society may well be considered evil and bad in others. War is usually much better than peace at defining who is the group, what are its boundaries, and what it stands for. War is also more compelling and effective in generating solidarity with something larger and more lasting than ourselves. War compresses history and dramatically changes its course. There is urgency, excitement, ecstasy, and altruistic exaltation in war, a mystic feeling of solidarity with something greater than oneself: a tribe, a nation, a movement, Humanity. That’s also why cable news so loves it.

War is what most clearly defines who we are, for better or worse. And it has always been that way.

The key justification that President Obama invoked in going to war in Libya, and anywhere else around the globe where America’s survival and safety are not directly threatened, is that failure to act “would have been a betrayal of who we are.” The message, to remind ourselves and the world, is that America protects people who are menaced with annihilation for wanting freedom.

Obama’s erstwhile presidential rival, Senator John McCain, countered that while this moral imperative may be laudable, “the reason why we wage wars is to achieve the results of the policy that we state.” And that policy, as the president himself proclaimed, is that “Gaddafi must go.”

Politicians and pundits across the ideological spectrum intone that the military mission remains murky, even contradictory, because, as Adm. Mike Mullen, Chairman of the Joint Chiefs of Staff, put it: “the goals of this [military] campaign aren’t… about seeing [Gaddafi] go. It’s about eliminating his ability to kill his own people.” So what is the sense of fighting to prevent Gaddafi from massacring his people if, as Adm. Mullen conceded, “certainly, potentially, one outcome” is that the dictator remains in power, perhaps to kill again?

Yet the inconsistency between war as a moral imperative and war as political policy runs far wider and deeper than the Libya conflict. It goes to the heart of human nature and the character of society. For despite the popular delusion that war is, or ought to be, primarily a matter of political strategy and pragmatic execution, it almost never is. Squaring the circle of war and politics, morality and material interests, is not just Obama’s or America’s quandary; it is a species-wide dilemma that results from wanting to believe, with Aristotle, that we humans are fundamentally rational beings, when in fact recent advances in psychology and neuroscience strongly indicate that the Enlightenment philosopher David Hume was right to say that “Reason is, and ought only to be, the slave of the passions.”

Models of rational behavior predict many of society’s patterns, such as favored strategies for maximizing profit or the likelihood of criminal behavior in terms of “opportunity costs.” But seemingly irrational behaviors like war — in which the measurable costs often far outweigh the measurable benefits — have stumped thinkers for centuries. The prospect of crippling economic burdens and huge numbers of deaths doesn’t necessarily sway people from their positions on whether going to war is the right or wrong choice. One possible explanation is that people are not weighing the pros and cons for advancing material interests at all, but rather using a moral logic of “sacred values” — convictions that trump all other considerations — that cannot be quantified.

As Darwin noted in The Descent of Man, and Sun Tzu millennia before in The Art of War, the brave person is often intensely moral: undismayed by danger and demonstrably willing to kill and die for his beliefs. In the competition between groups of genetic strangers, such as empires and nations or transnational movements and ideologies, the society with greater bravery will win, all else being equal. Consider the American revolutionaries who, defying the greatest empire of the age, pledged “our lives, our fortunes, our sacred honor” in the cause of “Liberty or Death,” when the desired outcome was highly doubtful.

How many lives should a leader be willing to sacrifice to remove a murderous dictator like Muammar Gaddafi or Saddam Hussein? Most of the theories and models that researchers use to study conflicts like the Libyan or Iraq wars assume that civilians and leaders make a rational calculation: If the total cost of the war is less than the cost of the alternatives, they will support war. But recent studies by the psychologist Jeremy Ginges and me, carried out with the support of the National Science Foundation and the Defense Department, suggest those models are insufficient. Our surveys of people confronted with violent situations in the US, the Middle East, and Africa suggest that people consistently ignore quantifiable costs and benefits, relying instead on “sacred values.”
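One way to make the two decision rules concrete is the following minimal sketch; the notation and the lexicographic form of the second rule are editorial assumptions, not taken from the studies themselves. The rational-actor baseline treats support for war as an expected-utility comparison:

\[
\text{support war} \iff \mathbb{E}[U(\text{war})] \;=\; \sum_i p_i b_i \;-\; \sum_j q_j c_j \;>\; \mathbb{E}[U(\text{alternatives})],
\]

where the \(b_i\) are possible benefits with probabilities \(p_i\) and the \(c_j\) possible costs (lives, treasure) with probabilities \(q_j\). A sacred-values rule, by contrast, is lexicographic: if a protected value \(v\) is perceived to be at stake, the actor chooses whatever action defends \(v\),

\[
a^{*} \;=\; \arg\max_{a}\; \mathbf{1}\!\left[\,a \text{ defends } v\,\right], \quad \text{independent of all } p_i,\, b_i,\, q_j,\, c_j,
\]

and the cost-benefit comparison applies only when no sacred value is in play. On this reading, the findings described below are the signature of the second rule: willingness to act tracks the moral variable and is insensitive to the probability and cost parameters.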

In one study, we asked 656 Israeli settlers in the West Bank about the dismantlement of their settlement as part of a peace agreement with Palestinians. Some subjects were asked about their willingness to engage in nonviolent protest, whereas others were asked about violence. In addition to stating their willingness to resist eviction, violently or not, the subjects rated how effective they thought the action would be and how morally right the decision was. If the settlers were making the decision rationally, in line with mainstream models, their willingness to engage in a particular form of protest should depend mostly on their estimation of its effectiveness. But if sacred values come into play, that calculus should be clouded.

When it came to nonviolent options such as picketing and blocking streets, the rational behavior model predicted settlers’ decisions. But in deciding whether to engage in violence, the settlers defied the rational behavior models. Rather than on how effective they thought violence would be in saving their homes, the settlers’ willingness to engage in violent protest depended only on how morally correct they considered that option to be. We found similar patterns of “principled” resistance to peace settlements and support for violence, including suicide bombings, among Palestinian refugees who felt “sacred values” were at stake, such as recognition of their moral right of return to homes in Israel, even if they expressed no material or practical interest in actually resettling.
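Pictured as a simple model (again an editorial sketch, not the authors’ own specification), the test contrasts two predictors of willingness to protest:

\[
W \;=\; \beta_1 \cdot \text{Effectiveness} \;+\; \beta_2 \cdot \text{MoralRightness} \;+\; \varepsilon .
\]

The rational baseline predicts a dominant \(\beta_1 > 0\) for every form of protest. That is roughly what the nonviolent options showed; for violent resistance, however, the pattern reversed, with \(\beta_1 \approx 0\) and moral rightness (\(\beta_2 > 0\)) the only reliable predictor.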

In a series of follow-up surveys among U.S. and Nigerian participants, we confronted subjects with hypothetical hostage situations and asked them if they would approve of a solution — which was either diplomatic or violent — for freeing the hostages. The chance of success varied in terms of the number of hostages who might die. For example, in one version of the survey, when told that their action would result in all hostages being saved, both groups endorsed the plan presented to them. Told that one hostage would die, however, most “diplomats” became reluctant to endorse the proposed response. Those opting for military action had no such qualms. In fact, the most common response suggested that they would support military action even if 99 of 100 hostages died as a consequence.

These and other studies suggest that most societies have “sacred rules” for which their people would fight and risk serious loss and even die rather than compromise. If people perceive one such rule to have been violated, they may feel morally obliged to retaliate against the wrongdoers — even if the retaliation does more harm than good.

Ongoing neuroimaging studies by our research group, led by Gregory Berns and his neuroeconomics team at Emory University, indicate that sacred values are processed in those parts of the brain that deal with rule-governed behavior (rather than cost-benefit analyses), and are associated with greater emotional activity consistent with sentiments of “moral outrage” when participants perceive a violation of sacred values. In the hostage situation, the abductors were threatening to violate the sacred rule against killing innocent people. That rule was so strong for the participants that they felt morally obliged to meet violence with violence, regardless of the outcome. This is little different from Mr. Obama’s seemingly heartfelt sentiment that “as president, I refused to wait for the images of slaughter and mass graves before taking action.”

Every military strategist understands that even the most carefully thought-out military plan usually dissolves upon contact with the enemy, and that war generally carries a high measure of uncertainty and a likelihood of “unintended consequences.” But even the decision to go to war is never just a product of reason and rational calculation, and thus never just “politics by other means,” despite what von Clausewitz famously stated in his classic study On War. This sentiment of a Prussian regimental officer, formed in the post-Napoleonic era of state interests and strategies to rearrange “the balance of power,” disastrously misled European elites into believing that wars could be started and pursued to a desired end by careful planning (while granting that in the fog of war events sometimes spin out of control). Many of our political and military leaders still believe in this Clausewitzian delusion: it’s a mainstay in the curricula of U.S. war colleges and the international relations departments of top U.S. universities, and of most military and foreign affairs staffs in the world.

In truth, war is almost always an emotional matter of status and pride, of shedding blood and tearing the flesh of others held dear, of dread and awe and of the instinctual needs to escape from fear, to dominate and to avenge. But war is most profoundly an expression of that peculiar aspect of human nature that expresses our animal origins but which also distinguishes our species from all others that struggle and fight for survival: it defines “who we are” in the search for significance in an otherwise uncaring universe.

Unlike other creatures, humans define the groups to which they belong in abstract terms. Often they kill and die not in order to preserve their own lives or those of the people they love, but for the sake of an idea — the conception they have formed of themselves. Call it love of group or God; it matters little in the end. This is “the privilege of absurdity; to which no living creature is subject, but man only,” of which Thomas Hobbes wrote in Leviathan. It is a human trait that likely will not change, and which political leaders must learn to manage — however inescapably murky the task — so that their people will endure in a world where an end to war is no more likely than unending day. But to insist that war make perfect rational sense, where means cost-effectively lead to clear and practical political ends, may impose an inhuman task that any leader can only sidestep or fudge.”

— Scott Atran, an American and French anthropologist at France’s National Center for Scientific Research, the University of Michigan, and John Jay College, and the author of Talking to the Enemy, in Why War Is Never Really Rational, HuffPost, March 29, 2011.

See also:

☞ Scott Atran, God and the Ivory Tower. What we don’t understand about religion just might kill us, Foreign Policy, Aug 6, 2012.
‘We’ vs ‘Others’: Russell Jacoby on why we should fear our neighbors more than strangers
Stephen M. Walt on What Does Social Science Tell Us about Intervention in Libya
The Psychology of Violence (a modern rethink of the psychology of shame and honour in preventing it), Lapidarium notes
The Philosophy of War, Internet Encyclopedia of Philosophy
☞ Jacek Żakowski, How to live with evil - dilemmas in politics (google translation)
Colman McCarthy, Teaching Peace, Hobart and William Smith Colleges, August 30, 2011
Steven Pinker on the History and decline of Violence
Violence tag on Lapidarium notes

Mar
28th
Mon
permalink

'We' vs 'Others': Russell Jacoby on why we should fear our neighbors more than strangers

Titian, “Cain and Abel”, Venice

"Orientalism was ultimately a political vision of reality whose structure promoted the difference between the familiar (Europe, the West, ‘us’) and the strange (the Orient, the East, ‘them’)"Edward Said, Orientalism (1978)

"Academics are thrilled with the "other" and the vagaries of how we represent the foreign. By profession, anthropologists are visitors from afar. We are outsiders, writes an anthropologist, "seeking to understand unfamiliar cultures." Humanists and social theorists also have fallen in love with the "other." A recent paper by the literary critic Toril Moi is titled "Literature, Philosophy, and the Question of the Other." In a recent issue of Signs, a philosopher writes about “Occidental Dreams: Orientalism and History in ‘The Second Sex.’”

The romance with the “other,” the Orient, and the stranger, however, diverts attention from something less sexy: the familiar. For those concerned with strife and violence in the world, like Said, the latter may, in fact, be more critical than the strange and the foreign. If the Lebanese Civil War, which lasted 15 years, can highlight something about how the West represents the East, it can also foreground a neglected truth: The most decisive antagonisms and misunderstandings take place within a community. The history of hatred and violence is, to a surprising degree, a history of brother against brother, not brother against stranger. From Cain and Abel to the religious wars of the 16th and 17th centuries and the civil wars of our own age, it is not so often strangers who elicit hatred, but neighbors.

This observation contradicts both common sense and the collective wisdom of teachers and preachers, who declaim that we fear—sometimes for good reason—the unknown and dangerous stranger. Citizens and scholars alike believe that enemies lurk in the street and beyond the street, where we confront a “clash of civilizations” with foreigners who challenge our way of life.

The truth is more unsettling. From assault to genocide, from assassination to massacre, violence usually emerges from inside the fold rather than outside it. (…)

We may obsess about strangers piloting airplanes into our buildings, but in the United States in any year, roughly five times the number of those killed in the World Trade Center are murdered on the streets or inside their own homes and offices. These regular losses remind us that most criminal violence takes place between people who know each other. Cautious citizens may push for better street lighting, but they are much more likely to be assaulted, even killed, in the light of the kitchen by someone familiar than in a parking garage by a stranger. Like, not unlike, prompts violence.

Civil wars are generally more savage, and bear more lasting consequences, than wars between countries. Many more people died in the American Civil War—at a time when the population was a tenth of what it is today—than in any other American conflict, and its long-term effects probably surpass those of the others. Major bloodlettings of the 20th century—hundreds of thousands to millions of deaths—occurred in civil wars such as the Russian Civil War, the Chinese Civil Wars of 1927-37 and 1945-49, and the Spanish Civil War. More Russian lives were lost in the Russian Civil War that followed World War I than in the Great War itself, for instance.

But who cares about the Russian Civil War? A thousand books and courses dwell on World War I, but few on the Russian Civil War that emerged from it. That war, with its fluid battle lines, uncertain alliances, and clouded beginning, seems too murky. The stew of hostilities is typical of civil wars, however. With some notable exceptions, modern civil wars resist the clear categories of interstate wars. The edges are blurred. Revenge often trumps ideology and politics.

Yet civil strife increasingly characterizes the contemporary world. “Most wars are now civil wars,” announces the first sentence of a World Bank publication. Not only are there more civil wars, but they last longer. The conflicts in southern Sudan have been going on for decades. Lengthy battles between states are rare nowadays. And when states do attack, the fighting generally doesn’t last long (for example, Israel’s monthlong incursion into Lebanon in 2006). The recent wars waged by the United States in Iraq and Afghanistan are notable exceptions.

We live in an era of ethnic, national, and religious fratricide. A new two-volume reference work on “the most severe civil wars since World War II” has 41 entries, from Afghanistan and Algeria to Yemen and Zimbabwe. Over the last 50 years, the number of casualties of intrastate conflicts is roughly five times that of interstate wars. The number of refugees from these conflicts similarly dwarfs those from traditional state-versus-state wars. “Cases such as Afghanistan, Somalia, and Lebanon testify to the economic devastation that civil wars can produce,” note two political scientists. By the indexes of deaths, numbers of refugees, and extent of destruction, they conclude that "civil war has been a far greater scourge than interstate war" in recent decades. In Iraq today—putting aside blame and cause—more Iraqis are killed by their countrymen than by the American military.

"Not surprisingly, there is no treatise on civil war on the order of Carl von Clausewitz's On War,” writes the historian Arno Mayer, “civil wars being essentially wild and savage.”

The iconic book by Carl von Clausewitz, the Prussian military thinker, evokes the spirit of Immanuel Kant, whose writings he studied. Subheadings such as “The Knowledge in War Is Very Simple, but Not, at the Same Time, Very Easy” suggest its philosophical structure. Clausewitz subordinated war to policy, which entailed a rational evaluation of goals and methods. He compared the state to an individual. “Policy” is “the product of its brain,” and war is an option. “No one starts a war—or rather, no one in his senses ought to do so—without first being clear in his mind what he intends to achieve by that war and how he intends to conduct it.” If civilized nations at war “do not put their prisoners to death” or “devastate cities,” he writes, it is because “intelligence plays a larger part in their methods of warfare … than the crude expressions of instinct.”

In civil wars, by contrast, prisoners are put to death and cities destroyed as a matter of course. The ancient Greeks had already characterized civil strife as more violent than traditional war. Plato distinguishes war against outsiders from what he calls factionalized struggles, that is, civil wars. He posits that Greeks practice war against foreigners (“barbarians”), a conflict marked by “enmity and hatred,” but not against one another. When Greeks fight Greeks, he believes, they should temper their violence in anticipation of reconciliation. “They will not, being Greeks, ravage Greek territory nor burn habitations,” nor “lay waste the soil,” nor treat all “men, women, and children” as their enemies. Such, at least, was his hope in the Republic, but the real world often contradicted it, as he knew. His proposition that Greeks should not ravage Greeks challenged the reality in which Greeks did exactly that.

Plato did not have to look further than Thucydides’ account of the Peloponnesian War to find confirmation of the brutality of Greek-on-Greek strife. In a passage often commented on, Thucydides wrote of the seesaw civil strife in Corcyra (Corfu) in 427 BC, which prefigured the savagery of the wider war. When the Athenians approached the island in force, the faction they supported seized the occasion to settle accounts with its adversaries. In Thucydides’ telling, this was a “savage” civil war of Corcyrean against Corcyrean. For the seven days the Athenians stayed in the harbor, Corcyreans “continued to massacre those of their own citizens” they considered enemies. “There was death in every shape and form,” writes Thucydides. “People went to every extreme and beyond it. There were fathers who killed their sons; men were dragged from the temples or butchered on the very altars.” Families turned on families. “Blood ties became more foreign than factional ones.” Loyalty to the faction overrode loyalty to family members, who became the enemy.

Nearly 2,500 years after Thucydides, the presiding judge at a United Nations trial invoked the Greek historian. The judge reflected on what had occurred in the former Yugoslavia. One Duško Tadić stood accused of the torture and murder of Muslims in his hometown in Bosnia-Herzegovina. His actions exemplified a war of ethnic cleansing fueled by resentment and hatred. “Some time ago, yet not far from where the events in this case happened,” something similar occurred, stated a judge in his 1999 opinion. He cited Thucydides’ description of the Corcyrean civil war as one of “savage and pitiless actions.” Then as today, the judge reminded us, men “were swept away into an internecine struggle” in which vengeance supplanted justice.

Today’s principal global conflicts are fratricidal struggles—regional, ethnic, and religious: Iraqi Sunni vs. Iraqi Shiite, Rwandan Tutsi vs. Rwandan Hutu, Bosnian Muslim vs. Balkan Christians, Sudanese southerners vs. Sudanese northerners, perhaps Libyan vs. Libyan. As a Rwandan minister declared about the genocide in which Hutus slaughtered Tutsis: “Your neighbors killed you.” A reporter in northeastern Congo wrote that in seven months of fighting there, several thousand people were killed and more than 100,000 driven from their homes. He commented, "Like ethnic conflicts around the globe, this is fundamentally a fight between brothers: The two tribes—the Hema and the Lendu—speak the same language, marry each other, and compete for the same remote and thickly populated land.”

Somalia is perhaps the signal example of this ubiquitous fratricidal strife. As a Somali-American professor observed, Somalia can claim a “homogeneity rarely known elsewhere in Africa.” The Somali people “share a common language (Somali), a religion (Islam), physical characteristics, and pastoral and agropastoral customs and traditions.” This has not tempered violence. On the contrary.

The proposition that violence derives from kith and kin overturns a core liberal belief that we assault and are assaulted by those who are strangers to us. If that were so, the solution would be at hand: Get to know the stranger. Talk with the stranger. Reach out. The cure for violence is better communication, perhaps better education. Study foreign cultures and peoples. Unfortunately, however, our brother, our neighbor, enrages us precisely because we understand him. Cain knew his brother—he “talked with Abel his brother”—and slew him afterward.

We don’t like this truth. We prefer to fear strangers. We like to believe that fundamental differences pit people against one another, that world hostilities are driven by antagonistic principles about how society should be constituted. To think that scale—economic deprivation, for instance—rather than substance divides the world seems to trivialize the stakes. We opt instead for a scenario of clashing civilizations, such as the hostility between Western and Islamic cultures. The notion of colliding worlds is more appealing than the opposite: conflicts hinging on small differences. A “clash” implies that fundamental principles about human rights and life are at risk.

Samuel Huntington took the phrase “clash of civilizations” from the Princeton University historian Bernard Lewis, who was referring to a threat from the Islamic world. “We are facing a mood and a movement far transcending the level of issues and policies,” Lewis wrote in 1990. “This is no less than a clash of civilizations” and a challenge to “our Judeo-Christian heritage.” For Huntington, “the underlying problem for the West is not Islamic fundamentalism. It is Islam, a different civilization.” (…)

Or consider the words of a Hindu nationalist who addressed the conflict with Indian Muslims. How is unity to come about, she asks? “The Hindu faces this way, the Muslim the other. The Hindu writes from left to right, the Muslim from right to left. The Hindu prays to the rising sun, the Muslim faces the setting sun when praying. If the Hindu eats with the right hand, the Muslim with the left. … The Hindu worships the cow, the Muslim attains paradise by eating beef. The Hindu keeps a mustache, the Muslim always shaves the upper lip.”

Yet the preachers, porte-paroles, and proselytizers may mislead; it is in their interest to do so. What divided the Protestants and Catholics in 16th-century France, the Germans and Jews in 20th-century Europe, and the Shia and Sunni today may be small, not large. But minor differences rankle more than large differences. Indeed, in today’s world, it may be not so much differences but their diminution that provokes antagonism. Here it can be useful to attend to the literary critic René Girard, who also bucks conventional wisdom by signaling the danger in similitude, not difference: “In human relationships, words like ‘sameness’ and ‘similarity’ evoke an image of harmony. If we have the same tastes and like the same things, surely we are bound to get along. But what will happen when we share the same desires?” For Girard, “a single principle” pervades religion and literature: “Order, peace, and fecundity depend on cultural distinctions; it is not these distinctions but the loss of them that gives birth to fierce rivalries and sets members of the same family or social group at one another’s throats.”

Likeness does not necessarily lead to harmony. It may elicit jealousy and anger. Inasmuch as identity rests on what makes an individual unique, similitude threatens the self. The mechanism also operates on social terrain. As cultural groups get absorbed into larger or stronger collectives, they become more anxious—and more prone to defend their dwindling identity. French Canadians—living as they do amid an ocean of English speakers—are more testy about their language than the French in France. Language, however, is just one feature of cultural identification.

Assimilation becomes a threat, not a promise. It spells homogenization, not diversity. The assimilated express bitterness as they register the loss of an identity they wish to retain. Their ambivalence transforms their anger into resentment. They desire what they reject and are consequently unhappy with themselves as well as their interlocutor. Resentment feeds protest and sometimes violence. Insofar as the extreme Islamists sense their world imitating the West, they respond with increased enmity. It is not so much the “other” as it is the absence of otherness that spurs anger. They fear losing themselves by mimicking the West. A Miss World beauty pageant in Nigeria spurred widespread riots by Muslims that left hundreds dead. This could be considered a violent rejection of imitation.

We hate the neighbor we are enjoined to love. Why? Why do small disparities between people provoke greater hatred than the large ones? Perhaps the work of Freud helps chart the underground sources of fratricidal violence. Freud introduced the phrase “the narcissism of minor differences” to describe this phenomenon. He noted that “it is precisely the little dissimilarities in persons who are otherwise alike that arouse feelings of strangeness and enmity between them.”

Freud first broached the narcissism of minor differences in “The Taboo of Virginity,” an essay in which he also took up the “dread of woman.” Is it possible that these two notions are linked? That the narcissism of minor differences, the instigator of enmity, arises from differences between the sexes and, more exactly, man’s fear of woman? What do men fear? “Perhaps,” Freud hazards, the dread is “founded on the difference of woman from man.” More precisely, “man fears that his strength will be taken from him by woman, dreads becoming infected with her femininity” and that he will show himself to be a “weakling.” Might this be a root of violence, man’s fear of being unmanned?

The sources of hatred and violence are many, not singular. There is room for the findings of biologists, sociobiologists, and other scientists. For too long, however, social and literary scholars have dwelled on the “other” and its representation. It is interesting, even uplifting, to talk about how we see and don’t see the stranger. It is less pleasant, however, to tackle the divisiveness and rancor of countrymen and kin. We still have not caught up to Montaigne, with his famous remarks about Brazilian cannibals. He reminded his 16th-century readers not only that the mutual slaughter of Huguenots and Catholics eclipsed the violence of New World denizens—it was enacted on the living, and not on the dead—but that its agents were “our fellow citizens and neighbors.”

— Russell Jacoby, professor of history at the University of California, Los Angeles (UCLA). This essay is adapted from his book Bloodlust: On the Roots of Violence from Cain and Abel to the Present, in Bloodlust: Why we should fear our neighbors more than strangers, The Chronicle Review, March 27, 2011.

See also:

Roger Dale Petersen, Understanding Ethnic Violence: fear, hatred, and resentment in twentieth-century Eastern Europe, Cambridge University Press, 2002.
Stephen M. Walt on What Does Social Science Tell Us about Intervention in Libya
Scott Atran on Why War Is Never Really Rational
Steven Pinker on the History and decline of Violence
Violence tag on Lapidarium notes