Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Archive

Nov 1st, Fri

Bill Gates: ‘If you think connectivity is the key thing, that’s great. I don’t. The world is not flat and PCs are not, in the hierarchy of human needs, in the first five rungs’


The internet is not going to save the world, whatever Mark Zuckerberg and Silicon Valley’s tech billionaires believe. (…) But eradicating disease just might.

Bill Gates describes himself as a technocrat. But he does not believe that technology will save the world. Or, to be more precise, he does not believe it can solve a tangle of entrenched and interrelated problems that afflict humanity’s most vulnerable: the spread of diseases in the developing world and the poverty, lack of opportunity and despair they engender. “I certainly love the IT thing,” he says. “But when we want to improve lives, you’ve got to deal with more basic things like child survival, child nutrition.”

These days, it seems that every West Coast billionaire has a vision for how technology can make the world a better place. A central part of this new consensus is that the internet is an inevitable force for social and economic improvement; that connectivity is a social good in itself. It was a view that recently led Mark Zuckerberg to outline a plan for getting the world’s unconnected 5 billion people online, an effort the Facebook boss called “one of the greatest challenges of our generation”. But asked whether giving the planet an internet connection is more important than finding a vaccination for malaria, the co-founder of Microsoft and the world’s second-richest man does not hide his irritation: “As a priority? It’s a joke.”

Then, slipping back into the sarcasm that often breaks through when he is at his most engaged, he adds: “Take this malaria vaccine, [this] weird thing that I’m thinking of. Hmm, which is more important, connectivity or malaria vaccine? If you think connectivity is the key thing, that’s great. I don’t.” (…)

Gates says now. “The world is not flat and PCs are not, in the hierarchy of human needs, in the first five rungs.” (…)

To Diamandis’s argument that there is more good to be done in the world by building new industries than by giving away money, meanwhile, he has a brisk retort: “Industries are only valuable to the degree they meet human needs. There’s not some – at least in my psyche – this notion of, oh, we need new industries. We need children not to die, we need people to have an opportunity to get a good education.” (…)

Gates describes himself as a natural optimist. But he admits that the fight with the US government seriously challenged his belief that the best outcome would always prevail. With a typically generalising sweep across history, he declares that governments have “worked pretty well on balance in playing their role to improve the human condition” and that in the US since 1776, “the government’s played an absolutely central role and something wonderful has happened”. But that doesn’t settle his unease.

“The closer you get to it and see how the sausage is made, the more you go, oh my God! These guys don’t even actually know the budget. It makes you think: can complex, technocratically deep things – like running a healthcare system properly in the US in terms of impact and cost – can that get done? It hangs in the balance.”

It isn’t just governments that may be unequal to the task. On this analysis, the democratic process in most countries is also straining to cope with the problems thrown up by the modern world, placing responsibilities on voters that they can hardly be expected to fulfil. “The idea that all these people are going to vote and have an opinion about subjects that are increasingly complex – where what seems, you might think … the easy answer [is] not the real answer. It’s a very interesting problem. Do democracies faced with these current problems do these things well?”

An exclusive interview with Bill Gates, The Financial Times, Nov 1, 2013.

Sep 29th, Sun

Kevin Kelly: The Improbable is the New Normal

"The improbable consists of more than just accidents. The internets are also brimming with improbable feats of performance — someone who can run up a side of a building, or slide down suburban roof tops, or stack up cups faster than you can blink. Not just humans, but pets open doors, ride scooters, and paint pictures. The improbable also includes extraordinary levels of super human achievements: people doing astonishing memory tasks, or imitating all the accents of the world. In these extreme feats we see the super in humans.

Every minute a new impossible thing is uploaded to the internet and that improbable event becomes just one of hundreds of extraordinary events that we’ll see or hear about today. The internet is like a lens which focuses the extraordinary into a beam, and that beam has become our illumination. It compresses the unlikely into a small viewable band of everyday-ness. As long as we are online — which is almost all day many days — we are illuminated by this compressed extraordinariness. It is the new normal.

That light of super-ness changes us. We no longer want mere presentations, we want the best, greatest, the most extraordinary presenters alive, as in TED. We don’t want to watch people playing games, we want to watch the highlights of the highlights, the most amazing moves, catches, runs, shots, and kicks, each one more remarkable and improbable than the other.

We are also exposed to the greatest range of human experience, the heaviest person, shortest midgets, longest mustache — the entire universe of superlatives! Superlatives were once rare — by definition — but now we see multiple videos of superlatives all day long, and they seem normal. Humans have always treasured drawings and photos of the weird extremes of humanity (early National Geographics), but there is an intimacy about watching these extremities on video on our phones while we wait at the dentist. They are now much realer, and they fill our heads.

I see no end to this dynamic. Cameras are becoming ubiquitous, so as our collective recorded life expands, we’ll accumulate thousands of videos showing people being struck by lightning. When we all wear tiny cameras all the time, then the most improbable accident, the most superlative achievement, the most extreme actions of anyone alive will be recorded and shared around the world in real time. Soon only the most extraordinary moments of our 6 billion citizens will fill our streams. So henceforth rather than be surrounded by ordinariness we’ll float in extraordinariness. (…)

When the improbable dominates the archive to the point that it seems as if the library contains ONLY the impossible, then these improbabilities don’t feel as improbable. (…)

To the uninformed, the increased prevalence of improbable events will make it easier to believe in impossible things. A steady diet of coincidences makes it easy to believe they are more than just coincidences, right? But to the informed, a slew of improbable events makes it clear that the unlikely sequence, the outlier, the black swan event, must be part of the story. After all, in 100 flips of the penny you are just as likely to get 100 heads in a row as any other sequence. But in both cases, when improbable events dominate our view — when we see an internet river streaming nothing but 100 heads in a row — it makes the improbable more intimate, nearer.
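Kelly’s penny-flip arithmetic is easy to make concrete: any one specific ordered sequence of 100 fair flips, the all-heads run included, has probability 2^-100. What makes “100 heads” feel impossible is the description, which matches exactly one sequence, while “about half heads” matches astronomically many. A minimal Python check:

```python
from fractions import Fraction
from math import comb

# Every specific ordered sequence of 100 fair flips is equally likely:
p_sequence = Fraction(1, 2) ** 100
print(float(p_sequence))   # ~7.9e-31, for all-heads and any other run alike

# Descriptions differ only in how many sequences they cover:
print(1)                   # sequences described by "100 heads in a row"
print(comb(100, 50))       # sequences with exactly 50 heads: ~1.01e29
```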

I am unsure of what this intimacy with the improbable does to us. What happens if we spend all day exposed to the extremes of life, to a steady stream of the most improbable events, and try to run ordinary lives in a background hum of superlatives? What happens when the extraordinary becomes ordinary?

The good news may be that it cultivates in us an expanded sense of what is possible for humans, and for human life, and so expands us. The bad news may be that this insatiable appetite for superlatives leads to dissatisfaction with anything ordinary.”

Kevin Kelly is the founding executive editor of Wired magazine and a former editor/publisher of the Whole Earth Catalog, The Improbable is the New Normal, The Technium, Jan 7, 2013.

Apr 27th, Sat

The Rise of Big Data. How It’s Changing the Way We Think About the World


"In the third century BC, the Library of Alexandria was believed to house the sum of human knowledge. Today, there is enough information in the world to give every person alive 320 times as much of it as historians think was stored in Alexandria’s entire collection — an estimated 1,200 exabytes’ worth. If all this information were placed on CDs and they were stacked up, the CDs would form five separate piles that would all reach to the moon. (…)

Using big data will sometimes mean forgoing the quest for why in return for knowing what. (…)

There will be a special need to carve out a place for the human: to reserve space for intuition, common sense, and serendipity. (…)

Datafication is not the same as digitization, which takes analog content — books, films, photographs — and converts it into digital information, a sequence of ones and zeros that computers can read. Datafication is a far broader activity: taking all aspects of life and turning them into data. Google’s augmented-reality glasses datafy the gaze. Twitter datafies stray thoughts. LinkedIn datafies professional networks.

Once we datafy things, we can transform their purpose and turn the information into new forms of value. For example, IBM was granted a U.S. patent in 2012 for “securing premises using surface-based computing technology” — a technical way of describing a touch-sensitive floor covering, somewhat like a giant smartphone screen. Datafying the floor can open up all kinds of possibilities. The floor could be able to identify the objects on it, so that it might know to turn on lights in a room or open doors when a person entered. Moreover, it might identify individuals by their weight or by the way they stand and walk. (…)

This misplaced trust in data can come back to bite. Organizations can be beguiled by data’s false charms and endow more meaning to the numbers than they deserve. That is one of the lessons of the Vietnam War. U.S. Secretary of Defense Robert McNamara became obsessed with using statistics as a way to measure the war’s progress. He and his colleagues fixated on the number of enemy fighters killed. Relied on by commanders and published daily in newspapers, the body count became the data point that defined an era. To the war’s supporters, it was proof of progress; to critics, it was evidence of the war’s immorality. Yet the statistics revealed very little about the complex reality of the conflict. The figures were frequently inaccurate and were of little value as a way to measure success. Although it is important to learn from data to improve lives, common sense must be permitted to override the spreadsheets. (…)

Ultimately, big data marks the moment when the “information society” finally fulfills the promise implied by its name. The data take center stage. All those digital bits that have been gathered can now be harnessed in novel ways to serve new purposes and unlock new forms of value. But this requires a new way of thinking and will challenge institutions and identities. In a world where data shape decisions more and more, what purpose will remain for people, or for intuition, or for going against the facts? If everyone appeals to the data and harnesses big-data tools, perhaps what will become the central point of differentiation is unpredictability: the human element of instinct, risk taking, accidents, and even error. If so, then there will be a special need to carve out a place for the human: to reserve space for intuition, common sense, and serendipity to ensure that they are not crowded out by data and machine-made answers.

This has important implications for the notion of progress in society. Big data enables us to experiment faster and explore more leads. These advantages should produce more innovation. But at times, the spark of invention becomes what the data do not say. That is something that no amount of data can ever confirm or corroborate, since it has yet to exist. If Henry Ford had queried big-data algorithms to discover what his customers wanted, they would have come back with “a faster horse,” to recast his famous line. In a world of big data, it is the most human traits that will need to be fostered — creativity, intuition, and intellectual ambition — since human ingenuity is the source of progress.

Big data is a resource and a tool. It is meant to inform, rather than explain; it points toward understanding, but it can still lead to misunderstanding, depending on how well it is wielded. And however dazzling the power of big data appears, its seductive glimmer must never blind us to its inherent imperfections. Rather, we must adopt this technology with an appreciation not just of its power but also of its limitations.”

Kenneth Neil Cukier and Viktor Mayer-Schoenberger, The Rise of Big Data, Foreign Affairs, May/June 2013.

See also:

Dirk Helbing on A New Kind Of Socio-inspired Technology
Information tag on Lapidarium notes

Feb 20th, Wed

Albert Bandura on social learning, the origins of morality, and the impact of technological change on human nature


"Technology has changed the speed and the scope of social influence and has really transformed our realities. Social cognitive theory is very compatible with that. Other learning theories were linked to learning by direct experience, but when I look around today, I see that most of our learning is by social modeling and through indirect experiences. Errors can be very costly and you can’t afford to develop our values, our competences, our political systems, our religious systems through trial and error. Modeling shortcuts this process. (…)

With new technologies, we’re essentially transcending our physical environment and more and more of our values and attitudes and behavior are now shaped in the symbolic environment – the symbolic environment is the big one rather than the actual one. The changes are so rapid that there are more and more areas of life now in which the cyber world is really essential. One model can affect millions of people worldwide, it can shape their experiences and behaviors. We don’t have to rely on trial and error.

There’s a new challenge now: When I was growing up, we didn’t have all this technology, so we were heavily involved in personal relationships. Now the cyber world is available, and it’s hard to maintain a balance in the priorities of life. (…)

The internet can provide you with fantastic globalized information – but the problem is this: It undermines our ability for self-regulation or self-management. The first way to undermine productivity is temporizing, namely we’re going to put off what we need to do until tomorrow, when we have the illusion that we’ll have more time. So we’re dragging the stuff with us. But the really big way is detouring, and wireless devices are now giving an infinite detour. They create the illusion of busyness. I talked to the author of a bestseller and I asked him about his writing style. He said: ‘Well, I have to check my e-mails and then I get down to serious writing, but then I get back to the e-mails.’ The challenge of the cyber world is establishing a balance between our digital life and life in the real world. (…)

The origins of morality

Originally our behavior was pretty much shaped by control, by the external consequences of our lives. So the question is: How did we acquire some standards? There are about three or four ways. One: We evaluate reactions to our behavior. We behave in certain ways, in good ways, in bad ways, and then we receive feedback. We begin to adopt standards from how the social environment reacts to our behavior. Two: We see others behaving in certain ways and we are either self-critical or self-approving. Three: We have precepts that tell us what is good and bad. And once we have certain self-sanctions, we have two other potent factors that can influence our behavior: People will behave in certain ways because they want to avoid legal sanctions to their behavior or the social sanctions in their environment. (…)

Many of our theories of morality are abstract. But the primary concern about the acquisition of morality and about the modes of moral reasoning is only one half of the story, the less interesting half. We adopt standards, but we have about eight mechanisms by which we selectively disengage from those standards. So the challenge to explain is not why do people behave in accordance with these standards, but how is it that people can behave cruelly and still feel good about themselves. Our problem is good people doing bad things – and not evil people doing bad things. (…)

Everyday people can behave very badly. In the book I’m writing on that topic I have a long chapter on moral disengagement in the media, in the gun industry, in the tobacco industry, in the corporate world, in the finance industry – there’s fantastic data from the last few years – in terrorism and as an impediment to environmental sustainability. That’s probably the most important area of moral disengagement. We have about forty or fifty years, and if we don’t get our act together, we’ll have a very hard time. It’s going to be awfully crowded on earth and a good part of our cities will be under water. And what are we doing? We don’t have the luxury of time anymore. (…)

Human nature is capable of vindicating behavior. It isn’t that people are bad by nature. But they have a very playful and rewarding lifestyle, filled with gadgets and air conditioning, and they don’t want to give it up. (…)

Q: ‘The story of men is a story about violence, love, power, victory and defeat’ – that’s how poets talk about the course of history. But from an analytic point of view…

A. Bandura: That’s not true for all societies. We assume that aggression is inbred, but some societies are remarkably pacifistic. And we can also see large variations within a society. But the most striking example might be the transformation from warrior societies into peaceful societies. Switzerland is one example. Sweden is another: Those Vikings were out mugging everyone and people would pray for protection: “Save our souls from the fury of the Norsemen!” And now, if you look at that society, it’s hard to find child abuse or domestic violence. Sweden has become a mediator of peace.

Q: In German, there’s the term “Schicksalsgemeinschaft,” which translates as “community of fate”: It posits that a nation is bound together by history. Do you think that’s what defines a society: A common history? Or is it religion, or the language we speak?

A. Bandura: All of the above. We put a lot of emphasis on biological evolution, but what we don’t emphasize is that cultures evolve, too. These changes are transmitted from one generation to another. A few decades ago, the role of women was to be housewives and it was considered sinful to co-habit without being married. If you look at the role of women today, there’s a fantastic transformation in a short period of time; change is accelerated. Homogenization is important, picking things from different cultures, cuisines, music traditions, forms of behavior, and so on. But we have also polarization: Bin Laden’s hate of the West, for example. And there’s hybridization as well. (…)

And society is changing, too. Now it’s considered completely normal to live with your partner without being married. In California, it was only about 40 years ago that homosexuality was treated as a disease. Then people protested, and eventually they got the state to change the diagnostic category to sexual orientation rather than a disease. Psychiatry, under public pressure, changed the diagnostic system. (…)

Q: It’s quite interesting to compare Russia and China. Russia has a free internet, so the reaction to protests is very different than in China. If social networks become increasingly global, do you foresee something like a global set of values as well?

A. Bandura: Yes, but there is another factor here, namely the tremendous power of multinational corporations. They now shape global culture. A lot of these global forces are undermining the collective African society, for example. The society no longer has much control over the economy. In order to restore some power and leverage, societies are going to be organized in unions. We will see more partnerships around the world. (…)

The revolutionary tendency of technology has increased our sense of agency. If I have access to all global knowledge, I would have fantastic capacities to educate myself. (…) The important thing in psychology is that we need a theory of human agency, rather than arguing that we’re controlled by neural networks. In every aspect of our lives we now have a greater capacity for exercising agency. (…)

Q: But at the same time globalization removes us from the forces that shape our environment.

A. Bandura: The problems are powerful transnational forces. They can undermine the capacity to run our own society: Because of what happens in Iran, gas prices might soon hit five dollars per gallon in the US. That’s where the pressure comes from for systems and societies to form blocs or build up leverage to protect the quality of life of their citizens. But we can see that a global culture is emerging. One example is the transformation of the status of women. Oppressive regimes see that women are able to drive cars, and they cannot continue to deny that right to them. We’re really changing norms. Thanks to the ubiquity of television, we’re motivating them and showing them that they have the capability to initiate change. It’s about agency: Change is deeply rooted in the belief that my actions can have an effect in the world.”

Albert Bandura, a psychologist who is the David Starr Jordan Professor Emeritus of Social Science in Psychology at Stanford University. For almost six decades, he has been responsible for contributions to many fields of psychology, including social cognitive theory, therapy and personality psychology, and was also influential in the transition between behaviorism and cognitive psychology. “We have transcended our biology”, The European, Feb 18, 2013.

See also:

‘Human beings are learning machines,’ says philosopher (nature vs. nurture), Lapidarium notes
What Neuroscience Tells Us About Morality: ‘Morality is a form of decision-making, and is based on emotions, not logic’

Aug 10th, Fri

God and the Ivory Tower. What we don’t understand about religion just might kill us

                   
(Illustration: Medieval miniature painting of the Siege of Antioch (1490). The Crusades were a series of military campaigns fought mainly between Christian Europe and Muslims. Shown here is a battle scene from the First Crusade.)

"The era of world struggle between the great secular ideological -isms that began with the French Revolution and lasted through the Cold War (republicanism, anarchism, socialism, fascism, communism, liberalism) is passing on to a religious stage. Across the Middle East and North Africa, religious movements are gaining social and political ground, with election victories by avowedly Islamic parties in Turkey, Palestine, Egypt, Tunisia, and Morocco. As Israel’s National Security Council chief, Gen. Yaakov Amidror (a religious man himself), told me on the eve of Tunisia’s elections last October, “We expect Islamist parties to soon dominate all governments in the region, from Afghanistan to Morocco, except for Israel.”

On a global scale, Protestant evangelical churches (together with Pentecostalists) continue to proliferate, especially in Latin America, but also keep pace with the expansion of fundamentalist Islam in southern Africa and eastern and southern Asia. In Russia, a clear majority of the population remains religious despite decades of forcibly imposed atheism. Even in China, where the government’s commission on atheism has the Sisyphean job of making that country religion-free, religious agitation is on the rise. And in the United States, a majority says it wants less religion in politics, but an equal majority still will not vote for an atheist as president.

But while reams of social scientific analysis have been produced on religion’s less celestial cousins — from the nature of perception and speech to how we rationalize and shop — faith is not a matter that rigorous science has taken seriously. To be sure, social scientists have long studied how religious practices correlate with a wide range of economic, social, and political issues. Yet, for nearly a century after Harvard University psychologist William James’s 1902 masterwork, The Varieties of Religious Experience, there was little serious investigation of the psychological structure or neurological and biological underpinnings of religious belief that determine how religion actually causes behavior. And that’s a problem if science aims to produce knowledge that improves the human condition, including a lessening of cultural conflict and war.

Religion molds a nation in which it thrives, sometimes producing solidarity and sacred causes so powerful that citizens are willing to kill or die for a common good (as when Judea’s Jews around the time of Christ persisted in rebellion unto political annihilation in the face of the Roman Empire’s overwhelming military might). But religion can also hinder a society’s ability to work out differences with others, especially if those others don’t understand what religion is all about. That’s the mess we find ourselves in today, not only among different groups of Americans in the so-called culture wars, but between secular and Judeo-Christian America and many Muslim countries.

Time and again, countries go to war without understanding the transcendent drives and dreams of adversaries who see a very different world. Yet we needn’t fly blindly into the storm.

Science can help us understand religion and the sacred just as it can help us understand the genome or the structure of the universe. This, in turn, can make policy better informed.

Fortunately, the last few years show progress in scientific studies of religion and the sacred, though headwinds remain strong. Across history and cultures, religion has often knit communities together under the rule of sentient, but immaterial deities — that is, spiritual beings whose description is logically contradictory and empirically unfalsifiable. Cross-cultural studies pioneered by anthropologist Pascal Boyer show that these miraculous features — talking bushes, horses that leap into the sky — make lasting impressions on people and thereby increase the likelihood that they will be passed down to the next generation. Implausibility also facilitates cultural transmission in a more subtle manner — fostering adaptability of religious beliefs by opening the door to multiple interpretations (as with metaphors or weekly sermons).

And the greater the investment in outlandishness, the better. This is because adherence to apparently absurd beliefs means incurring costs — surviving without electricity, for example, if you are Amish — which help identify members who are committed to the survival of a group and cannot be lured away. The ease of identifying true believers, in turn, builds trust and galvanizes group solidarity for common defense.

To test this hypothesis, anthropologist Richard Sosis and his colleagues studied 200 communes founded in the United States in the 19th century. If shared religious beliefs really did foster loyalty, they reasoned, then communes formed out of religious conviction should survive longer than those motivated by secular ideologies such as socialism. Their findings were striking: Just 6 percent of the secular communes were still functioning 20 years after their founding, compared with 39 percent of the religious communes.

It is not difficult to see why groups formed for purely rational reasons can be more vulnerable to collapse: Background conditions change, and it might make sense to abandon one group in favor of another. Interestingly, recent research echoes the findings of 14th-century historian Ibn Khaldun, who argued that long-term differences among North African Muslim dynasties with comparable military might “have their origin in religion … [and] group feeling [wherein] mutual cooperation and support flourish.” The more religious societies, he argued, endured the longest.

For this reason, even ostensibly secular countries and transnational movements usually contain important quasi-religious rituals and beliefs. Think of sacred songs and ceremonies, or postulations that “providence” or “nature” bestows equality and inalienable rights (though, for about 99.9 percent of our species’ existence, slavery and oppression of minorities were more standard fare). These sacred values act as moral imperatives that inspire nonrational sacrifices in cooperative endeavors such as war.

Insurgents, revolutionaries, and terrorists all make use of this logic, generating outsized commitment that allows them to resist and often prevail against materially stronger foes. Consider the American revolutionaries who defied the greatest empire of their age by pledging “our Lives, our Fortunes and our sacred Honor” for the cause of “liberty or death.” Surely they were aware of how unlikely they were to succeed, given the vast disparities in material resources, manpower, and training. As Osama Hamdan, the ranking Hamas politburo member for external affairs, put it to me in Damascus, Syria, “George Washington was fighting the strongest military in the world, beyond all reason. That’s what we’re doing. Exactly.”

But the same logic that makes religious and sacred beliefs more likely to endure can make them impervious to compromise. Based on interviews, experiments, and surveys with Palestinians, Israelis, Indonesians, Indians, Afghans, and Iranians, my research with psychologists Jeremy Ginges, Douglas Medin, and others demonstrates that offering people material incentives (large amounts of money, guarantees for a life free of political violence) to compromise sacred values can backfire, increasing stated willingness to use violence. Such backfire effects occur both for convictions with clear religious investment (Jerusalem, sharia law) and for those that are at least initially nonreligious (Iran’s right to a nuclear capability, Palestinian refugees’ right of return).

According to a 2010 study, for example, most Iranians think there is nothing sacred about their government’s nuclear program. But for a sizable minority — 13 percent of the population — the quest for a nuclear capability (more focused on energy than weapons) had, through religious rhetoric, become a sacred subject. This group, which tends to be close to the regime, now believes a nuclear program is bound up with national identity and with Islam itself. As a result, offering material rewards or punishments to abandon the program only increases anger and support for it.

Although this sacralization of initially secular issues confounds standard “business-like” negotiation tactics, my work with political scientist Robert Axelrod interviewing political leaders in the Middle East and elsewhere indicates that strong symbolic gestures (sincere apologies, demonstrating respect for the other’s values) generate surprising flexibility, even among militants, and may enable subsequent material negotiations. Thus, we find that Palestinian leaders and their supporting populations are generally willing to accept Israeli offers of economic improvement only after issues of recognition are addressed. Even purely symbolic statements accompanied by no material action, such as “we recognize your suffering” or “we respect your rights in Jerusalem,” diminish support for violence, including suicide terrorism. This is particularly promising because symbolic gestures tied to religious notions that are open to interpretation might potentially be reframed without compromising their absolute “truth.” For example, Jerusalem might be reconceived less as a place than as a portal to heaven, where earthly access to the portal suffices.

If these things are worth knowing, why do scientists still shun religion?

Part of the reason is that most scientists are staunchly nonreligious. If you look at the prestigious U.S. National Academy of Sciences or Britain’s Royal Society, well over 90 percent of members are non-religious. That may help explain why some of the bestselling books by scientists about religion aren’t about the science of religion as much as the reasons that it’s no longer necessary to believe. “New Atheists” have aggressively sought to discredit religion as the chief cause of much human misery, militating for its demise. They contend that science has now answered questions about humans’ origins and place in the world that only religion sought to answer in the days before evolutionary science, and that humankind no longer needs the broken crutch of faith.

But the idea that we can simply argue away religion has little factual support. Although a recent study by psychologists Will Gervais and Ara Norenzayan indicates that people are less prone to think religiously when they think analytically, other studies suggest that seemingly contrary evidence rarely undermines religious belief, especially among groups welded by ritualized sacrifice in the face of outside threats. Norenzayan and others also find that belief in gods and miracles intensifies when people are primed with awareness of death or when facing danger, as in wartime.

Moreover, the chief complaint against religion — that it is history’s prime instigator of intergroup conflict — does not withstand scrutiny. Religious issues motivate only a small minority of recorded wars. The Encyclopedia of Wars surveyed 1,763 violent conflicts across history; only 123 (7 percent) were religious. A BBC-sponsored “God and War” audit, which evaluated major conflicts over 3,500 years and rated them on a 0-to-5 scale for religious motivation (Punic Wars = 0, Crusades = 5), found that more than 60 percent had no religious motivation. Less than 7 percent earned a rating greater than 3. There was little religious motivation for the internecine Russian and Chinese conflicts or the world wars responsible for history’s most lethal century of international bloodshed.

Indeed, inclusive concepts such as “humanity” arguably emerged with the rise of universal religions. Sociologist Rodney Stark reveals that early Christianity became the Roman Empire’s majority religion not through conquest, but through a social process grounded in trust. Repeated acts of altruism, such as caring for non-Christians during epidemics, facilitated the expansion of social networks that were invested in the religion. Likewise, studies by behavioral economist Joseph Henrich and colleagues on contemporary foragers, farmers, and herders show that professing a world religion is correlated with greater fairness toward passing strangers. This research helps explain what’s going on in sub-Saharan Africa, where Islam is spreading rapidly. In Rwanda, for example, people began converting to Islam in droves after Muslims systematically risked their lives to protect Christians and animists from genocide when few others cared.

Although surprisingly few wars are started by religions, once they start, religion — and the values it imposes — can play a critical role. When competing interests are framed in terms of religious and sacred values, conflict may persist for decades, even centuries. Disputes over otherwise mundane phenomena then become existential struggles, as when land becomes “Holy Land.” Secular issues become sacralized and nonnegotiable, regardless of material rewards or punishments. In a multiyear study, our research group found that Palestinian adolescents who perceived strong threats to their communities and were highly involved in religious ritual were most likely to see political issues, like the right of refugees to return to homes in Israel, as absolute moral imperatives. These individuals were thus opposed to compromise, regardless of the costs. It turns out there may be a neurological component to such behavior: Our work with Gregory Berns and his neuroeconomics team suggests that such values are processed in the brain as duties rather than utilitarian calculations; neuroimaging reveals that violations of sacred values trigger emotional responses consistent with sentiments of moral outrage.

Historical and experimental studies suggest that the more antagonistic a group’s neighborhood, the more tightly that group will cling to its sacred values and rituals. The result is enhanced solidarity, but also increased potential for conflict toward other groups. Investigation of 60 small-scale societies reveals that groups that experience the highest rates of conflict (warfare) endure the costliest rites (genital mutilation, scarification, etc.). Likewise, research in India, Mexico, Britain, Russia, and Indonesia indicates that greater participation in religious ritual in large-scale societies is associated with greater parochial altruism — that is, willingness to sacrifice for one’s own group, such as Muslims or Christians, but not for outsiders — and, in relevant contexts, support for suicide attacks. This dynamic is behind the paradoxical reality that the world finds itself in today: Modern global multiculturalism is increasingly challenged by fundamentalist movements aimed at reviving group loyalty through greater ritual commitments to ideological purity.

So why does it matter that we have moved past the -isms and into an era of greater religiosity? In an age where religious and sacred causes are resurgent, there is urgent need for scientific effort to understand them. Now that humankind has acquired through science the power to destroy itself with nuclear weapons, we cannot afford to let science ignore religion and the sacred, or let scientists simply try to reason them away. Policymakers should leverage scientific understanding of what makes religion so potent a force for both cooperation and conflict, to help increase the one and lessen the other.

Scott Atran, American and French anthropologist at France’s National Center for Scientific Research, the University of Michigan, John Jay College, and ARTIS Research who has studied violence and interviewed terrorists, God and the Ivory Tower, Foreign Policy, Aug 6, 2012.

See also:

Scott Atran on Why War Is Never Really Rational
‘We’ vs ‘Others’: Russell Jacoby on why we should fear our neighbors more than strangers
The Psychology of Violence (a modern rethink of the psychology of shame and honour in preventing it), Lapidarium notes
Religion tag on Lapidarium notes

Jul 24th, Tue

Dirk Helbing on A New Kind Of Socio-inspired Technology

The big unexplored continent in science is actually social science, so we really need to understand much better the principles that make our society work well, and socially interactive systems. Our future information society will be characterized by computers that behave like humans in many respects. In ten years from now, we will have computers as powerful as our brain, and that will really fundamentally change society. Many professional jobs will be done much better by computers. How will that change society? How will that change business? What impacts does that have for science, actually?

There are two big global trends. One is big data. That means in the next ten years we’ll produce as much data as, or even more than, in the past 1,000 years. The other trend is hyperconnectivity. That means the networking of our world is advancing at a rapid pace; we’re creating an Internet of things. So everyone is talking to everyone else, and everything becomes interdependent. What are the implications of that? (…)

But on the other hand, it turns out that we are, at the same time, creating highways for disaster spreading. We see many extreme events, we see problems such as the flash crash, or also the financial crisis. That is related to the fact that we have interconnected everything. In some sense, we have created unstable systems. We can show that many of the global trends that we are seeing at the moment, like increasing connectivity, increasing speed, and increasing complexity, are very good in the beginning, but (and this is kind of surprising) there is a turning point and that turning point can turn into a tipping point that makes the systems shift in an unknown way.

It requires two things to understand our systems: social science and complexity science. Social science, because the computers of tomorrow are basically creating artificial social systems. Just take financial trading today: it’s done by the most powerful computers. These computers are creating a view of the environment; in this case the financial world. They’re making projections into the future. They’re communicating with each other. They have really many features of humans. And that basically establishes an artificial society, which means also we may have all the problems that we are facing in society if we don’t design these systems well. The flash crash is just one of those examples that shows that, if many of those components — the computers in this case — interact with each other, then some surprising effects can happen. And in that case, $600 billion evaporated within 20 minutes.

Of course, the markets recovered, but in some sense, as many solid stocks turned into penny stocks within minutes, it also changed the ownership structure of companies within just a few minutes. That is really a completely new dimension happening when we are building on these fully automated systems, and those social systems can show a breakdown of coordination, tragedies of the commons, crime or cyber war, all these kinds of things will happen if we don’t design them right.

We really need to understand those systems, not just their components. It’s not good enough to have wonderful gadgets like smartphones and computers; each of them working fine in separation. Their interaction is creating a completely new world, and it is very important to recognize that it’s not just a gradual change of our world; there is a sudden transition in the behavior of those systems, as the coupling strength exceeds a certain threshold.
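A toy calculation makes that threshold concrete. In the sketch below (an illustrative branching model, not Helbing’s own), a failure spreads along each of a node’s k links with probability p, so k*p plays the role of the coupling strength: below k*p = 1 cascades stay local, and just above it they abruptly become system-wide:

```python
import numpy as np

# Branching sketch of a tipping point in coupling strength: each failing
# node exposes k neighbors, each of which fails with probability p.
rng = np.random.default_rng(7)

def mean_cascade_size(k, p, trials=500, cap=5000):
    sizes = []
    for _ in range(trials):
        active, size = 1, 1                       # one initial failure
        while active and size < cap:
            active = rng.binomial(active * k, p)  # newly failing nodes
            size += active
        sizes.append(size)
    return float(np.mean(sizes))

for kp in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(f"coupling k*p = {kp:.1f}: mean cascade size "
          f"{mean_cascade_size(k=10, p=kp / 10):.0f}")
# Mean size stays small below k*p = 1, then jumps toward the cap above it:
# a sudden transition, not a gradual worsening.
```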

Traffic flow in a circle

I’d like to demonstrate that for a system that you can easily imagine: traffic flow in a circle. Now, if the density is high enough, then the following will happen: after some time, although every driver is trying hard to go at a reasonable speed, cars will be stopped by a so-called ‘phantom traffic jam.’ That means smooth traffic flow will break down, no matter how hard the drivers will try to maintain speed. The question is, why is this happening? If you would ask drivers, they would say, “hey, there was a stupid driver in front of me who didn’t know how to drive!” Everybody would say that. But it turns out it’s a systemic instability that is creating this problem.

That means a small variation in the speed is amplified over time, and the next driver has to brake a little bit harder in order to compensate for a delayed reaction. That creates a chain reaction among drivers, which finally stops traffic flow. These kinds of cascading effects are all over the place in the network systems that we have created, like power grids, for example, or our financial markets. It’s not always as harmless as in traffic jams. We’re just losing time in traffic jams, so people could say, okay, it’s not a very serious problem. But if you think about crowds, for example: once the crowd reaches a certain density, what happens is a crowd disaster. That means people will die, although nobody wants to harm anybody else. Things will just go out of control. Even though there might be hundreds or thousands of policemen or security forces trying to prevent these things from happening.
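The traffic version of this instability takes only a few lines to reproduce. The sketch below uses the optimal-velocity model of Bando et al., a standard textbook model (not Helbing’s own), in which identical drivers on a ring road relax toward a preferred speed for their current headway; a 10 cm perturbation of one car is enough to break the uniform flow:

```python
import numpy as np

# Phantom traffic jam on a ring road (optimal-velocity model, Bando et al.).
N, L, a = 100, 200.0, 1.0        # cars, road length, driver sensitivity
dt, steps = 0.05, 20_000

def v_opt(headway):
    # Preferred speed as a function of the gap to the car in front.
    return np.tanh(headway - 2.0) + np.tanh(2.0)

x = np.arange(N) * (L / N)       # evenly spaced cars...
x[0] += 0.1                      # ...except one 10 cm perturbation
v = np.full(N, v_opt(L / N))     # start at the uniform-flow speed (~0.96)

for _ in range(steps):
    gap = (np.roll(x, -1) - x) % L               # distance to the car ahead
    v = np.maximum(v + a * (v_opt(gap) - v) * dt, 0.0)
    x = (x + v * dt) % L

print(f"speeds now range {v.min():.2f} .. {v.max():.2f}")
# Uniform flow is unstable at this density: the perturbation grows into a
# stop-and-go wave, slow and fast regions coexisting, a jam no driver caused.
```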

This is really a surprising behavior of these kinds of strongly-networked systems. The question is, what implication does that have for other network systems that we have created, such as the financial system? There is evidence that the fact that now every bank is interconnected with every other bank has destabilized the system. That means that there is a systemic instability in place that makes it so hard to control, or even impossible to control. We see that the big players, and also regulators, have great difficulty getting control of these systems.

That tells us that we need to change our perspective regarding these systems. Those complex systems are no longer characterized by the properties of their components, but by the outcome of the interactions between those components. As a result of those interactions, self-organization is going on in these systems. New emergent properties come up. They can be very surprising, actually, and that means we cannot understand those systems anymore by looking at what we see, which is the components.

We need to have new instruments and tools to understand these kinds of systems. Our intuition will not work here. And that is what we want to create: we want to come up with a new information platform for everybody that is bringing together big data with exa-scale computing, with people, and with crowd sourcing, basically connecting the intelligence of the brains of the world.

One component that is going to measure the state of the world is called the Planetary Nervous System. That will measure not just the physical state of the world and the environmental situation, but it is also very important actually that we learn how to measure social capital, such as trust and solidarity and punctuality and these kinds of things, because this is actually very important for economic value generation, but also for social well-being.

Properties of social capital, like trust, result from social network interactions. We’ve seen that one of the biggest problems of the financial crisis was this evaporation of trust. It has burned tens of thousands of billions of dollars. If we learned how to stabilize trust, or build trust, that would be worth a lot of money, really. Today, however, we’re not considering the value of social capital. It can happen that we destroy it or exploit it, just as we’ve exploited and destroyed our environment. If we learn how much social capital is worth, we will start to protect it. We’ll also take it into account in our insurance policies. Because today, no insurance is taking into account the value of social capital. It’s the material damage that we take into account, but not the social capital. That means, in some sense, we’re underinsured. We’re taking bigger risks than we should.

This is something that we want to learn, how to quantify the fundaments of society, to quantify the social footprint. It means to quantify the implications of our decisions and actions.

The second component, the Living Earth Simulator will be very important here, because that will look at what-if scenarios. It will take those big data generated by the Planetary Nervous System and allow us to look at different scenarios, to explore the various options that we have, and the potential side effects or cascading effects, and unexpected behaviors, because those interdependencies make our global systems really hard to understand. In many cases, we just overlook what would happen if we fix a problem over here: It might have unwanted side effects; in many cases, that is happening in other parts of our world.

We are using supercomputers today in all areas of our development. Like if we are developing a car, a plane or medical tracks or so, supercomputers are being used, also in the financial world. But we don’t have a kind of political or a business flight simulator that helps us to explore different opportunities. I think this is what we can create as our understanding of society progresses. We now have much better ideas of how social coordination comes about, what are the preconditions for cooperation. What are the conditions that create conflict, or crime, or war, or epidemic spreading, in the good and the bad sense?

We’re using, of course, viral marketing today in order to increase the success of our products. But at the same time, also we are suffering from a quick spreading of emerging diseases, or of computer viruses, and Trojan horses, and so on. We need to understand these kinds of phenomena, and with the data and the computer power that is coming up, it becomes within reach to actually get a much better picture of these things.

The third component will be the Global Participatory Platform [pdf]. That basically makes those other tools available for everybody: for business leaders, for political decision-makers, and for citizens. We want to create an open data and modeling platform that creates a new information ecosystem that allows you to create new businesses, to come up with large-scale cooperation much more easily, and to lower the barriers for social, political and economic participation.

So these are the three big elements. We’ll furthermore  build exploratories of society, of the economy and environment and technology, in order to be able to anticipate possible crises, but also to see opportunities that are coming up. Those exploratories will bring these three elements together. That means the measurement component, the computer simulation component, and the participation, the interactiveness.

In some sense, we’re going to create virtual worlds that may look like our real world, copies of our world that allow us to explore policies in advance or certain kinds of planning in advance. Just to make it a little bit more concrete, we could, for example, check out a new airport or a new city quarter before it’s being built. Today we have these architectural plans, and competitions, and then the most beautiful design will win. But then, in practice, it can happen that it doesn’t work so well. People have to stand in line in queues, or are obstructing each other. Many things may not work out as the architect imagined.

What if we populated basically these architectural plans with real people? They could check it out, live there for some months and see how much they like it. Maybe even change the design. That means, the people that would use these facilities and would live in these new quarters of the city could actually participate in the design of the city. In the same sense, you can scale that up. Just imagine Google Earth or Google Street View filled with people, and have something like a serious kind of Second Life. Then we could have not just one history; we can check out many possible futures by actually trying out different financial architectures, or different decision rules, or different intellectual property rights and see what happens.                 

We could have even different virtual planets, with different laws and different cultures and different kinds of societies. And you could choose the planet that you like most. So in some sense, now a new age is opening up with almost unlimited resources. We’re, of course, still living in a material world, in which we have a lot of restrictions, because resources are limited. They’re scarce and there’s a lot of competition for these scarce resources. But information can be multiplied as much as you like. Of course, there is some cost, and also some energy needed for that, but it’s relatively low cost, actually. So we can create really almost infinite new possibilities for creativity, for productivity, for interaction. And it is extremely interesting that we have a completely new world coming up here, absolutely new opportunities that need to be checked out.

But now the question is: how will it all work? Or how would you make it work? Because the information systems that we have created are even more complex than our financial system. We know the financial system is extremely difficult to regulate and to control. How would you want to control an information system of this complexity? I think that cannot be done top-down. We are seeing now a trend that complex systems are run in a more and more decentralized way. We’re learning somehow to use self-organization principles in order to run these kinds of systems. We have seen that in the Internet, we are seeing it for smart grids, but also for traffic control.

I have been working myself on these new ways of self-control. It’s very interesting. Classically one has tried to optimize traffic flow. It’s so demanding that even our fastest supercomputers can’t do that in a strict sense, in real time. That means one needs to make simplifications. But in principle, what one is trying to do is to impose an optimal traffic light control top-down on the city. The supercomputer is assumed to know what is best for all the cars, and that is imposed on the system.

We have developed a different approach where we said: given that there is a large degree of variability in the system, the most important aspect is to have a flexible adaptation to the actual traffic conditions. We came up with a system where traffic flows control the traffic lights. It turns out this makes much better use of scarce resources, such as space and time. It works better for cars, it works better for public transport and for pedestrians and bikers, and it’s good for the environment as well.                 
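In that spirit, here is a deliberately tiny sketch of flow-driven signal control: one intersection, two approaches, arrival rates invented for the example. The green phase switches only after a minimum time and only when the competing queue has built up more pressure. Helbing’s actual self-control scheme is more sophisticated, but the inversion is the same, the flows drive the lights:

```python
import random

# Toy flow-driven traffic light: the queues, not a fixed schedule,
# decide when the signal switches. Arrival rates are made-up numbers.
random.seed(1)
arrive = {"NS": 0.30, "EW": 0.45}    # arrival probability per second
queue = {"NS": 0, "EW": 0}
green, timer, MIN_GREEN = "NS", 0, 10
SERVICE = 1                          # cars released per green second

for t in range(3600):                # one simulated hour
    for d in queue:
        queue[d] += random.random() < arrive[d]
    queue[green] = max(0, queue[green] - SERVICE)
    timer += 1
    other = "EW" if green == "NS" else "NS"
    if timer >= MIN_GREEN and queue[other] > queue[green]:
        green, timer = other, 0      # pressure won: switch the light

print("queues after an hour:", queue)
```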

The age of social innovation

There’s a new kind of socio-inspired technology coming up, now. Society has many wonderful self-organization mechanisms that we can learn from, such as trust, reputation, culture. If we can learn how to implement that in our technological system, that is worth a lot of money; billions of dollars, actually. We think this is the next step after bio-inspired technology.

The next big step is to focus on society. We’ve had an age of physics; we’re now in an age of biology. I think we are entering the age of social innovation as we learn to make sense of this even bigger complexity of society. It’s like a new continent to discover. It’s really fascinating what now becomes understandable with the availability of Big Data about human activity patterns, and it will open a door to a new future.

What will be very important in order to make sense of the complexity of our information society is to overcome the disciplinary silos of science; to think out of the box. Classically we had social sciences, we had economics, we had physics and biology and ecology, and computer science and so on. Now, our project is trying to bring those different fields together, because we’re deeply convinced that without this integration of different scientific perspectives, we can no longer make sense of these hyper-connected systems that we have created.

For example, computer science requires complexity science and social science to understand those systems that have been created and will be created. Why is this? Because the dense networking and the complex interaction between the components create self-organization and emergent phenomena in those systems. The flash crash is just one example that shows that unexpected things can happen. We know that from many systems.

Complexity theory is very important here, but also social science. And why is that? Because the components of these information communication systems are becoming more and more human-like. They’re communicating with each other. They’re making a picture of the outside world. They’re projecting expectations into the future, and they are taking autonomous decisions. That means if those computers interact with each other, it’s creating an artificial social system in some sense.                 

In the same way, social science will need complexity science and computer science. Social science needs the data that computer science and information and communication technology can provide. Now, and even more in the future, those data traces about human activities will eventually allow us to detect patterns and something like laws of human behavior. Only through collaboration with computer science will it be possible to get those data and to make sense of what is actually happening in society. I don’t need to mention that there are obviously complex dynamics going on in society; that means complexity science is needed for social science as well.

In the same sense, we could say complexity science needs social science and computer science to become practical; to go a step beyond talking about butterfly effects and chaos and turbulence, and to make sure that the thinking of complexity science will pervade our thinking in the natural, engineering and social sciences and allow us to understand the real problems of our world. That is the essence: we need to bring these different scientific fields together. We have actually succeeded in building up these integrated communities in many countries all over the world, ready to go as soon as money becomes available.

Big Data is not a solution per se. Even the most powerful machine learning algorithm will not be sufficient to make sense of our world, to understand the principles according to which our world is working. This is important to recognize. The great challenge is to marry data with theories, with models. Only then will we be able to make sense of the useful bits of data. It’s like finding a needle in a haystack: the more data you have, the more difficult it may be, to a certain extent, to find that needle. And there is the danger of over-fitting, of being distracted from the important details. We are certainly already in an age where we’re flooded with information, and our attention cannot actually process all of it. That means there is a danger that this undermines our wisdom, if our attention is attracted by the wrong details of information. So we are confronted with the problem of finding the right institutions and tools and instruments for decision-making.
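
A minimal sketch of that over-fitting danger, assuming nothing beyond NumPy (the data, noise level and polynomial degrees are invented for illustration): ten noisy observations of a simple linear law are fitted almost perfectly by a degree-9 polynomial, which nonetheless generalizes worse than the simple model.

```python
import numpy as np

# Illustrating the danger of over-fitting: flexibility without theory
# chases the noise. The degree-9 polynomial nails the training points,
# but a plain line generalizes far better. Purely illustrative numbers.

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, x_train.size)  # truth: y = 2x, plus noise

x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test                                       # noise-free truth

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE = {train_mse:.4f}, test MSE = {test_mse:.4f}")
```

More data points make the haystack bigger, but only a model of how the data were generated tells you which deviations are needle and which are straw.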

The Living Earth Simulator will basically take the data that is gathered by the Internet, by search requests, and created by sensor networks, and feed it into big computer simulations that are based on models of social, economic and technological behavior. In this way, we’ll be able to look at what-if scenarios. We hope to get a better understanding, for example, of financial systems, and some answers to controversial questions such as: how much leverage is good? Under what conditions is ‘naked short-selling’ beneficial? When does it destabilize markets? To what extent is high-frequency trading good, or can it also have side effects? All these kinds of questions, which are difficult to answer. Or how best to deal with the situation in Europe, where we have trouble, obviously, in Greece, but also contagion effects on other countries and on the rest of the financial system. It would be very good to have the models and the data that allow us actually to simulate these kinds of scenarios and to make better-informed decisions. (…)
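
The interview does not spell out how such what-if simulations would be built, so here is a deliberately tiny stand-in, not the Living Earth Simulator itself: a three-bank contagion model with invented balance sheets, where one default can cascade through interbank exposures.

```python
# Toy what-if scenario: contagion in a hypothetical interbank network.
# All exposures and capital buffers are invented for illustration.

exposures = {            # how much each bank is owed by the others
    "A": {"B": 10, "C": 10},
    "B": {"C": 40},
    "C": {"A": 20},
}
capital = {"A": 25, "B": 35, "C": 15}

def simulate_default(first_failure):
    failed = {first_failure}
    changed = True
    while changed:                      # propagate losses until stable
        changed = False
        for bank, owed in exposures.items():
            if bank in failed:
                continue
            loss = sum(v for debtor, v in owed.items() if debtor in failed)
            if loss > capital[bank]:    # losses exceed capital -> default
                failed.add(bank)
                changed = True
    return failed

for bank in capital:
    print(f"if {bank} fails first -> failed banks: {sorted(simulate_default(bank))}")
```

Even at this scale the what-if logic is visible: change one exposure or one capital buffer, rerun, and the cascade can look entirely different, which is precisely the kind of question a data-fed simulator would let policymakers explore before acting.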

The idea is to have an open platform to create a data and model commons that everybody can contribute to, so people could upload data and models, and others could use them. People would also judge the quality of the data and models, rate them according to their own criteria, and make explicit the criteria according to which they’re doing the rating. But in principle, everybody can contribute and everybody can use it. (…)

We also have much better theories that allow us to make sense of those data. We’re entering an age where we can understand society and the economy much better, namely as complex self-organizing systems.

It will be important to guide us into the future, because we are creating very powerful systems. The information society will transform our society fundamentally, and we shouldn’t just let it happen. We want to understand how it will change our society and what different paths our society may take, and to decide on the one that we want it to take. For that, we need to have a much better understanding.

Now a lot of social activity data are becoming available through Facebook and Twitter and Google search requests and so on. This is, of course, a huge opportunity for business. Businesses are talking about personal data as the new oil, as a new asset class. There’s something like a gold rush going on. That, of course, also holds huge opportunities for science: eventually we may be able to make sense of complex systems such as our society. There are different perspectives on this. They range from people who think that information and communication technologies will eventually create a God’s-eye view, systems that make sense of all human activities and the interactions of people, to others who are afraid of a Big Brother emerging.

The question is how to handle that situation. Some people say we don’t need privacy in society; society is undergoing a transformation, and privacy is no longer needed. As a social scientist, I don’t share this point of view, because public and private are two sides of the same coin: one cannot exist without the other. It is very important, for a society to work, to have social diversity. Today we have learned to appreciate biodiversity, and in the same way we need to think about social diversity, because it’s a motor of innovation. It’s also an important factor in societal resilience. The question now is how all the data that we are creating, and also recommender systems and personalized services, are going to impact people’s decision-making behavior, and society overall.

This is what we need to look at now. How is people’s behavior changing through these kinds of data? How are people changing their behavior when they feel they’re being observed? Europe is quite sensitive about privacy. The project we are working on is actually trying to find a balance between the interests in Big Data of companies, governments and individuals. Basically, we want to develop technologies that allow us to find this balance, to make sure that all three perspectives are actually taken into account: that you can do big business, but at the same time the individual’s privacy is respected; that individuals have more control over their own data, know what is happening with them, and have influence on what is happening with them. (…)

In some sense, we want to create a new data and model commons, a new kind of language, a new public good that allows people to do new things. (…)

My feeling is that actually business will be made on top of this sea of data that’s being created. At the moment data is kind of the valuable resource, right? But in the future, it will probably be a cheap resource, or even a free resource to a certain extent, if we learn how to deal with openness of data. The expensive thing will be what we do with the data. That means the algorithms, the models, and theories that allow us to make sense of the data.”

Dirk Helbing, physicist and professor of sociology, in particular for modelling and simulation, at ETH Zurich – Swiss Federal Institute of Technology, A New Kind Of Socio-inspired Technology, Edge Conversation, June 19, 2012. (Illustration: WSF)

See also:

☞ Dirk Helbing, New science and technology to understand and manage our complex world in a more sustainable and resilient way (pdf) (presentation), ETH Zurich
Why does nature so consistently organize itself into hierarchies? Living Cells Show How to Fix the Financial System
Geoffrey West on Why Cities Keep Growing, Corporations and People Always Die, and Life Gets Faster
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks
Networks tag on Lapidarium notes

May
22nd
Tue
permalink

The reinvention of the night. A history of the night in early modern Europe

                        
(Image: Bridgeman Art Library)

"During the previous generation or so, elites across Europe had moved their clocks forward by several hours. No longer a time reserved for sleep, the night time was now the right time for all manner of recreational and representational purposes. This is what Craig Koslofsky calls “nocturnalisation”, defined as “the ongoing expansion of the legitimate social and symbolic uses of the night”, a development to which he awards the status of “a revolution in early modern Europe”. (…)

The shift from street to court and from day to night represented “the sharpest break in the history of celebrations in the West”. (…) By the time of Louis XIV, all the major events – ballets de cour, operas, balls, masquerades, firework displays – took place at night. (…) The kings, courtiers – and those who sought to emulate them – adjusted their daily timetable accordingly. Unlike Steele’s friend, they rose and went to bed later and later. Henry III of France, who was assassinated in 1589, usually had his last meal at 6 pm and was tucked up in bed by 8. Louis XIV’s day began with a lever at 9 and ended (officially) at around midnight. (…)

As with so much else at Versailles, this was a development that served to distance the topmost elite from the rest of the population. Koslofsky speculates that it was driven by the need to find new sources of authority in a confessionally fragmented age.

More directly – and convincingly – authoritarian was the campaign to “colonize” the night by reclaiming it from the previously dominant marginal groups. The most effective instrument was street-lighting, introduced to Paris in 1667. (…)
In 1673, Madame de Sévigné [wrote]: “We found it pleasant to be able to go, after midnight, to the far end of the Faubourg Saint-Germain”. (…)

Street lighting had made life more difficult for criminals, but also for those who believed in ghosts, devils and things that go bump. Addressing an imaginary atheist in a sermon in 1629, John Donne invited him to look ahead just a few hours until midnight: “wake then; and then dark and alone, Hear God ask thee then, remember that I asked thee now, Is there a God? and if thou darest, say No”. A hundred years later, there were plenty of Europeans prepared to say “No”. In 1729, the Paris police expressed grave anxiety about the spread of irreligion through late-night café discussions of the existence or non-existence of God.”

— Tim Blanning, review of Craig Koslofsky’s "Evening’s Empire. A history of the night in early modern Europe", The reinvention of the night, TLS, Sep 21, 2011.

See also:

☞ Craig Koslofsky, Evening’s Empire — extensive excerpts at Google Books
☞ Benjamin Schwarz, Night Owls, The Atlantic, Apr, 2012

May
17th
Thu
permalink

E. O. Wilson on human evolution, altruism and a ‘new Enlightenment’


“History makes no sense without prehistory, and prehistory makes no sense without biology.”

— E. O. Wilson, Seminars About Long-term Thinking, The Long Now Foundation, Apr 20, 2012.

"Scientific advances are now good enough for us to address coherently questions of where we came from and what we are. But to do so, we need to answer two more fundamental questions. The first is why advanced social life exists in the first place and has occurred so rarely. The second is what are the driving forces that brought it into existence.

A conflict between individual and group-selected traits

"Only the understanding of evolution offers a chance to get a real understanding of the human species. We are determined by the interplay between individual and group selection where individual selection is responsible for much of what we call sin, while group selection is responsible for the greater part of virtue. We’re all in constant conflict between self-sacrifice for the group on the one hand and egoism and selfishness on the other. I go so far as to say that all the subjects of humanities, from law to the creative arts are based upon this play of individual versus group selection. (…) And it is very creative and probably the source of our striving, our inventiveness and imagination. It’s that eternal conflict that makes us unique.

Q: So how do we negotiate this conflict?

E O. W: We don’t. We have to live with it.

Q: Which element of this human condition is stronger?

E O. W: Let’s put it this way: If we would be mainly influenced by group selection, we would be living in kind of an ant society. (…)

Q: What determines which ideology is predominant in a society?

E O. W: If your territory is invaded, then cooperation within the group will be extreme. That’s a human instinct. If you are in a frontier area, however, then we tend to move towards the extreme individual level. That seems to be a good part of the problem still with America. We still think we’re on the frontier, so we constantly try to put forward individual initiative and individual rights and rewards based upon individual achievement. (…)”

Edward O. Wilson, American biologist, researcher (sociobiology, biodiversity), theorist (consilience, biophilia), naturalist (conservationist) and author, Interview with Edward O. Wilson: The Origin of Morals, Der Spiegel, 2013

Eusociality, where some individuals reduce their own reproductive potential to raise others’ offspring, is what underpins the most advanced form of social organization and the dominance of social insects and humans. One of the key ideas to explain this has been kin selection theory or inclusive fitness, which argues that individuals cooperate according to how they are related. I have had doubts about it for quite a while. Standard natural selection is simpler and superior. Humans originated by multilevel selection—individual selection interacting with group selection, or tribe competing against tribe. We need to understand a great deal more about that. (…)

We should consider ourselves as a product of these two interacting and often competing levels of evolutionary selection. Individual versus group selection results in a mix of altruism and selfishness, of virtue and sin, among the members of a society. If we look at it that way, then we have what appears to be a pretty straightforward answer as to why conflicted emotions are at the very foundation of human existence. I think that also explains why we never seem to be able to work things out satisfactorily, particularly internationally.

Q: So it comes down to a conflict between individual and group-selected traits?

Yes. And you can see this especially in the difficulty of harmonizing different religions. We ought to recognize that religious strife is not the consequence of differences among people. It’s about conflicts between creation stories. We have bizarre creation myths and each is characterized by assuring believers that theirs is the correct story, and that therefore they are superior in every sense to people who belong to other religions. This feeds into our tribalistic tendencies to form groups, occupy territories and react fiercely to any intrusion or threat to ourselves, our tribe and our special creation story. Such intense instincts could arise in evolution only by group selection—tribe competing against tribe. For me, the peculiar qualities of faith are a logical outcome of this level of biological organization.

Q: Can we do anything to counter our tribalistic instincts?

I think we are ready to create a more human-centered belief system. I realize I sound like an advocate for science and technology, and maybe I am because we are now in a techno-scientific age. I see no way out of the problems that organized religion and tribalism create other than humans just becoming more honest and fully aware of themselves. Right now we’re living in what Carl Sagan correctly termed a demon-haunted world. We have created a Star Wars civilization but we have Paleolithic emotions, medieval institutions and godlike technology. That’s dangerous. (…)

I’m devoted to the kind of environmentalism that is particularly geared towards the conservation of the living world, the rest of life on Earth, the place we came from. We need to put a lot more attention into that as something that could unify people. Surely one moral precept we can agree on is to stop destroying our birthplace, the only home humanity will ever have.

Q: Do you believe science will help us in time?

We can’t predict what science is going to come up with, particularly on genuine frontiers like astrophysics. So much can change even within a single decade. A lot more is going to happen when the social sciences finally join the biological sciences: who knows what will come out of that in terms of describing and predicting human behavior? But there are certain things that are almost common sense that we should not do.

Q: What sort of things shouldn’t we do?

Continue to put people into space with the idea that this is the destiny of humanity. It makes little sense to continue exploration by sending live astronauts to the moon, and much less to Mars and beyond. It will be far cheaper, and entail no risk to human life, to explore space with robots. It’s a commonly stated idea that we can have other planets to live on once we have used this one up. That is nonsense. We can find what we need right here on this planet for almost infinite lengths of time, if we take good care of it.

A New Enlightenment

Q: What is it important to do now?

The title of my final chapter is “A New Enlightenment”. I think we ought to have another go at the Enlightenment and use that as a common goal to explain and understand ourselves, to take that self-understanding which we so sorely lack as a foundation for what we do in the moral and political realm. This is a wonderful exercise. It is about education, science, evaluating the creative arts, learning to control the fires of organized religion and making a better go of it.

Q: Could you be more concrete about this new Enlightenment?

I would like to see us improving education worldwide and putting a lot more emphasis—as some Asian and European countries have—on science and technology as part of basic education.”

E. O. Wilson, E. O. Wilson: from altruism to a new Enlightenment, New Scientist, 24 April 2012.

"I think science is now up to the job. We need to be harnessing our scientific knowledge now to get a better, science-based self-understanding.

Q: It seems that, in this process, you would like to throw religions overboard altogether?

E O. W: No. That’s a misunderstanding. I don’t want to see the Catholic Church with all of its magnificent art and rituals and music disappear. I just want to have them give up their creation stories, including especially the resurrection of Christ.

Q: That might well be a futile endeavour …

E O. W: There was this American physiologist who was asked if Mary’s bodily ascent from Earth to Heaven was possible. He said, “I wasn’t there; therefore, I’m not positive that it happened or didn’t happen; but of one thing I’m certain: She passed out at 10,000 meters.” That’s where science comes in. Seriously, I think we’re better off with no creation stories.

Q: With this new Enlightenment, will we reach a higher state of humanity?

E O. W: Do we really want to improve ourselves? Humans are a very young species, in geologic terms, and that’s probably why we’re such a mess. We’re still living with all this aggression and ability to go to war. But do we really want to change ourselves? We’re right on the edge of an era of being able to actually alter the human genome. But do we want that? Do we want to create a race that’s more rational and free of many of these emotions? My response is no, because the only thing that distinguishes us from super-intelligent robots are our imperfect, sloppy, maybe even dangerous emotions. They are what makes us human.”

Edward O. Wilson, American biologist, researcher (sociobiology, biodiversity), theorist (consilience, biophilia), naturalist (conservationist) and author, Interview with Edward O. Wilson: The Origin of Morals, originally in P. Bethge, J. Grolle, Wir sind ein Schlamassel (“We Are a Mess”), Der Spiegel, 8/2013.

A “Social Conquest of the Earth”

"Q: What are some striking examples for you of the legacy of this evolutionary process?

Almost everything. All the way from passion at football games, to war, to the constant need to suppress selfish behavior that ranges over into criminal behavior, to the necessary extolling of altruism by groups, to group approval and reward of people who are heroes or altruists.

Constant turmoil occurs in modern human societies and what I’m suggesting is that turmoil is endemic in the way human advanced social behavior originated in the first place. It arose by group selection favoring altruism, versus individual-level selection, which by and large, though not exclusively, favors individual and selfish behavior.

We’re hung in the balance. We’ll never reach either one extreme or the other. One extreme would take us to the level of ants and bees and the other would mean that you have dissolution of society.

Q: One point you make in your book is that this highly social kind of behavior that we’ve evolved has allowed us to be part of the social conquest of earth, but it’s also had an unfortunate effect of endangering a lot of the world’s biodiversity. Does that make you pessimistic? If this is just part of the way we’ve evolved, is there going to be any way out of it?

That’s a very big question. In other words, did the pathway that led us to advanced social behavior and conquest make it inevitable that we will destroy most of what we’ve conquered? That is the question of questions.

I’m optimistic. I think that we can pass from conquerors to stewards. We have the intellectual and moral capacity to do it, but I’ve also felt very strongly that we needed a much better understanding of who we are and where we came from. We need answers to those questions in order to get our bearings toward a successful long-term future, that means a future for ourselves, our species and for the rest of life.

I realize that sounds a little bit like it’s coming from a pulpit but basically that’s what I’ve had in my mind. In writing The Social Conquest of Earth, I very much had in mind that need for self-understanding, and I thought we were very far short, and we remain very far short, of self-understanding. We have a kind of resistance toward honest self-understanding as a species and I think that resistance is due in part to our genetic history. And now, can we overcome it? I think so.”

E. O. Wilson, American biologist, researcher in sociobiology, biodiversity, theorist, naturalist and author, interviewed by Carl Zimmer, What Does E.O. Wilson Mean By a “Social Conquest of the Earth”, Smithsonian.com, March 22, 2012

See also:

Edward O. Wilson “The Social Conquest of Earth”, Fora.tv video, 20 Apr 2012
☞ Richard Dawkins, The descent of Edward Wilson. “A new book on evolution by a great biologist makes a slew of mistakes”, Prospect, May 24, 2012
The Original Colonists, The New York Times, May 11, 2012:
“Mythmaking could never discover the origin and meaning of humanity” — and contemporary philosophy is also irrelevant, having “long ago abandoned the foundational questions about human existence.” The proper approach to answering these deep questions is the application of the methods of science, including archaeology, neuroscience and evolutionary biology. Also, we should study insects.”
Sam Harris on the ‘selfish gene’ and moral behavior
Anthropocene: “the recent age of man”. Mapping Human Influence on Planet Earth, Lapidarium notes
Human Nature. Sapolsky, Maté, Wilkinson, Gilligan, discuss on human behavior and the nature vs. nurture debate
On the Origins of the Arts, Sociobiologist E.O. Wilson on the evolution of culture, Harvard Magazine, May-June 2013
Anthropology tag on Lapidarium notes
May
5th
Sat
permalink

The Paradox of Contemporary Cultural History. We are clinging as never before to the familiar in matters of style and culture

"For most of the last century, America’s cultural landscape—its fashion, art, music, design, entertainment—changed dramatically every 20 years or so. But these days, even as technological and scientific leaps have continued to revolutionize life, popular style has been stuck on repeat, consuming the past instead of creating the new. (…)

The past is a foreign country. Only 20 years ago the World Wide Web was an obscure academic thingamajig. All personal computers were fancy stand-alone typewriters and calculators that showed only text (but no newspapers or magazines), played no video or music, offered no products to buy. E-mail (a new coinage) and cell phones were still novelties. Personal music players required cassettes or CDs. Nobody had seen a computer-animated feature film or computer-generated scenes with live actors, and DVDs didn’t exist. The human genome hadn’t been decoded, genetically modified food didn’t exist, and functional M.R.I. was a brand-new experimental research technique. Al-Qaeda and Osama bin Laden had never been mentioned in The New York Times. China’s economy was less than one-eighth of its current size. CNN was the only general-interest cable news channel. Moderate Republicans occupied the White House and ran the Senate’s G.O.P. caucus. Since 1992, as the technological miracles and wonders have propagated and the political economy has transformed, the world has become radically and profoundly new. (…)

During these same 20 years, the appearance of the world (computers, TVs, telephones, and music players aside) has changed hardly at all, less than it did during any 20-year period for at least a century. The past is a foreign country, but the recent past—the 00s, the 90s, even a lot of the 80s—looks almost identical to the present. This is the First Great Paradox of Contemporary Cultural History. (…)

Madonna to Gaga

20 years after Hemingway published his war novel For Whom the Bell Tolls, a new war novel, Catch-22, made it seem preposterously antique.

Now try to spot the big, obvious, defining differences between 2012 and 1992. Movies and literature and music have never changed less over a 20-year period. Lady Gaga has replaced Madonna, Adele has replaced Mariah Carey—both distinctions without a real difference—and Jay-Z and Wilco are still Jay-Z and Wilco. Except for certain details (no Google searches, no e-mail, no cell phones), ambitious fiction from 20 years ago (Douglas Coupland’s Generation X, Neal Stephenson’s Snow Crash, Martin Amis’s Time’s Arrow) is in no way dated, and the sensibility and style of Joan Didion’s books from even 20 years before that seem plausibly circa-2012. (…)

Nostalgic Gaze

Ironically, new technology has reinforced the nostalgic cultural gaze: now that we have instant universal access to every old image and recorded sound, the future has arrived and it’s all about dreaming of the past. Our culture’s primary M.O. now consists of promiscuously and sometimes compulsively reviving and rejiggering old forms. It’s the rare “new” cultural artifact that doesn’t seem a lot like a cover version of something we’ve seen or heard before. Which means the very idea of datedness has lost the power it possessed during most of our lifetimes. (…)

Loss of Appetite

Look through a current fashion or architecture magazine or listen to 10 random new pop songs; if you didn’t already know they were all things from the 2010s, I guarantee you couldn’t tell me with certainty they weren’t from the 2000s or 1990s or 1980s or even earlier. (The first time I heard a Josh Ritter song a few years ago, I actually thought it was Bob Dylan.) In our Been There Done That Mashup Age, nothing is obsolete, and nothing is really new; it’s all good. I feel as if the whole culture is stoned, listening to an LP that’s been skipping for decades, playing the same groove over and over. Nobody has the wit or gumption to stand up and lift the stylus.

Why is this happening? In some large measure, I think, it’s an unconscious collective reaction to all the profound nonstop newness we’re experiencing on the tech and geopolitical and economic fronts. People have a limited capacity to embrace flux and strangeness and dissatisfaction, and right now we’re maxed out. So as the Web and artificially intelligent smartphones and the rise of China and 9/11 and the winners-take-all American economy and the Great Recession disrupt and transform our lives and hopes and dreams, we are clinging as never before to the familiar in matters of style and culture.

If this stylistic freeze is just a respite, a backward-looking counter-reaction to upheaval, then once we finally get accustomed to all the radical newness, things should return to normal—and what we’re wearing and driving and designing and producing right now will look totally démodé come 2032. Or not. Because rather than a temporary cultural glitch, these stagnant last couple of decades may be a secular rather than cyclical trend, the beginning of American civilization’s new chronic condition, a permanent loss of appetite for innovation and the shockingly new. After all, such a sensibility shift has happened again and again over the last several thousand years, that moment when all great cultures—Egyptian, Roman, Mayan, Islamic, French, Ottoman, British—slide irrevocably into an enervated late middle age. (…)

Plus ça change, plus c’est la même chose has always meant that the constant novelty and flux of modern life is all superficial show, that the underlying essences endure unchanged. But now, suddenly, that saying has acquired an alternative and nearly opposite definition: the more certain things change for real (technology, the global political economy), the more other things (style, culture) stay the same.

But wait! It gets still stranger, because even as we’ve fallen into this period of stylistic paralysis and can’t get up, more people than ever before are devoting more of their time and energy to considering and managing matters of personal style.

And why did this happen? In 1984, a few years after “yuppie” was coined, I wrote an article in Time positing that “yuppies are, in a sense, heterosexual gays. Among middle-class people, after all, gays formed the original two-income households and were the original gentrifiers, the original body cultists and dapper health-club devotees, the trendy homemakers, the refined, childless world travelers.” Gays were the lifestyle avant-garde, and the rest of us followed. (…)

Amateur Stylists

People flock by the millions to Apple Stores (1 in 2001, 245 today) not just to buy high-quality devices but to bask and breathe and linger, pilgrims to a grand, hermetic, impeccable temple to style—an uncluttered, glassy, super-sleek style that feels “contemporary” in the sense that Apple stores are like back-on-earth sets for 2001: A Space Odyssey, the early 21st century as it was envisioned in the mid-20th. And many of those young and young-at-heart Apple cultists-cum-customers, having popped in for their regular glimpse and whiff of the high-production-value future, return to their make-believe-old-fashioned lives—brick and brownstone town houses, beer gardens, greenmarkets, local agriculture, flea markets, steampunk, lace-up boots, suspenders, beards, mustaches, artisanal everything, all the neo-19th-century signifiers of state-of-the-art Brooklyn-esque and Portlandish American hipsterism.

Moreover, tens of millions of Americans, the uncool as well as the supercool, have become amateur stylists—scrupulously attending, as never before, to the details and meanings of the design and décor of their homes, their clothes, their appliances, their meals, their hobbies, and more. The things we own are more than ever like props, the clothes we wear like costumes, the places where we live, dine, shop, and vacation like stage sets. And angry right-wingers even dress in 18th-century drag to perform their protests. Meanwhile, why are Republicans unexcited by Mitt Romney? Because he seems so artificial, because right now we all crave authenticity.

The Second Paradox

So, these two prime cultural phenomena, the quarter-century-long freezing of stylistic innovation and the pandemic obsession with style, have happened concurrently—which appears to be a contradiction, the Second Great Paradox of Contemporary Cultural History. Because you’d think that style and other cultural expressions would be most exciting and riveting when they are unmistakably innovating and evolving.

Part of the explanation, as I’ve said, is that, in this thrilling but disconcerting time of technological and other disruptions, people are comforted by a world that at least still looks the way it did in the past. But the other part of the explanation is economic: like any lucrative capitalist sector, our massively scaled-up new style industry naturally seeks stability and predictability. Rapid and radical shifts in taste make it more expensive to do business and can even threaten the existence of an enterprise. One reason automobile styling has changed so little these last two decades is because the industry has been struggling to survive, which made the perpetual big annual styling changes of the Golden Age a reducible business expense. Today, Starbucks doesn’t want to have to renovate its thousands of stores every few years. If blue jeans became unfashionable tomorrow, Old Navy would be in trouble. And so on. Capitalism may depend on perpetual creative destruction, but the last thing anybody wants is their business to be the one creatively destroyed. Now that multi-billion-dollar enterprises have become style businesses and style businesses have become multi-billion-dollar enterprises, a massive damper has been placed on the general impetus for innovation and change.

It’s the economy, stupid. The only thing that has changed fundamentally and dramatically about stylish objects (computerized gadgets aside) during the last 20 years is the same thing that’s changed fundamentally and dramatically about movies and books and music—how they’re produced and distributed, not how they look and feel and sound, not what they are. This democratization of culture and style has two very different but highly complementary results. On the one hand, in a country where an adorably huge majority have always considered themselves “middle class,” practically everyone who can afford it now shops stylishly—at Gap, Target, Ikea, Urban Outfitters, Anthropologie, Barnes & Noble, and Starbucks. Americans: all the same, all kind of cool! And yet, on the other hand, for the first time, anyone anywhere with any arcane cultural taste can now indulge it easily and fully online, clicking themselves deep into whatever curious little niche (punk bossa nova, Nigerian noir cinema, pre-war Hummel figurines) they wish. Americans: quirky, independent individualists!

We seem to have trapped ourselves in a vicious cycle—economic progress and innovation stagnated, except in information technology; which leads us to embrace the past and turn the present into a pleasantly eclectic for-profit museum; which deprives the cultures of innovation of the fuel they need to conjure genuinely new ideas and forms; which deters radical change, reinforcing the economic (and political) stagnation. I’ve been a big believer in historical pendulum swings—American sociopolitical cycles that tend to last, according to historians, about 30 years. So maybe we are coming to the end of this cultural era of the Same Old Same Old. As the baby-boomers who brought about this ice age finally shuffle off, maybe America and the rich world are on the verge of a cascade of the wildly new and insanely great. Or maybe, I worry some days, this is the way that Western civilization declines, not with a bang but with a long, nostalgic whimper.”

Kurt Andersen, American novelist and journalist, to read full essay click You Say You Want a Devolution?, Vanity Fair, Jan 2012 (Illustration by James Taylor)

See also:

Neal Gabler on The Elusive Big Idea - ‘We are living in a post ideas world where bold ideas are almost passé’
Infinite Stupidity. Social evolution may have sculpted us not to be innovators and creators as much as to be copiers

Mar
20th
Tue
permalink

Nicholas Carr on the evolution of communication technology and our compulsive consumption of information

        

"The term “information age” gets across our sense that we’re engulfed in information in a way that is very different from anything that’s come before. (…)

I think it’s pretty clear that humans have a natural inclination, even compulsion, to seek out information. We want not only to be entertained but to know everything that is going on around us. And so as these different mass media have proliferated, we’ve gone along with the technology and consumed – to put an ugly term on it – more information. (…)

"In “The Shallows” I argue that the Internet fundamentally encourages very rapid gathering of small bits of information – the skimming and scanning of information to quickly get the basic gist of it. What it discourages are therefore the ways of thinking that require greater attentiveness and concentration, everything from contemplation to reflection to deep reading.

The Internet is a hypertext system, which means that it puts lots of links in a text. These links are valuable to us because they allow us to go very quickly between one bit of information and another. But there are studies that compare what happens when a person reads a printed page of text versus when you put links into that text. Even though we may not be conscious of it, a link represents a little distraction, a little division of attention. You can see in the evidence that reading comprehension goes down with hypertext versus plaintext. (…)

The reason why I start with Tom Standage’s book is because we tend to think of the information age as something entirely new. In fact, people have been wrestling with information for many centuries. If I was going to say when the information age started, I would probably say the 15th century with the invention of the mechanical clock, which turned time into a measurable flow, and the printing press, which expanded our ability to tap into other kinds of thinking. The information age has been building ever since then.

Standage covers one very important milestone in that story, which is the building of the telegraph system in the 19th century. The telegraph was the first really efficient system for long-distance, almost instantaneous communication. It’s a short book, a very lively read, and it shows how this ability to throw one’s thoughts across the world changed all aspects of society. It certainly changed the business world. Suddenly you could coordinate a business not just in a local area, but across the country or across oceans. It had a lot of social implications too, as people didn’t have to wait for letters to come over the course of days. And as Standage points out, it inspired a lot of the same hopes and concerns that we have today with the Internet. (…)

If “The Information” is a sprawling, sweeping story of how information has changed over time, one thing it doesn’t get into is the commercial nature of information as a good that is bought and sold. That’s the story Tim Wu tells in ”The Master Switch.” His basic argument is that whenever a new communication medium arises, a similar pattern occurs. The technology starts off as a hobbyist’s passion, democratic and open. Then over time, as it becomes more popular, it starts to be dominated by corporate interests and becomes much more formalised, before eventually being displaced by a new technology.

You see this with radio, for instance. In the beginning, radio was very much a hobbyist’s technology. When people bought a radio back then it wasn’t just a receiver, it was a transmitter. People would both receive and transmit information through their radio – it was an early version of the blogosphere in some ways. Then dominant radio corporations come in, and suddenly radio isn’t a democratic tool for transmitting and receiving information, it’s purely for receiving. Tim Wu tells a series of stories like this, about other media such as television. All of that history is really a backdrop for a discussion of the Internet, which Wu suggests will likely follow the same cycle.

So far, I think we’ve seen that. When the World Wide Web appeared 20 years ago, there was all kinds of utopian, democratic rhetoric about how it was breaking the hold of big corporations over media and communications. You saw a huge explosion of personal websites. But over time you saw corporate interests begin to dominate the web – Google, Facebook and so on. If you look at how much time a user devotes to Facebook, it shows a consolidation and centralisation of web activity onto these large corporate sites. (…)

Matthew Crawford argues that we’re losing our sense of importance of actual physical interaction with the natural world. He says that the richest kind of thinking that’s open to human beings is not thinking that takes place in the mind but thinking that involves both the mind and the body interacting with the world. Whereas when we’re sitting at our computer or looking at our smartphone, we’re in a world of symbols. It seems to me that one of the dangers of the Internet, and the way that the screen mediates all work and other kinds of processing, is that not only are we distancing ourselves from interaction with the world, but we’re beginning to lose sight of the fact that that’s even important. (…)

As more and more of the physical world is operated by software and computers, we shut off interacting with the world. Crawford, in addition to being a political philosopher, is also a motorcycle mechanic. And a lot of the book is simply stories of being a mechanic. One of the points he makes is that people used to know how their cars worked. They could open the hood, see all of the parts of their engine, change their own oil. Now when you open your hood you can’t touch anything and you don’t know how the thing works. We’ve allowed ourselves to be removed from the physical world. We’re told just to look at our GPS screen and forget how the engine works.

Q: A key point about the information age we should mention is that societies have moved from an industrial economy to a service economy, with more people in white-collar jobs and increasing income disparity as a result.

That’s absolutely true. More and more of our basic jobs, due to broad shifts in the economy, involve manipulating symbols, whether it’s words, numbers or images. That too serves to distance ourselves from manual manipulation of the world. We have offloaded all of those jobs to specialists in order to spend more time working with symbols.

Q: Tell us why you’re closing with Gary Shteyngart’s novel “Super Sad True Love Story.”

I think that novelists, and other artists, are only beginning to grapple with the implications of the Internet, smartphones and all of that. Literature provides a different and very valuable way of perceiving those implications, so I decided to end with a novel. This book is both funny and extremely horrifying. It’s set in a future that is very close in some ways to the present. Shteyngart takes phenomena and trends that are around us but we don’t even notice, pushes them a little more extreme, and suddenly it gives you a new way to think about not only where we’re heading but where we already are. (…)

As is true with most dystopian science fiction, I don’t think it’s an attempt to portray what’s going to happen. It’s more an insight into how much we and our societies have changed in a very short time, without really being aware of it. If somebody from even 10 years ago suddenly dropped into the world and saw us all walking down the street staring at these little screens, hitting them with our thumbs, it would seem very strange.

It is becoming more and more normal to monitor your smartphone even while having a conversation with a friend, spouse or child. A couple will go out to a restaurant and the first thing they will each do is stick their iPhone or Android on the table in front of them, basically announcing that they’re not going to give their full attention to the other person. So technology seems to be changing even our relationships and social expectations. (…)

Q: In a hundred years’ time, what do you think the legacy of the early Internet will be?

I think the legacy will both be of enormous benefits – particularly those that can be measured in terms of efficiency and productivity, but also the ability for people to communicate with others – and also of more troubling consequences. We are witnessing an erosion not only of privacy but of the sense that privacy of the individual is important. And we are seeing the commercialisation of processes of communication, affiliation and friendship that used to be considered intimate.

You’re probably right to talk about a hundred years to sort this all out. There are a whole lot of threads to the story that, being in the midst of it, are hard to see properly, and it’s difficult to figure out what the balance of good, bad and indifferent is.

Q: What’s next in the immediate five or 10 years for the information age?

More of the same. Overall I think the general trend, as exemplified by social networks and the evolution of Google, is towards ever smaller bits of information delivered ever more quickly to people who are increasingly compulsive consumers of media and communication products. So I would say more screens, smaller screens, more streams of information coming at us from more directions, and more of us adapting to that way of living and thinking, for better or worse.

Q: So we’re not at the apex of the information age? That peak is yet to come?

All indications are that we’re going to see more rather than less.”

Nicholas Carr, American writer, interviewed by Alec Ash, Our compulsive consumption of information, The Browser - Salon.com, Mar 19, 2012.

See also:

Does Google Make Us Stupid?
Nicholas Carr on what the internet is doing to our brains
Nicholas Carr on Books That Are Never Done Being Written

Jan
6th
Fri
permalink

Why Do Languages Die? Urbanization, the state and the rise of nationalism

       

"The history of the world’s languages is largely a story of loss and decline. At around 8000 BC, linguists estimate that upwards of 20,000 languages may have been in existence. Today the number stands at 6,909 and is declining rapidly. By 2100, it is quite realistic to expect that half of these languages will be gone, their last speakers dead, their words perhaps recorded in a dusty archive somewhere, but more likely undocumented entirely. (…)

The problem with globalization in the latter sense is that it is the result, not a cause, of language decline. (…) It is only when the state adopts a trade language as official and, in a fit of linguistic nationalism, foists it upon its citizens, that trade languages become “killer languages.” (…)

Most importantly, what both of the above answers overlook is that speaking a global language or a language of trade does not necessitate the abandonment of one’s mother tongue. The average person on this planet speaks three or four languages. (…)

The truth is, most people don’t “give up” the languages they learn in their youth. (…) To wipe out a language, one has to enter the home and prevent the parents from speaking their native language to their children.

Given such a preposterous scenario, we return to our question — how could this possibly happen?

One good answer is urbanization. If a Gikuyu and a Giryama meet in Nairobi, they won’t likely speak each other’s mother tongue, but they very likely will speak one or both of the trade languages in Kenya — Swahili and English. Their kids may learn a smattering of words in the heritage languages from their parents, but by the third generation any vestiges of those languages in the family will likely be gone. In other cases, extremely rural communities are drawn to the relatively easier lifestyle in cities, until sometimes entire villages are abandoned. Nor is this a recent phenomenon.

The first case of massive language die-off was probably during the Agrarian (Neolithic) Revolution, when humanity first adopted farming, abandoned the nomadic lifestyle, and created permanent settlements. As the size of these communities grew, so did the language they spoke. But throughout most of history, and still in many areas of the world today, 500 or fewer speakers per language has been the norm. Like the people who spoke them, these languages were constantly in flux. No language could grow very large, because the community that spoke it could only grow so large itself before it fragmented. The language followed suit, soon becoming two languages. Permanent settlements changed all this, and soon larger and larger populations could stably speak the same language. (…)
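
A toy simulation of that fission dynamic, with invented numbers (a 3 percent growth rate per generation, and the 500-speaker norm cited above as the splitting threshold), shows why the count of communities, and hence of proto-languages, multiplies while no single community grows large:

```python
import random

# Toy model of pre-Neolithic language fragmentation: communities grow,
# fission once they exceed a size threshold, and each split sets two
# dialects drifting apart. All parameters are invented for illustration.

random.seed(1)
GROWTH, SPLIT_SIZE = 1.03, 500    # assumed growth rate and fission threshold

communities = [120.0]             # one founding community
for generation in range(200):
    next_gen = []
    for size in communities:
        size *= GROWTH * random.uniform(0.95, 1.05)
        if size > SPLIT_SIZE:     # fission: the language soon becomes two
            next_gen += [size / 2, size / 2]
        else:
            next_gen.append(size)
    communities = next_gen

print(f"communities (~languages): {len(communities)}")
print(f"mean community size: {sum(communities) / len(communities):.0f}")
```

Once permanent settlements remove the splitting step, the same growth compounds inside a single community, and one language can stably accumulate thousands of speakers.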

"In primitive times every migration causes not only geographical but also intellectual separation of clans and tribes. Economic exchanges do not yet exist; there is no contact that could work against differentiation and the rise of new customs. The dialect of each tribe becomes more and more different from the one that its ancestors spoke when they were still living together. The splintering of dialects goes on without interruption. The descendants no longer understand one other.… A need for unification in language then arises from two sides. The beginnings of trade make understanding necessary between members of different tribes. But this need is satisfied when individual middlemen in trade achieve the necessary command of language.”

Ludwig von Mises, Nation, State, and Economy (Online edition, 1919; 1983), Ludwig von Mises Institute, p. 46–47.

Thus urbanization is an important factor in language death. To be sure, the wondrous features of cities that draw immigrants — greater economies of scale, decreased search costs, increased division of labor — are all made possible with capitalism, and so in this sense languages may die for economic reasons. But this is precisely the type of language death that shouldn’t concern us (unless you’re a linguist like me), because urbanization is really nothing more than the demonstrated preferences of millions of people who wish to take advantage of all the fantastic benefits that cities have to offer.

In short, these people make the conscious choice to leave an environment where network effects and sociological benefits exist for speaking their native language, and exchange it for a greater range of economic possibilities, but where no such social benefits for speaking the language exist. If this were the only cause of language death — or even just the biggest one — then there would be little more to say about it. (…)

Far too many well-intentioned individuals are too quick to substitute their valuations for those of the last speakers of indigenous languages this way. Were it up to them, these speakers would be resigned to misery and poverty and deprived of participation in the world’s advanced economies in order that their language might be passed on. To be sure, these speakers themselves often fall victim to the mistaken ideology that one language necessarily displaces or interferes with another.

Although the South African Department of Education is trying to develop teaching materials in the local African languages, for example, many parents are pushing back; they want their children taught only in English. In Dominica, the parents go even further and refuse to even speak the local language, Patwa, to their children.[1] Were they made aware of the falsity of this notion of language displacement, perhaps they would be less quick to stop speaking their language to their children. But the decision is ultimately theirs to make, and theirs alone.

Urbanization, however, is not the only cause of language death. There is another that, I’m sad to say, almost none of the linguists who work on endangered languages give much thought to, and that is the state. The state is the only entity capable of reaching into the home and forcibly altering the process of language socialization in an institutionalized way.

How? The traditional method was simply to kill or remove indigenous and minority populations, as was done as recently as 1923 in the United States in the last conflict of the Indian Wars. More recently this happens through indirect means — whether intentional or otherwise — the primary method of which has been compulsory state schooling.

There is no more pernicious assault on the cultural practices of minority populations than a standardized, Anglified, Englicized compulsory education. It is not just that children are forcibly removed from the socialization process in the home, required to speak an official language and punished (often corporally) for doing otherwise. It is not just that schools redefine success, away from those things valued by the community, and towards those things that make someone a better citizen of the state. No, the most significant impact of compulsory state education is that it ingrains in children the idea that their language and their culture is worthless, of no use in the modern classroom or society, and that it is something that merely serves to set them apart negatively from their peers, as an object of their vicious torment.

But these languages clearly do have value, if for no other reason than simply because people value them. Local and minority languages are valued by their speakers for all sorts of reasons, whether it be for use in the local community, communicating with one’s elders, a sense of heritage, the oral and literary traditions of that language, or something else entirely. Again, the praxeologist is not in a position to evaluate these beliefs. The praxeologist merely notes that free choice in language use and free choice in association, one not dictated by the edicts of the state, will best satisfy the demand of individuals, whether for minority languages or lingua francas. What people find useful, they will use.

By contrast, the state values none of these things. For the state, the goal is to bind individuals to itself, to an imagined homogeneous community of good citizens, rather than their local community. National ties trump local ones in the eyes of the state. Free choice in association is disregarded entirely. And so the state forces many indigenous people to become members of a foreign community, where they are a minority and their language is scorned, as in the case of boarding schools. Whereas at home, mastering the native language is an important part of functioning in the community and earning prestige, and thus something of value, at school it becomes a black mark and a detriment. Given the prisonlike way schools are run, and how they exhibit similar intense (and sometimes dangerous) pressures from one’s peers, minority-language-speaking children would be smart to disassociate themselves as quickly as possible from their cultural heritage.

Mises himself, though sometimes falling prey to common fallacies regarding language like linguistic determinism and ethnolinguistic isomorphism, was aware of this distinction between natural language decline and language death brought on by the state. (…)

This is precisely what the Bureau of Indian Affairs accomplished by coercing indigenous children into attending boarding schools. Those children were cut off from their culture and language — their nation — until they had effectively assimilated American ideologies regarding minority languages, namely, that English is good and all else is bad.

Nor is this the only way the state affects language. The very existence of a modern nation-state, and the ideology it encompasses, is antithetical to linguistic diversity. It is predicated on the idea of one state, one nation, one people. In Nation, State, and Economy, Mises points out that, prior to the rise of nationalism in the 17th and 18th centuries, the concept of a nation did not refer to a political unit like state or country as we think of it today.

A “nation” instead referred to a collection of individuals who share a common history, religion, cultural customs and — most importantly — language. Mises even went so far as to claim that “the essence of nationality lies in language.”[2] The “state” was a thing apart, referring to the nobility or princely state, not a community of people (hence Louis XIV’s famous quip, “L’état c’est moi.”).[3] In that era, a state might consist of many nations, and a nation might subsume many states.

The rise of nationalism changed all this. As Robert Lane Greene points out in his excellent book, You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity,

The old blurry linguistic borders became inconvenient for nationalists. To build nations strong enough to win themselves a state, the people of a would-be nation needed to be welded together with a clear sense of community. Speaking a minority dialect or refusing to assimilate to a standard wouldn’t do.[4]

Mises himself elaborated on this point. Despite his belief in the value of a liberal democracy, which would remain with him for the rest of his life, Mises realized early on that the imposition of democracy over multiple nations could only lead to hegemony and assimilation:

In polyglot territories, therefore, the introduction of a democratic constitution does not mean the same thing at all as introduction of democratic autonomy. Majority rule signifies something quite different here than in nationally uniform territories; here, for a part of the people, it is not popular rule but foreign rule. If national minorities oppose democratic arrangements, if, according to circumstances, they prefer princely absolutism, an authoritarian regime, or an oligarchic constitution, they do so because they well know that democracy means the same thing for them as subjugation under the rule of others.[5]

From the ideology of nationalism was also born the principle of irredentism, the policy of incorporating historically or ethnically related peoples into the larger umbrella of a single state, regardless of their linguistic differences. As Greene points out, for example,

By one estimate, just 2 or 3 percent of newly minted “Italians” spoke Italian at home when Italy was unified in the 1860s. Some Italian dialects were as different from one another as modern Italian is from modern Spanish.[6]

This in turn prompted the Italian statesman Massimo d’Azeglio (1798–1866) to say, “We have created Italy. Now we need to create Italians.” And so these Italian languages soon became yet another casualty of the nation-state.

Mises once presciently predicted that,

If [minority nations] do not want to remain politically without influence, then they must adapt their political thinking to that of their environment; they must give up their special national characteristics and their language.[7]

This is largely the story of the world’s languages. It is, as we have seen, the history of the state, a story of nationalistic furor, and of assimilation by force. Only when we abandon this socialist and utopian fantasy of one state, one nation, one people will this story begin to change.”

Danny Hieber is a linguist working to document and revitalize the world’s endangered languages, Why Do Languages Die?, Ludwig von Mises Institute, Jan 04, 2012. (Illustration: The Evolution of the Armenian Alphabet)

[1] Amy L. Paugh, Playing With Languages: Children and Change in a Caribbean Village, Berghahn Books, 2012.
[2] Ludwig von Mises, Human Action: A Treatise on Economics (Scholar’s Edition), Ludwig von Mises Institute, Auburn, AL, 2010, p. 37.
[3] “I am the state.”
[4] Robert Lane Greene, You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity (Kindle Edition), Delacorte Press, 2011, p. 132.
[5] Mises, Nation, State, and Economy, p. 77.
[6] Greene, You Are What You Speak, p. 141.
[7] Mises, Nation, State, and Economy, p. 77.

“Isn’t language loss a good thing, because fewer languages mean easier communication among the world’s people? Perhaps, but it’s a bad thing in other respects. Languages differ in structure and vocabulary, in how they express causation and feelings and personal responsibility, hence in how they shape our thoughts. There’s no single all-purpose “best” language; instead, different languages are better suited for different purposes.

For instance, it may not have been an accident that Plato and Aristotle wrote in Greek, while Kant wrote in German. The grammatical particles of those two languages, plus their ease in forming compound words, may have helped make them the preeminent languages of western philosophy.

Another example, familiar to all of us who studied Latin, is that highly inflected languages (ones in which word endings suffice to indicate sentence structure) can use variations of word order to convey nuances impossible with English. Our English word order is severely constrained by having to serve as the main clue to sentence structure. If English becomes a world language, that won’t be because English was necessarily the best language for diplomacy.”

— Jared Diamond, American scientist and author, currently Professor of Geography and Physiology at UCLA, The Third Chimpanzee: The Evolution & Future of the Human Animal, Hutchinson Radius, 1991.

See also:

Lists of endangered languages, Wiki
☞ Salikoko S. Mufwene, How Languages Die (pdf), University of Chicago, 2006
☞ K. David Harrison, When Languages Die. The Extinction of the World’s Languages and the Erosion of Human Knowledge (pdf), Oxford University Press, 2007

"It is commonly agreed by linguists and anthropologists that the majority of languages spoken now around the globe will likely disappear within our lifetime. The phenomenon known as language death has started to accelerate as the world has grown smaller. "This extinction of languages, and the knowledge therein, has no parallel in human history. K. David Harrison’s book is the first to focus on the essential question, what is lost when a language dies? What forms of knowledge are embedded in a language’s structure and vocabulary? And how harmful is it to humanity that such knowledge is lost forever?"

Nicholas Ostler on The Last Lingua Franca. English Until the Return of Babel, Lapidarium notes
☞ Henry Hitchings, What’s the language of the future?, Salon, Nov 6, 2011.

Dec
29th
Thu
permalink

Edward Glaeser: ‘Cities Are Making Us More Human’
Illustration: “The elevated sidewalk: How it will solve city transportation problems”, Scientific American, vol. 109 (July-Dec 1913)

"As opposed to the conventional wisdom, Harvard economist Edward Glaeser believes urbanization to be a solution to many unanswered problems, such as pollution, depression and a lack of creativity. (…)

Living around trees and living in low-density areas may actually end up being quite harmful for the environment, whereas living in high-rise buildings and the urban core may end up being quite kind to the environment. (…)

People who live in urban apartments typically use less electricity and less energy for home heating than people who live in larger suburban or rural homes. A single-family detached house uses on average 83% more electricity than an urban apartment does within the United States. (…)

Q: How are cities making us smarter?

Glaeser: I think the most important thing cities do today is to allow the creation of new ideas. Chains of collaborative brilliance have always been responsible for humankind’s greatest hits. We have seen this in cities for millennia – Socrates and Plato bickered on an Athenian street corner; we saw it again in Florence with the ideas that went from Brunelleschi to Donatello to Masaccio to Filippino Lippi and to the Florentine Renaissance. It helps us to know each other, learn from each other and to collectively create something great. In some sense, cities are making us more human.

Our greatest asset as a species is the ability to learn from the people around us. We come out of the womb with this remarkable ability to take in information from those people – parents, peers, teachers – that are near us. Cities enable us to get smart by being around other smart people. I think this explains why cities have not become obsolete over the past thirty years. (…) We have just crossed the half-way point where more than 50% of humanity lives in cities. (…)

Source: Ethan Zuckerman, Desperately Seeking Serendipity, 12.V.2011

These facts are related to the role cities play today, a role very much tied to the generation of information. Globalization and new technologies did make the industrial city obsolete, at least in the West. But they also increased the returns to human capital and innovation. You could sell something on the other side of the planet because you could produce it on the other side of the planet. By making knowledge more valuable, they made cities more important. That is why they continue to play the incredibly important role of connecting people, enabling them to learn from one another at close distances. (…)

I also want to emphasize that cities are often places of significant, and often positive, political change. One thing that those countries need is political change, which is much more likely to come out of an organized urban group than from a dispersed agricultural population. (…)

If you compare countries that are more than 50% urbanized with countries that are less than 50% urbanized, incomes are five times higher in the more urbanized countries and infant mortality rates are less than a third as high. The path of rural poverty really is awful. (…)”

Edward Glaeser, economist at Harvard University, "Cities Are Making Us More Human", The European, 20.12.2011.

Dec
17th
Sat
permalink

Infinite Stupidity. Social evolution may have sculpted us not to be innovators and creators as much as to be copiers


A review of some big events

"Obviously one of the big events in our history was the origin of our planet, about 4.5 billion years ago. And what’s fascinating is that about 3.8 billion years ago, only about seven or eight hundred million years after the origin of our planet, life arose. That life was simple replicators, things that could make copies of themselves. And we think that life was a little bit like the bacteria we see on earth today. It would be the ancestors of the bacteria we see on earth today.

That life ruled the world for 2 billion years, and then about 1.5 billion years ago, a new kind of life emerged. These were the eukaryotic cells. They were a little bit different kind of cell from bacteria. And actually the kind of cells we are made of. And again, these organisms that were eukaryotes were single-celled, so even 1.5 billion years ago, we still just had single-celled organisms on earth. But it was a new kind of life.

It was another 500 million years before we had anything like a multicellular organism, and it was another 500 million years after that before we had anything really very interesting. So, about 500 million years ago, the plants and the animals started to evolve. And I think everybody would agree that this was a major event in the history of the world, because, for the first time, we had complex organisms.

After about 500 million years ago, things like the plants evolved, the fish evolved, lizards and snakes, dinosaurs, birds, and eventually mammals. And then it was really just six or seven million years ago, within the mammals, that the lineage that we now call the hominins arose. And they would be the direct ancestors of us. And then, within that lineage that arose about six or seven million years ago, it was only about 200,000 years ago that humans finally evolved.

Idea of idea evolution

And so, it was really just 99.99 percent of the way through the history of this planet that humans finally arose. But in that 0.01 percent of life on earth, we’ve utterly changed the planet. And the reason is that, with the arrival of humans 200,000 years ago, a new kind of evolution was created. The old genetical evolution that had ruled for 3.8 billion years now had a competitor, and that new kind of evolution was ideas.

It was a true form of evolution, because now ideas could arise, and they could jump from mind to mind, without genes having to change. So, populations of humans could adapt at the level of ideas. Ideas could accumulate. We call this cumulative cultural adaptation. And so, cultural complexity could emerge and arise orders and orders of magnitude faster than genetic evolution.

Now, I think most of us take that utterly for granted, but it has completely rewritten the way life evolves on this planet because, with the arrival of our species, everything changed. Now, a single species, using its idea evolution, which could proceed apace independently of genes, was able to adapt to nearly every environment on earth, and spread around the world where no other species had done that. All other species are limited to places on earth that their genes adapt them to. But we were able to adapt at the level of our cultures to every place on earth. (…)

If we go back in our lineage 2 million years or so, there was a species known as Homo erectus. Homo erectus was an upright ape that lived on the African savannah. It could make tools, but they were very limited tools, and those tools, the archaeological record tells us, didn’t change for about 1.5 million years - that is, until about the time they went extinct. They made the same tools over and over and over again, without any real changes to them.

If we move forward in time a little bit, it’s not even clear that our very close cousins the Neanderthals - who we know are related to us in 99.5 or 99.6 percent of the sequences of their genes - had what we call idea evolution. Sure enough, the tools that they made were more complex than those of Homo erectus. But over the 300,000 or so years that they spent in Europe, their toolkit barely changed. So there’s very little evolution going on.

So there’s something really very special about this new species, humans, that arose and invented this new kind of evolution, based on ideas. And so it’s useful for us to ask, what is it about humans that distinguishes them? It must have been a tiny genetic difference between us and the Neanderthals because, as I said, we’re so closely related to them genetically, a tiny genetic difference that had a vast cultural potential.

That difference is something that anthropologists and archaeologists call social learning. It’s a very difficult concept to define, but when we talk about it, all of us humans know what it means. And it seems to be the case that only humans have the capacity to learn complex new or novel behaviors, simply by watching and imitating others. And there seems to be a second component to it, which is that we seem to be able to get inside the minds of other people who are doing things in front of us, and understand why it is they’re doing those things. These two things together, we call social learning.

Many people respond that, oh, of course the other animals can do social learning, because we know that the chimpanzees can imitate each other, and we see all sorts of learning in animals like dolphins and the other monkeys, and so on. But the key point about social learning is that this minor difference between us and the other species forms an unbridgeable gap between us and them. Because, whereas all of the other animals can pick up the odd behavior by having their attention called to something, only humans seem to be able to select, among a range of alternatives, the best one, and then to build on that alternative, and to adapt it, and to improve upon it. And so, our cultures cumulatively adapt, whereas all other animals seem to do the same thing over and over and over again.

Even though other animals can learn, and they can even learn in social situations, only humans seem to be able to put these things together and do real social learning. And that has led to this idea evolution. What’s a tiny difference between us genetically has opened up an unbridgeable gap, because only humans have been able to achieve this cumulative cultural adaptation. (…)

I’m interested in this because I think this capacity for social learning, which we associate with our intelligence, has actually sculpted us in ways that we would have never anticipated. And I want to talk about two of those ways that I think it has sculpted us. One of the ways has to do with our creativity, and the other has to do with the nature of our intelligence as social animals.

One of the first things to be aware of when talking about social learning is that it plays the same role within our societies, acting on ideas, as natural selection plays within populations of genes. Natural selection is a way of sorting among a range of genetic alternatives, and finding the best one. Social learning is a way of sifting among a range of alternative options or ideas, and choosing the best one of those. And so, we see a direct comparison between social learning driving idea evolution, by selecting the best ideas — we copy people that we think are successful, we copy good ideas, and we try to improve upon them — and natural selection, driving genetic evolution within societies, or within populations.

I think this analogy needs to be taken very seriously, because just as natural selection has acted on genetic populations, and sculpted them, we’ll see how social learning has acted on human populations and sculpted them.

What do I mean by “sculpted them”? Well, I mean that it’s changed the way we are. And here’s one reason why. If we think that humans have evolved as social learners, we might be surprised to find out that being social learners has made us less intelligent than we might like to think we are. And here’s the reason why.

If I’m living in a population of people, and I can observe those people, and see what they’re doing, seeing what innovations they’re coming up with, I can choose among the best of those ideas, without having to go through the process of innovation myself. So, for example, if I’m trying to make a better spear, I really have no idea how to make that better spear. But if I notice that somebody else in my society has made a very good spear, I can simply copy him without having to understand why.

What this means is that social learning may have set up a situation in humans where, over the last 200,000 years or so, we have been selected to be very, very good at copying other people, rather than innovating on our own. We like to think we’re a highly inventive, innovative species. But social learning means that most of us can make use of what other people do, and not have to invest the time and energy in innovation ourselves.

Now, why wouldn’t we want to do that? Why wouldn’t we want to innovate on our own? Well, innovation is difficult. It takes time. It takes energy. Most of the things we try to do, we get wrong. And so, if we can survey, if we can sift among a range of alternatives of people in our population, and choose the best one that’s going at any particular moment, we don’t have to pay the costs of innovation, the time and energy ourselves. And so, we may have had strong selection in our past to be followers, to be copiers, rather than innovators.

This gives us a whole new slant on what it means to be human, and I think, in many ways, it might fit with some things that we realize are true about ourselves when we really look inside ourselves. We can all think of things that have made a difference in the history of life. The first hand axe, the first spear, the first bow and arrow, and so on. And we can ask ourselves, how many of us have had an idea that would have changed humanity? And I think most of us would say, well, that sets the bar rather high. I haven’t had an idea that would change humanity. So let’s lower the bar a little bit and say, how many of us have had an idea that maybe just influenced others around us, something that others would want to copy? And I think even then, very few of us can say there have been very many things we’ve invented that others would want to copy.

This says to us that social evolution may have sculpted us not to be innovators and creators as much as to be copiers, because this extremely efficient process that social learning allows us to do, of sifting among a range of alternatives, means that most of us can get by drawing on the inventions of others.

The formation of social groups

Now, why do I talk about this? It sounds like it could be a somewhat dry subject, that maybe most of us are copiers or followers rather than innovators. And what we want to do is imagine that our history over the last 200,000 years has been a history of slowly and slowly and slowly living in larger and larger and larger groups.

Early on in our history, it’s thought that most of us lived in bands of maybe five to 25 people, and that bands formed bands of bands that we might call tribes. And maybe tribes were 150 people or so. And then tribes gave way to chiefdoms that might have been thousands of people. And chiefdoms eventually gave way to nation-states that might have been tens of thousands or even hundreds of thousands, or millions, of people. And so, our evolutionary history has been one of living in larger and larger and larger social groups.

What I want to suggest is that that evolutionary history will have selected for less and less and less innovation in individuals, because a little bit of innovation goes a long way. If we imagine that there’s some small probability that someone is a creator or an innovator, and the rest of us are followers, we can see that one or two people in a band is enough for the rest of us to copy, and so we can get on fine. And, because social learning is so efficient and so rapid, we don’t need all to be innovators. We can copy the best innovations, and all of us benefit from those.

But now let’s move to a slightly larger social group. Do we need more innovators in a larger social group? Well, no. The answer is, we probably don’t. We probably don’t need as many as we need in a band. Because in a small band, we need a few innovators to get by. We have to have enough new ideas coming along. But in a larger group, a small number of people will do. We don’t have to scale it up. We don’t have to have 50 innovators where we had five in the band, if we move up to a tribe. We can still get by with those three or four or five innovators, because all of us in that larger social group can take advantage of their innovations.
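Pagel’s scaling argument can be made concrete with a toy model. The sketch below is an editorial illustration, not anything from the talk itself: it assumes a fixed handful of innovators generating ideas of random quality, and that copying spreads the best idea instantly to every group member. Under those assumptions, the payoff per person is essentially independent of group size, which is why a band of 25 and a chiefdom of 10,000 can get by with the same few innovators.

    # Toy model (illustrative only, not from Pagel's talk): a fixed
    # handful of innovators generate ideas of random quality, and
    # everyone else copies the best idea produced so far.
    import random

    def best_idea(n_innovators, attempts_each=100):
        draws = (random.random()
                 for _ in range(n_innovators * attempts_each))
        return max(draws)  # quality of the best idea found

    random.seed(1)
    for group_size in (25, 150, 10_000):
        # Copying spreads the best idea instantly, so each member's
        # payoff does not depend on how large the group is.
        payoff = best_idea(n_innovators=5)
        print(group_size, round(payoff, 3))
    # Prints roughly the same payoff (~0.99) at every group size:
    # five innovators go a long way once copying is nearly free.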

Language is the way we exchange ideas

And here we can see a very prominent role for language. Language is the way we exchange ideas. And our eyes allow us to see innovations and language allows us to exchange ideas. And language can operate in a larger society, just as efficiently as it can operate in a small society. It can jump across that society in an instant.

You can see where I’m going. As our societies get larger and larger, there’s no need, in fact, there’s even less of a need for any one of us to be an innovator, whereas there is a great advantage for most of us to be copiers, or followers. And so, a real worry is that our capacity for social learning, which is responsible for all of our cumulative cultural adaptation, all of the things we see around us in our everyday lives, has actually promoted a species that isn’t so good at innovation. It allows us to reflect on ourselves a little bit and say, maybe we’re not as creative and as imaginative and as innovative as we thought we were, but extraordinarily good at copying and following.

If we apply this to our everyday lives and we ask ourselves, do we know the answers to the most important questions in our lives? Should you buy a particular house? What mortgage product should you have? Should you buy a particular car? Who should you marry? What sort of job should you take? What kind of activities should you do? What kind of holidays should you take? We don’t know the answers to most of those things. And if we really were the deeply intelligent and imaginative and innovative species that we thought we were, we might know the answers to those things.

And if we ask ourselves how it is we come across the answers, or acquire the answers to many of those questions, most of us realize that we do what everybody else is doing. This herd instinct, I think, might be an extremely fundamental part of our psychology that was perhaps an unexpected and unintended, you might say, byproduct of our capacity for social learning, that we’re very, very good at being followers rather than leaders. A small number of leaders or innovators or creative people is enough for our societies to get by.

Now, the reason this might be interesting is that, as the world becomes more and more connected, as the Internet connects us and wires us all up, we can see that the long-term consequences of this is that humanity is moving in a direction where we need fewer and fewer and fewer innovative people, because now an innovation that you have somewhere on one corner of the earth can instantly travel to another corner of the earth, in a way that it would have never been possible to do 10 years ago, 50 years ago, 500 years ago, and so on. And so, we might see that there has been this tendency for our psychology and our humanity to be less and less innovative, at a time when, in fact, we may need to be more and more innovative, if we’re going to be able to survive the vast numbers of people on this earth.

That’s one consequence of social learning, that it has sculpted us to be very shrewd and intelligent at copying, but perhaps less shrewd at innovation and creativity than we’d like to think. Few of us are as creative as we’d like to think we are. I think that’s been one perhaps unexpected consequence of social learning.

Another side of social learning I’ve been thinking about - it’s a bit abstract, but I think it’s a fascinating one - goes back again to this analogy between natural selection, acting on genetic variation, and social learning, acting on variation in ideas. And any evolutionary process like that has to have both a sorting mechanism, natural selection, and what you might call a generative mechanism, a mechanism that can create variety.

We all know what that mechanism is in genes. We call it mutation, and we know that from parents to offspring, genes can change, genes can mutate. And that creates the variety that natural selection acts on. And one of the most remarkable stories of nature is that natural selection, acting on this mindlessly-generated genetic variation, is able to find the best solution among many, and successively add those solutions, one on top of the other. And through this extraordinarily simple and mindless process, create things of unimaginable complexity. Things like our cells, eyes and brains and hearts, and livers, and so on. Things of unimaginable complexity, that we don’t even understand and none of us could design. But they were designed by natural selection.

Where do ideas come from?

Now let’s take this analogy of a mindless process - the parallel between social learning driving evolution at the idea level and natural selection driving evolution at the genetic level - and ask what it means for the generative mechanism in our brains.

Well, where do ideas come from? For social learning to be a sorting process that has varieties to act on, we have to have a variety of ideas. And where do those new ideas come from?

The idea that I’ve been thinking about, which I think is worth contemplating about our own minds, is: what is the generative mechanism? If we do have any creativity at all and we are innovative in some ways, what’s the nature of that generative mechanism for creating new ideas?

This is a question that’s been asked for decades. What is the nature of the creative process? Where do ideas come from? And let’s go back to genetic evolution and remember that, there, the generative mechanism is random mutation.

Now, what do we think the generative mechanism is for idea evolution? Do we think it’s random mutation of some sort, of ideas? Well, all of us think that it’s better than that. All of us think that somehow we can come up with good ideas in our minds. And whereas natural selection has to act on random variation, social learning must be acting on directed variation. We know what direction we’re going.

But, we can go back to our earlier discussion of social learning, and ask the question, well, if you were designing a new hand axe, or a new spear, or a new bow and a new arrow, would you really know how to make a spear fly better? Would you really know how to make a bow a better bow? Would you really know how to shape an arrowhead so that it penetrated its prey better? And I think most of us realize that we probably don’t know the answers to those questions. And that suggests to us that maybe our own creative process rests on a generative mechanism that isn’t very much better than random itself.

And I want to go further, and suggest that our mechanism for generating ideas maybe couldn’t even be much better than random itself. And this really gives us a different view of ourselves as intelligent organisms. Rather than thinking that we know the answers to everything, could it be the case that the mechanism that our brain uses for coming up with new ideas is a little bit like the mechanism that our genes use for coming up with new genetic variants, which is to randomly mutate ideas that we have, or to randomly mutate genes that we have.

Now, it sounds incredible. It sounds insane. It sounds mad. Because we think of ourselves as so intelligent. But when we really ask ourselves about the nature of any evolutionary process, we have to ask ourselves whether it could be any better than random, because in fact, random might be the best strategy.

Genes could never possibly know how to mutate themselves, because they could never anticipate the direction the world was going. No gene knows that we’re having global warming at the moment. No gene knew 200,000 years ago that humans were going to evolve culture. Well, the best strategy for any exploratory mechanism, when we don’t know the nature of the processes we’re exploring, is to throw out random attempts at understanding that field or that space we’re trying to explore.

And I want to suggest that the creative process inside our brains, which relies on social learning, that creative process itself never could have possibly anticipated where we were going as human beings. It couldn’t have anticipated 200,000 years ago that, you know, a mere 200,000 years later, we’d have space shuttles and iPods and microwave ovens.

What I want to suggest is that any process of evolution that relies on exploring an unknown space, such as genes or such as our neurons exploring the unknown space in our brains, and trying to create connections in our brains, and such as our brain’s trying to come up with new ideas that explore the space of alternatives that will lead us to what we call creativity in our social world, might be very close to random.

We know they’re random in the genetic case. We think they’re random in the case of neurons exploring connections in our brain. And I want to suggest that our own creative process might be pretty close to random itself. And that our brains might be whirring around at a subconscious level, creating ideas over and over and over again, and part of our subconscious mind is testing those ideas. And the ones that leak into our consciousness might feel like they’re well-formed, but they might have sorted through literally a random array of ideas before they got to our consciousness.

Karl Popper famously said the way we differ from other animals is that our hypotheses die in our stead; rather than going out and actually having to try out things, and maybe dying as a result, we can test out ideas in our minds. But what I want to suggest is that the generative process itself might be pretty close to random.
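In computational terms, what Pagel describes here is a generate-and-test loop: blind variation filtered by selective retention. A minimal sketch, with a made-up scoring function standing in for the subconscious “testing” he mentions, shows that a purely random generator still makes steady progress once a filter is attached.

    # Generate-and-test: the generator is blind; all the apparent
    # intelligence lives in the test that filters the variants.
    import random

    def test(idea):
        # Hypothetical score standing in for subconscious testing:
        # ideas closer to 0.75 count as "better".
        return -abs(idea - 0.75)

    random.seed(0)
    idea = random.random()
    for _ in range(1000):
        candidate = idea + random.gauss(0, 0.1)  # random mutation
        if test(candidate) > test(idea):         # keep improvements only
            idea = candidate
    print(round(idea, 3))  # ends up near 0.75 despite blind variation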

Putting these two things together has lots of implications for where we’re going as societies. As I say, as our societies get bigger, and rely more and more on the Internet, fewer and fewer of us have to be very good at these creative and imaginative processes. And so, humanity might be moving towards becoming more docile, more oriented towards following, copying others, prone to fads, prone to going down blind alleys, because part of our evolutionary history that we could have never anticipated was leading us towards making use of the small number of other innovations that people come up with, rather than having to produce them ourselves.

The interesting thing with Facebook is that, with 500 to 800 million of us connected around the world, it sort of devalues information and devalues knowledge. And this isn’t the comment of some reactionary who doesn’t like Facebook, but it’s rather the comment of someone who realizes that knowledge and new ideas are extraordinarily hard to come by. And as we’re more and more connected to each other, there’s more and more to copy. We realize the value in copying, and so that’s what we do.

And we seek out that information in cheaper and cheaper ways. We go up on Google, we go up on Facebook, see who’s doing what to whom. We go up on Google and find out the answers to things. And what that’s telling us is that knowledge and new ideas are cheap. And it’s playing into a set of predispositions that we have been selected to have anyway, to be copiers and to be followers. But at no time in history has it been easier to do that than now. And Facebook is encouraging that.

And then, as corporations grow … and we can see corporations as sort of microcosms of societies … as corporations grow and acquire the ability to acquire other corporations, a similar thing is happening, is that, rather than corporations wanting to spend the time and the energy to create new ideas, they want to simply acquire other companies, so that they can have their new ideas. And that just tells us again how precious these ideas are, and the lengths to which people will go to acquire those ideas.

A tiny number of ideas can go a long way, as we’ve seen. And the Internet makes that more and more likely. What’s happening is that we might, in fact, be at a time in our history where we’re being domesticated by these great big societal things, such as Facebook and the Internet. We’re being domesticated by them, because fewer and fewer and fewer of us have to be innovators to get by. And so, in the cold calculus of evolution by natural selection, copiers are probably now doing better than innovators, more so than at any time in history. Because innovation is extraordinarily hard. My worry is that we could be moving in that direction, towards becoming more and more sort of docile copiers.

But, these ideas, I think, are received with incredulity, because humans like to think of themselves as highly shrewd and intelligent and innovative people. But I think what we have to realize is that it’s even possible that, as I say, the generative mechanisms we have for coming up with new ideas are no better than random.

And a really fascinating idea itself is to consider that even the great people in history whom we associate with great ideas might be no more than we expect by chance. I’ll explain that. Einstein was once asked about his intelligence and he said, “I’m no more intelligent than the next guy. I’m just more curious.” Now, we can grant Einstein that little indulgence, because we think he was a pretty clever guy.

What does curiosity mean?

But let’s take him at his word and say, what does curiosity mean? Well, maybe curiosity means trying out all sorts of ideas in your mind. Maybe curiosity is a passion for trying out ideas. Maybe Einstein’s ideas were just as random as everybody else’s, but he kept persisting at them.

And if we say that everybody has some tiny probability of being the next Einstein, and we look at a billion people, there will be somebody who just by chance is the next Einstein. And so, we might even wonder if the people in our history and in our lives that we say are the great innovators really are more innovative, or are just lucky.
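The arithmetic behind that thought is straightforward. If each person independently has some small probability p of being “the next Einstein”, the expected number among n people is n times p. The probability below is purely illustrative, but it shows that a one-in-a-billion talent yields one expected Einstein per billion people, and better-than-even odds of at least one.

    # Expected "Einsteins" by chance alone, assuming independence.
    # p is an illustrative guess, not an estimate from any data.
    p = 1e-9               # one-in-a-billion chance per person
    n = 1_000_000_000      # a billion people
    expected = n * p                    # expected count: 1.0
    p_at_least_one = 1 - (1 - p) ** n   # ~0.63, i.e. about 1 - 1/e
    print(expected, round(p_at_least_one, 2))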

Now, the evolutionary argument is that our populations have always supported a small number of truly innovative people, and they’re somehow different from the rest of us. But it might even be the case that that small number of innovators just got lucky. And this is something that I think very few people will accept. They’ll receive it with incredulity. But I like to think of it as what I call social learning and, maybe, the possibility that we are infinitely stupid.”

Mark Pagel, Professor of Evolutionary Biology, Reading University, England and The Santa Fe Institute, Infinite Stupidity, Edge, Dec 16, 2011 (Illustration by John S. Dykes)

See also:

☞ Mark Pagel: How language transformed humanity



Biologist Mark Pagel shares an intriguing theory about why humans evolved our complex system of language. He suggests that language is a piece of “social technology” that allowed early human tribes to access a powerful new tool: cooperation. Mark Pagel: How language transformed humanity, TED.com, July 2011

The Kaleidoscopic Discovery Engine. ‘All scientific discoveries are in principle ‘multiples’’
Neal Gabler on The Elusive Big Idea - ‘We are living in a post ideas world where bold ideas are almost passé’

Oct
26th
Wed
permalink

Researchers find a country’s wealth correlates with its collective knowledge

"What causes the large gap between rich and poor countries has been a long-debated question. Previous research has found some correlation between a nation’s economic prosperity and factors such as how the country is governed, the average amount of formal education each individual receives, and the country’s overall competiveness. But now a team of researchers from Harvard and MIT has discovered that a new measure based on a country’s collective knowledge can account for the enormous income differences between the nations of the world better than any other factor. (…)

A country’s economy can be measured by a factor they call “economic complexity.” From this perspective, the more diverse and specialized jobs a country’s citizens have, the greater the country’s ability to produce complex products that few other countries can produce, making the country more prosperous.

“The total amount of knowledge embedded in a hunter-gatherer society is not very different from that which is embedded in each one of its members,” the researchers write in their book. “The secret of modern societies is not that each person holds much more productive knowledge than those in a more traditional society. The secret to modernity is that we collectively use large volumes of knowledge, while each one of us holds only a few bits of it. Society functions because its members form webs that allow them to specialize and share their knowledge with others.” (…)

Getting poorer countries to begin producing more complex products is not as simple as offering individuals a formal education in which they learn facts and figures - what the authors refer to as “explicit” knowledge. Instead, the most productive knowledge is the “tacit” kind (for example, how to run a business), which is much harder to teach. For this reason, countries tend to expand their production capabilities by moving from the products they already produce to others that require a similar set of embedded knowledge capabilities.”

— Lisa Zyga, Researchers find a country’s wealth correlates with its collective knowledge, Physorg, Oct 26, 2011. (Illustration: This network shows the product space of the US. Image credit: The Atlas of Economic Complexity)

“The essential theory … is that countries grow based on the knowledge of making things,” Mr. Hausmann said in a phone interview. “It’s not years of schooling. It’s what are the products that you know how to make. And what drives growth is the difference between how much knowledge you have and how rich you are.”

Thus, nations with extensive productive knowledge but relatively little wealth haven’t met their potential, and will eventually catch up, Mr. Hausmann said. Those countries will experience the most growth through 2020, according to the report.

That bodes well for China, which tops the list of expected growth in per-capita gross domestic product. According to the method outlined in the report, China’s growth in GDP per capita will be 4.32% through 2020. India and Thailand are second and third, respectively.

The U.S., however, is ranked 91, with expected growth in per-capita GDP at 2.01%. “The U.S. is very rich already and has a lot of productive knowledge, but it doesn’t have an excess of productive knowledge relative to its income,” Mr. Hausmann said.

The method, when applied to the years 1999-2009, proved to be much more accurate at predicting future growth than other existing methods, including the World Economic Forum’s Global Competitiveness Index, according to the report.”

— Josh Mitchell, ‘Complexity’ Predicts Nations’ Future Growth, The Wall Street Journal, Oct 26, 2011
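The measure behind these reports is computable from trade data alone. Hidalgo and Hausmann’s “method of reflections” starts from a binary country-product export matrix and iterates between a country’s diversity (how many products it exports) and its products’ ubiquity (how many countries export them). The sketch below runs that iteration on a made-up three-country matrix; the toy numbers are illustrative, not the Atlas’s data.

    # Method of reflections on a toy country-by-product matrix M,
    # where M[c][p] = 1 if country c exports product p competitively.
    import numpy as np

    M = np.array([[1, 1, 1, 1],   # diversified country
                  [1, 1, 0, 0],
                  [1, 0, 0, 0]])  # exports only the most common product

    diversity = M.sum(axis=1).astype(float)  # k_c,0: products per country
    ubiquity = M.sum(axis=0).astype(float)   # k_p,0: countries per product

    k_c, k_p = diversity.copy(), ubiquity.copy()
    for _ in range(4):  # each pass averages the other side's last values
        k_c, k_p = (M @ k_p) / diversity, (M.T @ k_c) / ubiquity

    print(k_c)  # after an even number of reflections, higher values mean
                # more complex economies: the diversified country leads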

See also:

"The Atlas of Economic Complexity” (pdf). The 364-page report, a study led by Harvard’s Ricardo Hausmann and MIT’s Cesar A. Hidalgo, is the culmination of nearly five years of research by a team of economists at Harvard’s Center for International Development.
Economic inequality, Wiki
☞ Heiner Rindermann and James Thompson, Cognitive Capitalism: The Effect of Cognitive Ability on Wealth, as Mediated Through Scientific Achievement and Economic Freedom (pdf), Chemnitz University of Technology, University College London, 2011.

"Traditional economic theories stress the relevance of political, institutional, geographic, and historical factors for economic growth. In contrast, human-capital theories suggest that peoples’ competences, mediated by technological progress, are the deciding factor in a nation’s wealth. Using three large-scale assessments, we calculated cognitive-competence sums for the mean and for upper- and lower-level groups for 90 countries and compared the influence of each group’s intellectual ability on gross domestic product. In our cross-national analyses, we applied different statistical methods (path analyses, bootstrapping) and measures developed by different research groups to various country samples and historical periods.

Our results underscore the decisive relevance of cognitive ability—particularly of an intellectual class with high cognitive ability and accomplishments in science, technology, engineering, and math—for national wealth. Furthermore, this group’s cognitive ability predicts the quality of economic and political institutions, which further determines the economic affluence of the nation. Cognitive resources enable the evolution of capitalism and the rise of wealth.”

Oct
25th
Tue
permalink

Iain McGilchrist on The Divided Brain and the Making of the Western World


"Just as the human body represents a whole museum of organs, with a long evolutionary history behind them, so we should expect the mind to be organized in a similar way. (…) We receive along with our body a highly differentiated brain which brings with it its entire history, and when it becomes creative it creates out of this history – out of the history of mankind (…) that age-old natural history which has been transmitted in living form since the remotest times, namely the history of the brain structure."

Carl Jung cited in The Master and His Emissary, Yale University Press, 2009, p.8.

Renowned psychiatrist and writer Iain McGilchrist explains how the ‘divided brain’ has profoundly altered human behaviour, culture and society. He draws on a vast body of recent experimental brain research to reveal that the differences between the brain’s two hemispheres are profound.

The left hemisphere is detail-oriented, prefers mechanisms to living things, and is inclined to self-interest. It misunderstands whatever is not explicit, lacks empathy and is unreasonably certain of itself, whereas the right hemisphere has greater breadth, flexibility and generosity, but lacks certainty.

It is vital that the two hemispheres work together, but McGilchrist argues that the left hemisphere is increasingly taking precedence in the modern world, resulting in a society where a rigid and bureaucratic obsession with structure and self-interest holds sway.

RSA, 17th Nov 2010

Iain McGilchrist points out that the idea that “reason [is] in the left hemisphere and something like creativity and emotion [are] in the right hemisphere” is an unhelpful misconception. He states that “every single brain function is carried out by both hemispheres. Reason and emotion and imagination depend on the coming together of what both hemispheres contribute.” Nevertheless he does see an obvious dichotomy, and asks himself: “if the brain is all about making connections, why is it that it’s evolved with this whopping divide down the middle?”

Natasha Mitchell, "The Master and his Emissary: the divided brain and the reshaping of Western civilisation", 19 June 2010


"The author holds instead that each of the hemispheres of the brain has a different “take” on the world or produces a different “version” of the world, though under normal circumstances these work together. This, he says, is basically to do with attention. He illustrates this with the case of chicks which use the eye connected to the left hemisphere to attend to the fine detail of picking seeds from amongst grit, whilst the other eye attends to the broader threat from predators. According to the author, “The left hemisphere has its own agenda, to manipulate and use the world”; its world view is essentially that of a mechanism. The right has a broader outlook, “has no preconceptions, and simply looks out to the world for whatever might be. In other words it does not have any allegiance to any particular set of values.”

Staff, "Two worlds of the left and right brain (audio podcast)", BBC Radio 4, 14 November 2009

McGilchrist explains this more fully in a later interview for ABC Radio National’s All in the Mind programme, stating: “The right hemisphere sees a great deal, but in order to refine it, and to make sense of it in certain ways — in order to be able to use what it understands of the world and to be able to manipulate the world — it needs to delegate the job of simplifying it and turning it into a usable form to another part of the brain” [the left hemisphere]. Though he sees this as an essential “double act”, McGilchrist points to the problem that the left hemisphere has a “narrow, decontextualised and theoretically based model of the world which is self consistent and is therefore quite powerful” and to the problem of the left hemisphere’s lack of awareness of its own shortcomings; whilst in contrast, the right hemisphere is aware that it is in a symbiotic relationship.

How the brain has shaped our world

"The author describes the evolution of Western culture, as influenced by hemispheric brain functioning, from the ancient world, through the Renaissance and Reformation; the Enlightenment; Romanticism and Industrial Revolution; to the modern and postmodern worlds which, to our detriment, are becoming increasingly dominated by the left brain. According to McGilchrist, interviewed for ABC Radio National’s All in the Mind programme, rather than seeking to explain the social and cultural changes and structure of civilisation in terms of the brain — which would be reductionist — he is pointing to a wider, more inclusive perspective and greater reality in which there are two competing ways of thinking and being, and that in modern Western society we appear increasingly to be able to only entertain one viewpoint: that of the left hemisphere.

The author argues that the brain and the mind do not simply experience the world, but that the world we experience is a product or meeting of that which is outside us with our mind. The outcome, the nature of this world, is thus dependent upon “which mode of attention we bring to bear on the world”.

McGilchrist sees an occasional flowering of “the best of the right hemisphere and the best of the left hemisphere working together” in our history: as witnessed in Athens in the sixth century BC, with its activity in the humanities and in science, and in ancient Rome during the Augustan era. However, he also sees that as time passes, the left hemisphere once again comes to dominate affairs and things slide back into “a more theoretical and conceptualised abstracted bureaucratic sort of view of the world”. According to McGilchrist, the cooperative use of both left and right hemispheres diminished and became imbalanced in favour of the left in the time of the classical Greek philosophers Parmenides and Plato and in the late classical Roman era. This cooperation and openness were regained during the Renaissance 1,000 years later, which brought a “sudden efflorescence of creative life in the sciences and the arts”. However, with the Reformation, the early Enlightenment, and what has followed as rationalism has arisen, our world has once again become increasingly rigid, simplified and rule-bound.

Looking at more recent Western history, McGilchrist sees in the Industrial Revolution that for the first time artefacts were being made “very much to the way the left hemisphere sees the world — simple solids that are regular, repeated, not individual in the way that things that are made by hand are” and that a transformation of the environment in a similar vein followed on from that; that what was perceived inwardly was projected outwardly on a mass scale. The author argues that the scientific materialism which developed in the 19th century is still with us, at least in the biological sciences, though he sees physics as having moved on.

McGilchrist does not see modernism and postmodernism as being in opposition to this, but also “symptomatic of a shift towards the left hemisphere’s conception of the world”, taking the idea that there is no absolute truth and turning that into “there is no truth at all”, and he finds some of the movements’ works of art “symptomatic of people whose right hemisphere is not working very well.” McGilchrist cites the American psychologist Louis Sass, author of Madness and Modernism, pointing out that Sass “draws extensive parallels between the phenomena of modernism and postmodernism and of schizophrenia”, with things taken out of context and fragmented.”

The Master and His Emissary, Wiki

The Master and His Emissary

Whatever the relationship between consciousness and the brain – unless the brain plays no role in bringing the world as we experience it into being, a position that must have few adherents – its structure has to be significant. It might even give us clues to understanding the structure of the world it mediates, the world we know. So, to ask a very simple question, why is the brain so clearly and profoundly divided? Why, for that matter, are the two cerebral hemispheres asymmetrical? Do they really differ in any important sense? If so, in what way? (…)

Enthusiasm for finding the key to hemisphere differences has waned, and it is no longer respectable for a neuroscientist to hypothesise on the subject. (…)

These beliefs could, without much violence to the facts, be characterised as versions of the idea that the left hemisphere is somehow gritty, rational, realistic but dull, and the right hemisphere airy-fairy and impressionistic, but creative and exciting; a formulation reminiscent of Sellar and Yeatman’s immortal distinction (in their parody of English history teaching, 1066 and All That) between the Roundheads – ‘Right and Repulsive’ – and the Cavaliers – ‘Wrong but Wromantic’. In reality, both hemispheres are crucially involved in reason, just as they are in language; both hemispheres play their part in creativity. Perhaps the most absurd of these popular misconceptions is that the left hemisphere, hard-nosed and logical, is somehow male, and the right hemisphere, dreamy and sensitive, is somehow female. (…)

V. S. Ramachandran, another well-known and highly regarded neuroscientist, accepts that the issue of hemisphere difference has been traduced, but concludes: ‘The existence of such a pop culture shouldn’t cloud the main issue – the notion that the two hemispheres may indeed be specialised for different functions. (…)

I believe there is, literally, a world of difference between the hemispheres. Understanding quite what that is has involved a journey through many apparently unrelated areas: not just neurology and psychology, but philosophy, literature and the arts, and even, to some extent, archaeology and anthropology. (…)

I have come to believe that the cerebral hemispheres differ in ways that have meaning. There is a plethora of well-substantiated findings that indicate that there are consistent differences – neuropsychological, anatomical, physiological and chemical, amongst others – between the hemispheres. But when I talk of ‘meaning’, it is not just that I believe there to be a coherent pattern to these differences. That is a necessary first step. I would go further, however, and suggest that such a coherent pattern of differences helps to explain aspects of human experience, and therefore means something in terms of our lives, and even helps explain the trajectory of our common lives in the Western world.

My thesis is that for us as human beings there are two fundamentally opposed realities, two different modes of experience; that each is of ultimate importance in bringing about the recognisably human world; and that their difference is rooted in the bihemispheric structure of the brain. It follows that the hemispheres need to co-operate, but I believe they are in fact involved in a sort of power struggle, and that this explains many aspects of contemporary Western culture. (…)

The brain has evolved, like the body in which it sits, and is in the process of evolving. But the evolution of the brain is different from the evolution of the body. In the brain, unlike in most other human organs, later developments do not so much replace earlier ones as add to, and build on top of, them. Thus the cortex, the outer shell that mediates most so-called higher functions of the brain, and certainly those of which we are conscious, arose out of the underlying subcortical structures which are concerned with biological regulation at an unconscious level; and the frontal lobes, the most recently evolved part of the neocortex, which occupy a much bigger part of the brain in humans than in our animal relatives, and which grow forwards from and ‘on top of’ the rest of the cortex, mediate most of the sophisticated activities that mark us out as human – planning, decision making, perspective taking, self-control, and so on. In other words, the structure of the brain reflects its history: as an evolving dynamic system, in which one part evolves out of, and in response to, another. (…)

If there is after all coherence to the way in which the correlates of our experience are grouped and organised in the brain, and we can see these ‘functions’ forming intelligible wholes, corresponding to areas of experience, and see how they relate to one another at the brain level, then this casts some light on the structure and experience of our mental world. In this sense the brain is – in fact it has to be – a metaphor of the world. (…)

I believe that there are two fundamentally opposed realities rooted in the bihemispheric structure of the brain. But the relationship between them is no more symmetrical than that of the chambers of the heart – in fact, less so; more like that of the artist to the critic, or a king to his counsellor.

There is a story in Nietzsche that goes something like this. There was once a wise spiritual master, who was the ruler of a small but prosperous domain, and who was known for his selfless devotion to his people. As his people flourished and grew in number, the bounds of this small domain spread; and with it the need to trust implicitly the emissaries he sent to ensure the safety of its ever more distant parts. It was not just that it was impossible for him personally to order all that needed to be dealt with: as he wisely saw, he needed to keep his distance from, and remain ignorant of, such concerns. And so he nurtured and trained carefully his emissaries, in order that they could be trusted. Eventually, however, his cleverest and most ambitious vizier, the one he most trusted to do his work, began to see himself as the master, and used his position to advance his own wealth and influence. He saw his master’s temperance and forbearance as weakness, not wisdom, and on his missions on the master’s behalf, adopted his mantle as his own – the emissary became contemptuous of his master. And so it came about that the master was usurped, the people were duped, the domain became a tyranny; and eventually it collapsed in ruins.

The meaning of this story is as old as humanity, and resonates far from the sphere of political history. I believe, in fact, that it helps us understand something taking place inside ourselves, inside our very brains, and played out in the cultural history of the West, particularly over the last 500 years or so. (…)

I hold that, like the Master and his emissary in the story, though the cerebral hemispheres should co-operate, they have for some time been in a state of conflict. The subsequent battles between them are recorded in the history of philosophy, and played out in the seismic shifts that characterise the history of Western culture. At present the domain – our civilisation – finds itself in the hands of the vizier, who, however gifted, is effectively an ambitious regional bureaucrat with his own interests at heart. Meanwhile the Master, the one whose wisdom gave the people peace and security, is led away in chains. The Master is betrayed by his emissary.”

Iain McGilchrist, psychiatrist and writer, The Master and His Emissary, Yale University Press, 2009. (Illustrations: 1), 2) Shalmor Avnon Amichay/Y&R Interactive)

Iain McGilchrist: The Divided Brain | RSA animated

RSA, 17th Nov 2010

See also:

☞ Iain McGilchrist, The Battle Between the Brain’s Left and Right Hemispheres, WSJ.com, Jan 2, 2010
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
Mind and Brain tag on Lapidarium notes