Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso







Neal Gabler on The Elusive Big Idea - ‘We are living in a post-idea world where bold ideas are almost passé’

Ideas just aren’t what they used to be. Once upon a time, they could ignite fires of debate, stimulate other thoughts, incite revolutions and fundamentally change the ways we look at and think about the world.

They could penetrate the general culture and make celebrities out of thinkers — notably Albert Einstein, but also Reinhold Niebuhr, Daniel Bell, Betty Friedan, Carl Sagan and Stephen Jay Gould, to name a few. The ideas themselves could even be made famous: for instance, for “the end of ideology,” “the medium is the message,” “the feminine mystique,” “the Big Bang theory,” “the end of history.” A big idea could capture the cover of Time — “Is God Dead?” — and intellectuals like Norman Mailer, William F. Buckley Jr. and Gore Vidal would even occasionally be invited to the couches of late-night talk shows. How long ago that was. (…)

If our ideas seem smaller nowadays, it’s not because we are dumber than our forebears but because we just don’t care as much about ideas as they did. In effect, we are living in an increasingly post-idea world — a world in which big, thought-provoking ideas that can’t instantly be monetized are of so little intrinsic value that fewer people are generating them and fewer outlets are disseminating them, the Internet notwithstanding. Bold ideas are almost passé.

It is no secret, especially here in America, that we live in a post-Enlightenment age in which rationality, science, evidence, logical argument and debate have lost the battle in many sectors, and perhaps even in society generally, to superstition, faith, opinion and orthodoxy. While we continue to make giant technological advances, we may be the first generation to have turned back the epochal clock — to have gone backward intellectually from advanced modes of thinking into old modes of belief. But post-Enlightenment and post-idea, while related, are not exactly the same.

Post-Enlightenment refers to a style of thinking that no longer deploys the techniques of rational thought. Post-idea refers to thinking that is no longer done, regardless of the style. (…)

There is the retreat in universities from the real world, and an encouragement of and reward for the narrowest specialization rather than for daring — for tending potted plants rather than planting forests.

There is the eclipse of the public intellectual in the general media by the pundit who substitutes outrageousness for thoughtfulness, and the concomitant decline of the essay in general-interest magazines. And there is the rise of an increasingly visual culture, especially among the young — a form in which ideas are more difficult to express. (…)

We live in the much-vaunted Age of Information. Courtesy of the Internet, we seem to have immediate access to anything that anyone could ever want to know. We are certainly the most informed generation in history, at least quantitatively. There are trillions upon trillions of bytes out there in the ether — so much to gather and to think about.

And that’s just the point. In the past, we collected information not simply to know things. That was only the beginning. We also collected information to convert it into something larger than facts and ultimately more useful — into ideas that made sense of the information. We sought not just to apprehend the world but to truly comprehend it, which is the primary function of ideas. Great ideas explain the world and one another to us.

Marx pointed out the relationship between the means of production and our social and political systems. Freud taught us to explore our minds as a way of understanding our emotions and behaviors. Einstein rewrote physics. More recently, McLuhan theorized about the nature of modern communication and its effect on modern life. These ideas enabled us to get our minds around our existence and attempt to answer the big, daunting questions of our lives.

But if information was once grist for ideas, over the last decade it has become competition for them. We are like the farmer who has too much wheat to make flour. We are inundated with so much information that we wouldn’t have time to process it even if we wanted to, and most of us don’t want to.

The collection itself is exhausting: what each of our friends is doing at that particular moment and then the next moment and the next one; who Jennifer Aniston is dating right now; which video is going viral on YouTube this hour; what Princess Letizia or Kate Middleton is wearing that day. In effect, we are living within the nimbus of an informational Gresham’s law in which trivial information pushes out significant information, but it is also an ideational Gresham’s law in which information, trivial or not, pushes out ideas.

We prefer knowing to thinking because knowing has more immediate value. It keeps us in the loop, keeps us connected to our friends and our cohort. Ideas are too airy, too impractical, too much work for too little reward. Few talk ideas. Everyone talks information, usually personal information. Where are you going? What are you doing? Whom are you seeing? These are today’s big questions.

It is certainly no accident that the post-idea world has sprung up alongside the social networking world. Even though there are sites and blogs dedicated to ideas, Twitter, Facebook, Myspace, Flickr, etc., the most popular sites on the Web, are basically information exchanges, designed to feed the insatiable information hunger, though this is hardly the kind of information that generates ideas. It is largely useless except insofar as it makes the possessor of the information feel, well, informed. Of course, one could argue that these sites are no different than conversation was for previous generations, and that conversation seldom generated big ideas either, and one would be right. (…)

An artist friend of mine recently lamented that he felt the art world was adrift because there were no longer great critics like Harold Rosenberg and Clement Greenberg to provide theories of art that could fructify the art and energize it. Another friend made a similar argument about politics. While the parties debate how much to cut the budget, he wondered where were the John Rawls and Robert Nozick who could elevate our politics.

One could certainly make the same argument about economics, where John Maynard Keynes remains the center of debate nearly 80 years after propounding his theory of government pump priming. This isn’t to say that the successors of Rosenberg, Rawls and Keynes don’t exist, only that if they do, they are not likely to get traction in a culture that has so little use for ideas, especially big, exciting, dangerous ones, and that’s true whether the ideas come from academics or others who are not part of elite organizations and who challenge the conventional wisdom. All thinkers are victims of information glut, and the ideas of today’s thinkers are also victims of that glut.

But it is especially true of big thinkers in the social sciences like the cognitive psychologist Steven Pinker, who has theorized on everything from the source of language to the role of genetics in human nature, or the biologist Richard Dawkins, who has had big and controversial ideas on everything from selfishness to God, or the psychologist Jonathan Haidt, who has been analyzing different moral systems and drawing fascinating conclusions about the relationship of morality to political beliefs. But because they are scientists and empiricists rather than generalists in the humanities, the place from which ideas were customarily popularized, they suffer a double whammy: not only the whammy against ideas generally but the whammy against science, which is typically regarded in the media as mystifying at best, incomprehensible at worst. A generation ago, these men would have made their way into popular magazines and onto television screens. Now they are crowded out by informational effluvium.

No doubt there will be those who say that the big ideas have migrated to the marketplace, but there is a vast difference between profit-making inventions and intellectually challenging thoughts. Entrepreneurs have plenty of ideas, and some, like Steven P. Jobs of Apple, have come up with some brilliant ideas in the “inventional” sense of the word.

Still, while these ideas may change the way we live, they rarely transform the way we think. They are material, not ideational. It is thinkers who are in short supply, and the situation probably isn’t going to change anytime soon.

We have become information narcissists, so uninterested in anything outside ourselves and our friendship circles or in any tidbit we cannot share with those friends that if a Marx or a Nietzsche were suddenly to appear, blasting his ideas, no one would pay the slightest attention, certainly not the general media, which have learned to service our narcissism.

What the future portends is more and more information — Everests of it. There won’t be anything we won’t know. But there will be no one thinking about it.

Think about that.”

Neal Gabler, a professor, journalist, author, film critic and political commentator, The Elusive Big Idea, The New York Times, August 14, 2011.

See also:

☞ The Kaleidoscopic Discovery Engine. ‘All scientific discoveries are in principle “multiples”’
☞ Mark Pagel, Infinite Stupidity. Social evolution may have sculpted us not to be innovators and creators as much as to be copiers, Edge, Lapidarium, Dec 16, 2011
☞ The Paradox of Contemporary Cultural History. We are clinging as never before to the familiar in matters of style and culture


Republic of Letters ☞ Exploring Correspondence and Intellectual Community in the Early Modern Period (1500-1800)

The Republic of Letters

"Despite the wars and despite different religions. All the sciences, all the arts, thus received mutual assistance in this way: the academies formed this republic. (…) True scholars in each field drew closer the bonds of this great society of minds, spread everywhere and everywhere independent. This correspondence still remains; it is one of the consolations for the evils that ambition and politics spread across the Earth."

Voltaire, Le Siècle de Louis XIV cited in Dena Goodman, The Republic of letters: a cultural history of the French enlightenment, Cornell University Press, 1996, p. 20

Republic of Letters (Respublica literaria) is most commonly used to define intellectual communities in the late 17th and 18th centuries in Europe and America. It especially brought together the intellectuals of the Age of Enlightenment, or “philosophes,” as they were called in France. The Republic of Letters emerged in the 17th century as a self-proclaimed community of scholars and literary figures that stretched across national boundaries but respected differences in language and culture. These communities that transcended national boundaries formed the basis of a metaphysical Republic. (…)

As is evident from the term, the circulation of handwritten letters was necessary for its function because it enabled intellectuals to correspond with each other from great distances. All citizens of the 17th-century Republic of Letters corresponded by letter, exchanged published papers and pamphlets, and considered it their duty to bring others into the Republic through the expansion of correspondence.” (Wiki)

"[It] organized itself around cultural institutions (e.g. museums, libraries, academies) and research projects that collected, sorted, and dispersed knowledge. A pre-disciplinary community in which most of the modern disciplines developed, it was the ancestor to a wide range of intellectual societies from the seventeenth-century salons and eighteenth-century coffeehouses to the scientific academy or learned society and the modern research university.

Forged in the humanist culture of learning that promoted the ancient ideal of the republic as the place for free and continuous exchange of knowledge, the Republic of Letters was simultaneously an imagined community (a scholar’s utopia where differences, in theory, would not matter), an information network, and a dynamic platform from which a wide variety of intellectual projects – many of them with important ramifications for society, politics, and religion – were proposed, vetted, and executed. (…)

The Republic of Letters existed for almost four hundred years. Its scope encompassed all of Europe, but reached well beyond this region as western Europeans had more regular contact with and presence in Russia, Asia, Africa, and the Americas. In the sixteenth and seventeenth centuries, merchants and missionaries helped to create global information networks and colonial outposts that transformed the geography of the Republic of Letters. By the eighteenth century we can speak of a trans-Atlantic republic of letters shaped by central figures such as Franklin and many others, north and south, who wrote and traveled across the Atlantic.”

"Recent scholarship has established that intellectuals across Europe came to see themselves, in the sixteenth, seventeenth and eighteenth centuries, as citizens of a transnational intellectual society—a Republic of Letters in which speech was free, rank depended on ability and achievement rather than birth, and scholars, philosophers and scientists could find common ground in intellectual inquiry even if they followed different faiths and belonged to different nations.”

— Anthony Grafton, Republic of Letters introduction, Stanford University

Republic of Letters Project


Researchers map thousands of letters exchanged in the 18th century’s “Republic of Letters” and learn at a glance what it once took a lifetime of study to comprehend.

Mapping the Republic of Letters, Stanford University

See also:

☞ Dena Goodman, The Republic of letters: a cultural history of the French enlightenment, Cornell University Press, 1996
☞ April Shelford, Transforming the republic of letters: Pierre-Daniel Huet and European intellectual life, 1650-1720, University Rochester Press, 2007
☞ New social media? Same old, same old, say Stanford experts, Stanford University News, Nov 2, 2011.
☞ Cynthia Haven, Hot new social media maybe not so new: plus ça change, plus c’est la même chose, Stanford University The Book Haven, Nov 2, 2011


In an era of global interconnectedness, what is the nature of cross-cultural exchange?


"If you look at the world through a multicultural lens, you realize that that whole idea of exploration is a 19th century concept that has no meaning any more. I think that anthropology actually began in a beautiful way, which was that by studying another culture, albeit the exotic other, you could learn something about your common humanity and about humanity in general. Then it was very quickly co-opted by the ideology of its time and the anthropological lens was used to rationalize distinctions of class and race. Culture came to be seen as a set of frozen moments in time in some imagined evolutionary progression that of course inevitably placed Victorian Europe at the apex and sloped down to the so-called primitives of the world. That idea is now completely irrelevant, but that is not to say that there can be no explorations of spirit and of culture. Rather, all of life is an exploration of new paradigms of thought.

Q: So exploration becomes an intellectual rather than a physical phenomenon?

One of the most exciting explorations of our time has come from the realm of genetics. We have literally proven to be true what the philosophers always hoped, which is that we are all connected, all brothers and sisters. Not in the spirit of some hippie cliché, but quite literally: we are cut from the same genetic cloth. We always say Americans are so culturally myopic. Actually, all peoples are. The names of many Indian tribes translate to “the people”—the implication being that everybody else is a savage. And that ethnocentric point of view was what almost all cultures celebrated throughout history. But if you now accept that all populations are descended from this handful of people that walked out of Africa 65,000 years ago, you have to also accept that they all fundamentally share the same raw intellectual capacity, the same genius. How that genius is expressed is simply a matter of choice and cultural orientation. Suddenly you see there is no progression of culture; there is a series of options. It’s not that other peoples are simply failed attempts at being us, or that they’ve missed the train of history by not being like us. No, they are unique answers to a fundamental question—what does it mean to be human? When we stop thinking of ourselves as the paragon of humanity’s potential, it also liberates us from this conceit that we’re on a train to disaster. You realize what we are is just one option, rooted in a relatively shallow past of just 300 years of industrial activity, and that these other peoples offer not some road map to where we should go but a suggestion that there are other ways of living. Those kinds of intellectual revelations that are the outcome of intellectual exploration are every bit as valid as discovering a new continent. Exploring how we’re going to all live on this planet—that’s one we all need to be a part of.

Q: We now have instant access to images and voices from around the world. How has that changed the nature of cross-cultural encounters?

Whatever our notion of culture may have been when societies lived as isolates is long gone, and we’re moving towards a world where the issue isn’t modern versus traditional, but just the rights of free people to choose the components of their lives. How can we find a way that people can have access to the benefits of modernity, without that engagement demanding the death of their culture? For one thing, when people lose the conditions and roots of their traditions, it is simply geopolitically unstable. My objection to “the world is flat” theory is that it implies that the world we’re all melting down to is our world. And that’s just not true. It’s going to be a more interconnected world, and it’s going to be a world that will be the consequence of all our own imaginings, but I would not want it to be, and it won’t be, just everybody melting down to being like us. (…)

Q: You’ve written about zombies, witch-doctors, and religion the world over. Where do you see magic in the contemporary world?

It’s not magic so much as metaphor. (…) In the Andes of Peru, a kid really does believe that a mountain is an acting spirit that will direct his destiny. That doesn’t mean he’s living in some la-la-land or fantasy; it means he has a solid sense of the earth being actually what we know it to be—a source of life, a source of food. The most important consequence is not whether the belief is true or not, but how it affects how people treat that mountain. If you think it’s divine you’re not going to blow it up.

Q: You’re advocating a kind of pragmatist environmentalist philosophy.

Exactly. People like to say indigenous people are closer to the earth, like some Rousseauvian ideal. That misses the whole point. I was raised to believe that the forests of British Columbia existed to be cut. That was the foundation of the ideology of what we called scientific forestry. Which was a total construct. It was a slogan; it wasn’t science. Yet we didn’t believe the earth had any resonance to us beyond board-feet cellulose. That is different from a kid from a tribe who had to go into those forests and confront animal spirits to bring back the wisdom to the potlatch. And again, it doesn’t matter whether that forest is the den of demons or just cellulose, that’s not the interesting question. It’s how the belief system affects the ecological footprint of people. So when I talk about magic, it’s really about metaphor. Metaphor is not air, not fluff. People always ask me, do I believe zombies are real? It’s like asking me do I believe Jesus Christ is real. Do I believe there was a guy who walked on water? I probably don’t. Do I believe that this phenomenon of Jesus Christ influences behavior on a daily basis? Of course I do! I can’t tell you if there are zombies walking around Haiti but I can tell you that the idea of zombies influences behavior in Haiti. We cling to our rationality. We liberated ourselves from a certain trap and we’re reluctant to go anywhere back towards it. It’s one of the reasons we are all so confused. If you think of the pace of change that people are expected to absorb and deal with, it’s pretty daunting.”

Wade Davis, Canadian anthropologist, ethnobotanist, "If You Think Something Is Divine, You’re Not Going To Blow It Up", The European, 07.03.2011

See also:

☞ Zygmunt Bauman: Europe’s task consists of passing on to all the art of everyone learning from everyone


Stewart Brand: ‘Look At the World Through the Eyes Of A Fool’


Q: Has society become too eager to discard things and ideas?

(…) I think we have become too shortsighted. Everything is moving faster, everybody is multitasking. Investments are made for short-term returns, democracies run on short-term election cycles. Speedy progress is great, but it is also chancy. When everything is moving fast, the future looks like it is next week. But what really counts is the future ten or hundred years from now. And we should also bear in mind that the history that matters is not only yesterday’s news but events from a decade or a century or a millennium ago. To balance that, we want to look at the long term: the last ten thousand years, the next ten thousand years. (…)

When NASA released the first photographs of the earth from space in the 1960s, people changed their frame of reference. We began to think differently about the earth, about our environment, about humanity. (…)

There had been many drawings of the earth from space, just like people made images of cities from above before we had hot-air balloons. But they were all wrong. Usually, images of the earth did not include any clouds, no weather, no climate. They also tended to neglect the shadow that much of the earth is usually in. From most angles, the earth appears as a crescent. Only when the sun is directly behind you would you see the whole planet brightly illuminated against the blackness of space. (…)

The question of framing

I think there is always the question of framing: How do we look at things? The first photos of the earth changed the frame. We began to talk more about “humans” and less about Germans or Americans. We started talking about the planet as a whole. That, in a way, gave us the ability to think about global problems like climate change. We did not have the idea of a global solution before. Climate change is a century-sized problem. Never before has humanity tried to tackle something on such a long temporal scale. Both the large scale and the long timeframe have to be taken seriously.

Q: Do you believe in something like a human identity?

In a way, the ideal breakthrough would be to discover alien life. That would give us a clear sense of our humanity. But even without that, we have done pretty well in stepping outside our usual frame of reference and looking at the planet and at the human race from the outside. That’s nice. I would prefer if we didn’t encounter alien intelligence for a while. (…)

Q: So we have to improve the extrapolations and predictions that we make based on present data sets?

We like to think that we are living in a very violent time, that the future looks dark. But the data says that violence has declined every millennium, every century, every decade. The reduction in cruelty is just astounding. So we should not focus too much on the violence that has marked the twentieth century. The interesting question is how we can continue that trend of decreasing violence into the future. What options are open to us to make the world more peaceful? Those are data-based questions. (…)

Q: When you started to publish the Whole Earth Catalogue in 1968, you said that you wanted to create a database so that “anyone on Earth can pick up a telephone and find out the complete information on anything.” Is that the idea of the internet, before the internet?

Right, I had forgotten about that quote. Isn’t it nice that I didn’t have to go through the work of collecting that information? It just happened organically. Some people say to me that I should revive the catalogue and my answer is: The internet is better than any catalogue or encyclopedia could ever be. (…)

I don’t think the form determines the triviality of information or the level of discussion. With many more opportunities and much lower costs of online participation, we are in a position to really expand and improve those discourses. (…)

When Nicholas Negroponte said a few years ago that every child in the world needed a laptop computer, he was right. Many people were skeptical of his idea, but they have been proven wrong. When you give internet access to people in the developing world, they immediately start forming educational networks. They expand their horizons, children teach their parents how to read and write. (…)

Q: On the back cover of the 1974 Whole Earth Catalogue, it said something similar: “Stay hungry, stay foolish”. Why?

It proposes that a beginner’s mind is the way to look at new things. We need a combination of confidence and of curiosity. It is a form of deep-seated opportunism that goes to the core of our nature and is very optimistic. I haven’t been killed by my foolishness yet, so let’s keep going, let’s take chances. The phrase expresses that our knowledge is always incomplete, and that we have to be willing to act on imperfect knowledge. That allows you to open your mind and explore. It means putting aside the explanations provided by social constructs and ideologies.

I really enjoyed your interview with Wade Davis. He makes a persuasive case for allowing native peoples to keep their cultures intact. That’s the idea behind the Rosetta Project as well. Most Americans are limited by the fact that they only speak one language. Being multilingual is a first step to being more aware of different perspectives on the world. We should expand our cognitive reach. I think there are many ways to do that: Embrace the internet. Embrace science. Travel a lot. Learn about people who are unlike yourself. I spent much of my twenties with Native American tribes, for example. You miss a lot of important stuff if you only follow the beaten path. If you look at the world through the eyes of a fool, you will see more. But I probably hadn’t thought about all of this back in 1974. It was a very countercultural move.

Q: In politics, we often talk about policies that supposedly have no rational alternative. Is that a sign of the stifling effects of ideology?

Ideologies are stories we like to tell ourselves. That’s fine, as long as we remember that they are stories and not accurate representations of the world. When the story gets in the way of doing the right thing, there is something wrong with the story. Many ideologies involve the idea of evil: Evil people, evil institutions, et cetera. Marvin Minsky once said to me that the only real evil is the idea of evil. Once you let that go, the problems become manageable. The idea of pragmatism is that you go with the things that work and cast aside lovely and lofty theories. No theory can be coherent and comprehensive enough to provide a direct blueprint for practical actions. That’s the idea of foolishness again: You work with imperfect theories, but you don’t base your life on them.

Q: So “good” is defined in terms of a pragmatic assessment of “what works”?

Good is what creates more life and more options. That’s a useful frame. The opposite of that would not be evil, but less life and fewer options.”

Stewart Brand, American writer, best known as editor of the Whole Earth Catalog, "Look At the World Through the Eyes Of A Fool", The European, 30.05.2011

See also: Whole Earth Catalogue


David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain


How our brain constructs reality

The conscious mind—which is the part of you that flickers to life when you wake up in the morning: that bit of you—it’s like a stowaway on a transatlantic steamship, that’s taking credit for the journey, without acknowledging all the engineering underfoot.

I think what this means when we’re talking about knowing ourselves is exactly
what it meant when people were trying to understand our place in the cosmos,
400 years ago, when Galileo discovered the moons of Jupiter and realized that, in fact, we’re not at the center of things, but instead we’re way out on a distant edge. That’s essentially the same situation we’re in, where we’ve fallen from the center of ourselves.

But in Galileo’s case, what that caused is we now have a much more nuanced view of the cosmos. As Carl Sagan was fond of saying, it’s more wondrous and subtle than we could have ever imagined. And I think it’s exactly the same thing going on with the brain: we’re falling from the center of the brain, but what we’re discovering is that it’s much more amazing than we could have ever thought when we imagined that we were the ones sort of at the center of everything and driving the boat. (…)

As we want to go on this journey of exploring what the heck we’re made out of, the first thing to do is to recognize that what you’re seeing out there is not actually reality. You’re not sort of opening your eyes, and voila, there’s the world. Instead, your brain constructs the world. Your brain is trapped in darkness inside of your skull, and all it ever sees are electrical and chemical signals. So all the colors you see, and so on, that doesn’t really exist; that’s an interpretation by your brain. (…)

All we’re actually doing is seeing an internal model of the world; we’re not seeing what’s out there, we’re seeing just our internal model of it. And that’s why, when you move your eyes around, all you’re doing is updating that model.

And for that matter, when you blink your eyes and there are 80 milliseconds of blackness there, you don’t notice that, either. Because it’s not actually about what’s coming in the eyes; it’s about your internal construction. And, in fact, as I mention in the book, we don’t even need our eyes to see. When you are asleep and dreaming, your eyes are closed, but you’re having a full, rich visual experience—because it’s the same process of running your visual cortex, and then you believe that you are seeing. (…)

Because all the brain ever sees are these electrical and chemical signals, and it doesn’t necessarily know or care which ones are coming in through the eyes, or the ears, or the fingertips, or smell, or taste. All these things get converted just to electrical signals.

And so, it turns out what the brain is really good at—and the cortex in particular—is extracting information that has some sort of useful correlation with things in the outside world. And so, if you feed, let’s say, visual input into your ears, you will figure out how to see through your ears. Because the brain doesn’t care how it gets there; all it cares about is, Oh, there’s structure to this data that I can extract. (…)

I think it’s sort of the most amazing thing about the way brains are built: they’re constantly reconfiguring their own circuitry. (…)

It turns out that one of the main jobs of the brain is to save energy; and the way that it does this is by predicting what is going to come next. And if it sort of has a pretty good prediction of what’s happening next, then it doesn’t need to burn a lot of energy when that thing happens, because it’s already foreseen it. (…)

So, the job of the brain is to figure out what’s coming next; and if you have successfully done it, then there’s no point in consciousness being a part of what’s going on. (…)

Time perception

You’re not passively just watching the river of time flow by. Instead, just like with visual illusions, your brain is actively constructing time. (…)

When you can predict something, not only does your consciousness not come online, but it feels like it goes very fast. So, driving to work is very fast; but the very first time you did it, it seemed to take a very long time. And that’s because of the novelty and the amount of energy you had to burn the first time you did it—before you were able to predict it.

Essentially what prediction means, if it’s something you’re doing a lot, is that you’re actually reconfiguring the circuitry of the brain. You’re actually getting stuff down into the circuitry, which gives you speed and efficiency, but at the cost of conscious access. (…)

It’s not only the way we see vision and time, but it’s all of our cognition: it’s our morals, it’s what we’re attracted to, it’s what we believe in. All of these things are served up from these subterranean caverns of the mind. We often don’t have any access to what’s going on down there, and why we believe the things we do, why we act the way we do. (…)

The “illusion of truth”

You give people statements to rate for truth value, and then you bring them back a while later and give them more statements to judge as true or false, and so on. It turns out that if you repeat some of the statements from the first session, people are more likely to rate them as true the second time, whether or not they are true and even if they marked them false the first time, simply because, unconsciously, they know they have heard them before. (…)

I think this is part of the brain toolbox that children need: to really practice and learn skepticism and critical thinking skills. (…)

Some thoughts aren’t thinkable, because of the way that thoughts are constrained by our biology

Yes. As far as thoughts that we’re not able to think, that’s an idea that I just love to explore, because there’s all kinds of stuff we can’t see. Just as an example, if you take the electromagnetic radiation spectrum, what we call visible light is just one ten-billionth of that spectrum. So, we’re only seeing a very tiny sliver of that, because we have biological receptors that are tuned to that little part of the spectrum. But radio signals, and cell phone signals, and television signals, all that stuff is going right through your body, because you happen not to have biological receptors for that part of the spectrum.

So, what that means is that there’s a particular slice of the world that you can see. And what I wanted to explore in the book is that there’s also a slice of the world that you can think. In other words, because of evolutionary pressures, our psychology has been carved to think certain thoughts—this is the field known as evolutionary psychology—and that means there are other thoughts that are just like the cell phone signals, and radio signals, and so on, that we can’t even access.

Just as an example, try being sexually attracted to something that you’re not—like a chicken or a frog. But chickens and frogs find that to be the greatest thing in the world, to look at another chicken or frog. We only find that with humans. So, different species, which have otherwise pretty similar brains, have these very specific differences about the kinds of thoughts they can think. (…)

As far as nature vs. nurture goes, the answer nowadays is always both. It’s sort of a dead question to ask—nature vs. nurture—because it is absolutely true that we do not come to the table as a blank slate; we come to the table with a lot of stuff predisposed. But the whole rest of the process is an unpacking of the brain by world experience. (…)

The brain as the team of rivals. Rational vs. emotional

So, the way your brain ends up in the end is a very complicated tangle of genetics and environment. And environment includes, not only all of your childhood experiences and so on, but your in utero environment, toxins in the air, the things that you eat, experiences of abuse, and all of that stuff—and your culture; your culture has a lot to do with the way your brain gets wired up. (…)

One of the culminating issues in the book is that your brain is really like a team of rivals, where you have these different neural subpopulations that are always battling it out to control the single output channel of your behavior; and you’ve got all these different networks that are fighting it out. And so, there are parts of your brain that can be xenophobic, and other parts of your brain that maybe decide to override that, and they’re not xenophobic. And I think this gives us a much more nuanced view, in the end, of who we are, and also who other people are. (…)

When people do neuroimaging studies, you can actually find situations where it looks like you have some parts that are doing essentially a math problem in the brain, and other parts that really care about how things feel, and how they’ll make the body feel. And you can image these different networks, and you can also see them fighting one another during some sort of moral decision-making.

So, probably the best way for us to look at it is that when we talk about reason vs. emotion, we’re talking about sort of a summary—sort of a shorthand way of talking about these different neural networks. And, of course, decisions can be much more complicated than that, often. But sometimes they can be essentially boiled down to that.

It’s funny; the ancient Greeks also felt that this was the right way to divide it. Again, it’s an oversimplification, but the Greeks had this idea that life is like being a charioteer: you’re holding the white horse of reason and the black horse of passion, and they’re both always trying to pull you off the road in different directions, and your job is to keep to the middle. And that’s about right. They had some insight there: you really do have these competing networks. (…)

The field of artificial intelligence

The field of artificial intelligence has become stuck, and I’m trying to figure out why. I think it’s because when programmers are trying to make a robot do something, they come up with solutions: like here’s how you find the block of wood, here’s how you grip the block of wood, here’s how you stack the block of wood, and so on. And each time they make a little subroutine to take care of a little piece of the problem; then they say, OK, good; that part’s done.

But Mother Nature never does that. Mother Nature chronically reinvents things all the time—accidentally. Just by mutation, there are always new ways to do things, like detect motion, or control muscles, or whatever it is that it’s trying to do—pick up on new energy sources, and so on. And as a result, what you have are multiple ways of solving problems in real biological creatures.

They don’t divide up neatly into little modules, the same way that a computer program does. Instead, for example, in the mammalian cortex it appears that Mother Nature probably came up with about three or four different ways to detect motion. And all of these act like parties in the neural parliament. They all sort of think that they know how to detect motion best, and they battle it out with the other parties.

And so, I think this is one of the main lessons that we get, when we look for it, in what happens when we see brain damage in people. You can lose aspects of your vision and not lose other aspects; or, often, you can get brain damage and you don’t see a deficit at all, even though you’ve just sort of bombed out part of what you would expect to give a deficit.

In other words, you have this very complicated interaction of these different parties that are battling it out. And I think they, in general, don’t divide neatly along the cortical and subcortical division; instead, whether in lizard brains or in our brains, these networks can be made up of subcortical and cortical parts together. (…)

The illusion we have that we have control

It’s like the analogy of a young monarch who takes the throne of his country and takes credit for the glory of the country, without thinking about the thousands of workers who are making it all work. And that’s essentially the situation we’re in.

Take, just as an example, when you have an idea, you say, ‘Oh, I just thought of something.’ But it wasn’t actually you that thought of it. Your brain has been working on that behind the scenes for hours or days, consolidating information, putting things together, and finally it serves up something to you. It serves up an idea; and then you take credit for it. But this whole thing leads to a very interesting question about the illusion we have that we have control. (…)

What does this mean for responsibility?

I think what it means is that when we look at something like the legal system, something like blameworthiness is actually the wrong question for us to ask. I mentioned before that brains end up being the end result of a very complicated process of genes intertwining with environment. So, in the end, when there’s a brain standing in front of the judge’s bench, it doesn’t make sense for us to ask, OK, well, are you blameworthy; to what extent are you blameworthy; to what extent was it your biology vs. you; because it’s not clear that there’s any meaningful difference between those two things, anyway.

I’m not saying this forgives anybody. We still have to take people off the street if they’re breaking the law. But what it means is that asking the question of blameworthiness isn’t where we should be putting our time. Instead, all we need to be doing is having a forward-looking legal system, where we say what do we do with you from here?

We don’t care how you got here, because we can’t ever know. It might have been in utero cocaine poisoning, childhood abuse, lead paint on the walls, and all of these other things that influenced your brain development, but we can’t untangle that. And it’s not anybody’s fault. It’s not your fault or anybody else’s. But we can’t do anything about it.

So, all we need to do is say, given the kind of person you are now, what is the probability of recidivism? In other words, how likely are you to transfer this behavior to a future situation and re-offend? And then we can predicate sentence length on that probability of re-offense. And, equally as importantly, along with customized sentencing, we can have customized rehabilitation.

So, there are lots of things that can go wrong with people’s brains that we can usefully address, and try to help people, instead of throwing everybody in jail. As it stands now, 30% of the prison population has mental illness. Not only is it inhumane to treat our mentally ill this way and turn prisons into a de facto mental healthcare system, but it’s also not cost-effective.

And it’s also criminogenic—meaning it causes more crime. Because everybody knows when you put people in jail, that limits their employment opportunities, it breaks their social circles, and they end up coming back to the jail, more often than not. So, it’s very clear how the legal system should straighten itself out: just make itself forward-looking, and say, OK, all we need to do is get good at assessing risk into the future. (…)

A neural parliament

One of the really amazing lessons is this bit about being a neural parliament, and not being made up of just one thing. I think this gives us a much better view of why we can argue with ourselves, and curse at ourselves, and contract with ourselves, and why we can do things where we look back and we think, Wow, how did I do that? I’m not the kind of person who would do that.

But, in fact, you are many people. As Walt Whitman said, “I am large, I contain multitudes.” So, I think this gives us a better view of ourselves, and it also tells us ways to set up our own behavior to become the kind of people we want to be, by structuring things in our life so that the short-term parties interested in instant gratification don’t always win the battle.”

David Eagleman, neuroscientist at Baylor College of Medicine, where he directs the Laboratory for Perception and Action and the Initiative on Neuroscience and Law, Interview with Dr. David Eagleman, Author of Incognito: The Secret Lives of the Brain, Brain Science Podcast, Episode #75, Originally Aired 7/8/2011 (transcript in pdf) (Illustration source: David Plunkert for TIME)

The brain… it makes you think. Doesn’t it?

David Eagleman: “A person is not a single entity of a single mind: a human is built of several parts, all of which compete to steer the ship of state. As a consequence, people are nuanced, complicated, contradictory. We act in ways that are sometimes difficult to detect by simple introspection. To know ourselves increasingly requires careful studies of the neural substrate of which we are composed. (…)

Raymond Tallis: Of course, everything about us, from the simplest sensation to the most elaborately constructed sense of self, requires a brain in some kind of working order. (…)

[But] we are not stand-alone brains. We are part of a community of minds, a human world, that is remote in many respects from what can be observed in brains. Even if that community ultimately originated from brains, this was the work of trillions of brains over hundreds of thousands of years: individual, present-day brains are merely the entrance ticket to the drama of social life, not the drama itself. Trying to understand the community of minds in which we participate by imaging neural tissue is like trying to hear the whispering of woods by applying a stethoscope to an acorn. (…)

David Eagleman: The uses of neuroscience depend on the question being asked. Inquiries about economies, customs, or religious wars require an examination of what transpires between minds, not just within them. Indeed, brains and culture operate in a feedback loop, each influencing the other.

Nonetheless, culture does leave its signature in the circuitry of the individual brain. If you were to examine an acorn by itself, it could tell you a great deal about its surroundings – from moisture to microbes to the sunlight conditions of the larger forest. By analogy, an individual brain reflects its culture. Our opinions on normality, custom, dress codes and local superstitions are absorbed into our neural circuitry from the social forest around us. To a surprising extent, one can glimpse a culture by studying a brain. Moral attitudes toward cows, pigs, crosses and burkas can be read from the physiological responses of brains in different cultures.

Beyond culture, there are fruitful questions to be asked about individual experience. Your experience of being human – from thoughts to actions to pathologies to sensations – can be studied in your individual brain with some benefit. With such study, we can come to understand how we see the world, why we argue with ourselves, how we fall prey to cognitive illusions, and the unconscious data-streams of information that influence our opinions.

How did I become aware enough about unawareness to write about it in Incognito? It was an unlikely feat that required millennia of scientific observation by my predecessors. An understanding of the limitations of consciousness is difficult to achieve simply by consulting our intuition. It is revealed only by study.

To be clear, this limitation does not make us equivalent to automatons. But it does give a richer understanding of the wellspring of our ideas, moral intuitions, biases and beliefs. Sometimes these internal drives are genetically embedded, other times they are culturally instructed – but in all cases their mark ends up written into the fabric of the brain. (…)

Neuroscience is uncovering a bracing view of what’s happening below the radar of our conscious awareness, but that makes your life no more “helpless, ignorant, and zombie-like” than whatever your life is now. If you were to read a cardiology book to learn how your heart pumps, would you feel less alive and more despondently mechanical? I wouldn’t. Understanding the details of our own biological processes does not diminish the awe, it enhances it. Like flowers, brains are more beautiful when you can glimpse the vast, intricate, exotic mechanisms behind them.”

David Eagleman, neuroscientist at Baylor College of Medicine, where he directs the Laboratory for Perception and Action, bestselling author

Raymond Tallis, British philosopher, secular humanist, poet, novelist, cultural critic, former professor of geriatric medicine at Manchester University

The brain… it makes you think. Doesn’t it?, The Guardian, The Observer, 29 April 2012.

See also:

Time and the Brain. Eagleman: ‘Time is not just as a neuronal computation—a matter for biological clocks—but as a window on the movements of the mind’
David Eagleman on the conscious mind
David Eagleman on Being Yourselves, lecture at Conway Hall, London, 10 April 2011.
The Experience and Perception of Time, Stanford Encyclopedia of Philosophy
Your brain creates your sense of self, incognito, CultureLab, Apr 19, 2011.
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
Iain McGilchrist on The Divided Brain and the Making of the Western World
Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking
The Relativity of Truth - a brief résumé, Lapidarium
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
☞ David Eagleman, Your Brain Knows a Lot More Than You Realize, DISCOVER Magazine, Oct 27, 2011
☞ David Eagleman, Henry Markram, Will We Ever Understand the Brain?, California Academy of Sciences San Francisco, CA, video, 11.02.2011
☞ Bruce Hood, The Self Illusion: How the Brain Creates Identity, May, 2012
Mind & Brain tag on Lapidarium


Susan Blackmore on memes and “temes”     

(Illustration credit: Collective Memes)

”[Darwin] had no concept of the idea of an algorithm. But that’s what he described in that book, and this is what we now know as the evolutionary algorithm. The principle is you just need those three things — variation, selection and heredity. And as Dan Dennett puts it, if you have those, then you must get evolution. Or design out of chaos without the aid of mind. (…)

The principle here applies to anything that is copied with variation and selection. We’re so used to thinking in terms of biology, we think about genes this way. Darwin didn’t of course, he didn’t know about genes. He talked mostly about animals and plants, but also about languages evolving and becoming extinct. But the principle of universal Darwinism is that any information that is varied and selected will produce design.
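The three ingredients named here, heredity (copying), variation and selection, can be sketched as a toy evolutionary algorithm. Everything concrete below (the target string, the alphabet, the mutation rate, the population size) is an illustrative assumption, not anything from the talk; the point is only that blind copying with occasional errors plus selection is enough to produce design.

```python
import random

TARGET = "the selfish meme"                 # illustrative selection criterion
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    """Selection: count positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    """Heredity with variation: copy the parent, occasionally changing a letter."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in parent
    )

def evolve(pop_size=200, generations=500):
    # Start from pure noise: random strings of the right length.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        best = max(population, key=fitness)                 # selection
        if best == TARGET:
            break
        population = [mutate(best) for _ in range(pop_size)]  # copying + variation
    return max(population, key=fitness)

print(evolve())
```

No step in the loop "knows" the goal; order emerges solely from differential survival of copies, which is the algorithm Blackmore attributes to Darwin.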

And this is what Richard Dawkins was on about in his 1976 bestseller, “The Selfish Gene.” The information that is copied, he called the replicator. It selfishly copies. (…)

Look around you, here will do, in this room. All around us, still clumsily drifting about in its primeval soup of culture, is another replicator. Information that we copy from person to person by imitation, by language, by talking, by telling stories, by wearing clothes, by doing things. This is information copied with variation and selection. This is a design process going on. He wanted a name for the new replicator. So he took the Greek word mimeme, which means that which is imitated. (…)

There are two replicators now on this planet. From the moment that our ancestors, perhaps two and a half million years ago or so, began imitating, there was a new copying process. Copying with variation and selection. A new replicator was let loose, and it could never be — right from the start, it could never be that human beings who let loose this new creature, could just copy the useful, beautiful, true things, and not copy the other things. While their brains were having an advantage from being able to copy — lighting fires, keeping fires going, new techniques of hunting, these kinds of things — inevitably they were also copying putting feathers in their hair, or wearing strange clothes, or painting their faces, or whatever.

So you get an arms race between the genes which are trying to get the humans to have small economical brains and not waste their time copying all this stuff, and the memes themselves, like the sounds that people made and copied — in other words, what turned out to be language — competing to get the brains to get bigger and bigger. So the big brain on this theory is driven by the memes. (…)

Language is a parasite that we’ve adapted to, not something that was there originally for our genes, on this view. And like most parasites, it can start out dangerous, but then it co-evolves and adapts, and we end up with a symbiotic relationship with this new parasite.

And so from our perspective, we don’t realize that that’s how it began. So this is a view of what humans are. All other species on this planet are gene machines only, they don’t imitate at all well, hardly at all. We alone are gene machines and meme machines as well. The memes took a gene machine and turned it into a meme machine.

But that’s not all. We have new kind of memes now. I’ve been wondering for a long time, since I’ve been thinking about memes a lot, is there a difference between the memes that we copy — the words we speak to each other, the gestures we copy, the human things — and all these technological things around us? I have always, until now, called them all memes, but I do honestly think now we need a new word for technological memes.

Let’s call them technomemes or temes. Because the processes are getting different. We began, perhaps 5,000 years ago, with writing. We put the storage of memes out there on a clay tablet, but in order to get true temes and true teme machines, you need to get the variation, the selection and the copying, all done outside of humans. And we’re getting there. We’re at this extraordinary point where we’re nearly there, that there are machines like that. And indeed, in the short time I’ve already been at TED, I see we’re even closer than I thought we were before.

So actually, now the temes are forcing our brains to become more like teme machines. Our children are growing up very quickly learning to read, learning to use machinery. We’re going to have all kinds of implants, drugs that force us to stay awake all the time. We’ll think we’re choosing these things, but the temes are making us do it. So we’re at this cusp now of having a third replicator on our planet. Now, what about what else is going on out there in the universe? Is there anyone else out there? People have been asking this question for a long time. (…)

In 1961, Frank Drake made his famous equation, but I think he concentrated on the wrong things. It’s been very productive, that equation. He wanted to estimate N, the number of communicative civilizations out there in our galaxy. And he included in there the rate of star formation, the rate of planets, but crucially, intelligence.
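For reference, the equation Drake proposed in 1961 is a product of the factors the talk alludes to:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

where R* is the rate of star formation in the galaxy, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such system, f_l, f_i and f_c the fractions of those that go on to develop life, intelligence, and detectable communication, and L the average length of time such a civilization releases detectable signals.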

I think that’s the wrong way to think about it. Intelligence appears all over the place, in all kinds of guises. Human intelligence is only one kind of a thing. But what’s really important is the replicators you have and the levels of replicators, one feeding on the one before. So I would suggest that we don’t think intelligence, we think replicators.

Think of the big brain. How many mothers do we have here? You know all about big brains. They’re dangerous to give birth to, agonizing to give birth to. My cat gave birth to four kittens, purring all the time. Ah, mm — slightly different.

But not only is it painful, it kills lots of babies, it kills lots of mothers, and it’s very expensive to produce. The genes are forced into producing all this myelin, all the fat to myelinate the brain. Do you know, sitting here, your brain is using about 20 percent of your body’s energy output for two percent of your body weight. It’s a really expensive organ to run. Why? Because it’s producing the memes. (…)

Well, we did pull through, and we adapted. But now, we’re hitting, as I’ve just described, we’re hitting the third replicator point. And this is even more dangerous — well, it’s dangerous again. Why? Because the temes are selfish replicators and they don’t care about us, or our planet, or anything else. They’re just information — why would they? They are using us to suck up the planet’s resources to produce more computers, and more of all these amazing things we’re hearing about here at TED. Don’t think, “Oh, we created the Internet for our own benefit.” That’s how it seems to us. Think temes spreading because they must. We are the old machines.

Now, are we going to pull through? What’s going to happen? What does it mean to pull through? Well, there are kind of two ways of pulling through. One that is obviously happening all around us now, is that the temes turn us into teme machines, with these implants, with the drugs, with us merging with the technology. And why would they do that? Because we are self-replicating. We have babies. We make new ones, and so it’s convenient to piggyback on us, because we’re not yet at the stage on this planet where the other option is viable. (…) Where the teme machines themselves will replicate themselves. That way, it wouldn’t matter if the planet’s climate was utterly destabilized, and it was no longer possible for humans to live here. Because those teme machines, they wouldn’t need — they’re not squishy, wet, oxygen-breathing, warmth-requiring creatures. They could carry on without us.

So, those are the two possibilities. The second, I don’t think we’re that close. It’s coming, but we’re not there yet. The first, it’s coming too. But the damage that is already being done to the planet is showing us how dangerous the third point is, that third danger point, getting a third replicator. And will we get through this third danger point, like we got through the second and like we got through the first? Maybe we will, maybe we won’t. I have no idea.”

Susan Blackmore, PhD, an English freelance writer, lecturer, and broadcaster on psychology, Susan Blackmore on memes and “temes”, TED talk, Feb 2008 (transcript)

See also:

What Defines a Meme? James Gleick: Our world is a place where information can behave like human genes and ideas can replicate, mutate and evolve
☞ Adam McNamara, Can we measure memes?, Department of Psychology, University of Surrey, UK


What Defines a Meme? James Gleick: Our world is a place where information can behave like human genes and ideas can replicate, mutate and evolve

With the rise of information theory, ideas were seen as behaving like organisms, replicating by leaping from brain to brain, interacting to form new ideas and evolving in what the scientist Roger Sperry called “a burstwise advance.” (Illustration by Stuart Bradford)

"When I muse about memes, I often find myself picturing an ephemeral flickering pattern of sparks leaping from brain to brain, screaming "Me, me!"Douglas Hofstadter (1983)

"Now through the very universality of its structures, starting with the code, the biosphere looks like the product of a unique event. (…) The universe was not pregnant with life, nor the biosphere with man. Our number came up in the Monte Carlo game. Is it any wonder if, like a person who has just made a million at the casino, we feel a little strange and a little unreal?"Jacques Monod (1970)

"What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions.”Richard Dawkins (1986)

"The cells of an organism are nodes in a richly interwoven communications network, transmitting and receiving, coding and decoding. Evolution itself embodies an ongoing exchange of information between organism and environment. “If you want to understand life,” [Richard] Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.” (…)

The rise of information theory aided and abetted a new view of life. The genetic code—no longer a mere metaphor—was being deciphered. Scientists spoke grandly of the biosphere: an entity composed of all the earth’s life-forms, teeming with information, replicating and evolving. And biologists, having absorbed the methods and vocabulary of communications science, went further to make their own contributions to the understanding of information itself.

Jacques Monod, the Parisian biologist who shared a Nobel Prize in 1965 for working out the role of messenger RNA in the transfer of genetic information, proposed an analogy: just as the biosphere stands above the world of nonliving matter, so an “abstract kingdom” rises above the biosphere. The denizens of this kingdom? Ideas.

“Ideas have retained some of the properties of organisms,” he wrote. “Like them, they tend to perpetuate their structure and to breed; they too can fuse, recombine, segregate their content; indeed they too can evolve, and in this evolution selection must surely play an important role.”

Ideas have “spreading power,” he noted—“infectivity, as it were”—and some more than others. An example of an infectious idea might be a religious ideology that gains sway over a large group of people. The American neurophysiologist Roger Sperry had put forward a similar notion several years earlier, arguing that ideas are “just as real” as the neurons they inhabit. Ideas have power, he said:

"Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and thanks to global communication, in far distant, foreign brains. And they also interact with the external surroundings to produce in toto a burstwise advance in evolution that is far beyond anything to hit the evolutionary scene yet.

Monod added, “I shall not hazard a theory of the selection of ideas.” There was no need. Others were willing.

Richard Dawkins made his own jump from the evolution of genes to the evolution of ideas. For him the starring role belongs to the replicator, and it scarcely matters whether replicators were made of nucleic acid. His rule is “All life evolves by the differential survival of replicating entities.” Wherever there is life, there must be replicators. Perhaps on other worlds replicators could arise in a silicon-based chemistry—or in no chemistry at all.

What would it mean for a replicator to exist without chemistry? “I think that a new kind of replicator has recently emerged on this very planet,” Dawkins proclaimed near the end of his first book, The Selfish Gene, in 1976. “It is staring us in the face. It is still in its infancy, still drifting clumsily about in its primeval soup, but already it is achieving evolutionary change at a rate that leaves the old gene panting far behind.” That “soup” is human culture; the vector of transmission is language, and the spawning ground is the brain.

For this bodiless replicator itself, Dawkins proposed a name. He called it the meme, and it became his most memorable invention, far more influential than his selfish genes or his later proselytizing against religiosity. “Memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation,” he wrote. They compete with one another for limited resources: brain time or bandwidth. They compete most of all for attention. For example:

Ideas. Whether an idea arises uniquely or reappears many times, it may thrive in the meme pool or it may dwindle and vanish. The belief in God is an example Dawkins offers—an ancient idea, replicating itself not just in words but in music and art. The belief that Earth orbits the Sun is no less a meme, competing with others for survival. (Truth may be a helpful quality for a meme, but it is only one among many.)

Catchphrases. One text snippet, “What hath God wrought?” appeared early and spread rapidly in more than one medium. Another, “Read my lips,” charted a peculiar path through late twentieth-century America. “Survival of the fittest” is a meme that, like other memes, mutates wildly (“survival of the fattest”; “survival of the sickest”; “survival of the fakest”; “survival of the twittest”; … ).

Images. In Isaac Newton’s lifetime, no more than a few thousand people had any idea what he looked like, though he was one of England’s most famous men, yet now millions of people have quite a clear idea, based on replicas of copies of rather poorly painted portraits. Even more pervasive and indelible are the smile of Mona Lisa, The Scream of Edvard Munch, and the silhouettes of various fictional extraterrestrials. These are memes, living a life of their own, independent of any physical reality. “This may not be what George Washington looked like then,” a tour guide was overheard saying of the Gilbert Stuart painting at the Metropolitan Museum of Art, “but this is what he looks like now.” Exactly.

Memes emerge in brains and travel outward, establishing beachheads on paper and celluloid and silicon and anywhere else information can go. They are not to be thought of as elementary particles but as organisms. The number three is not a meme; nor is the color blue, nor any simple thought, any more than a single nucleotide can be a gene. Memes are complex units, distinct and memorable—units with staying power.

Also, an object is not a meme. The hula hoop is not a meme; it is made of plastic, not of bits. When this species of toy spread worldwide in a mad epidemic in 1958, it was the product, the physical manifestation, of a meme, or memes: the craving for hula hoops; the swaying, swinging, twirling skill set of hula-hooping. The hula hoop itself is a meme vehicle. So, for that matter, is each human hula hooper—a strikingly effective meme vehicle, in the sense neatly explained by the philosopher Daniel Dennett: “A wagon with spoked wheels carries not only grain or freight from place to place; it carries the brilliant idea of a wagon with spoked wheels from mind to mind.” Hula hoopers did that for the hula hoop’s memes—and in 1958 they found a new transmission vector, broadcast television, sending its messages immeasurably faster and farther than any wagon. The moving image of the hula hooper seduced new minds by hundreds, and then by thousands, and then by millions. The meme is not the dancer but the dance.

For most of our biological history memes existed fleetingly; their main mode of transmission was the one called “word of mouth.” Lately, however, they have managed to adhere in solid substance: clay tablets, cave walls, paper sheets. They achieve longevity through our pens and printing presses, magnetic tapes and optical disks. They spread via broadcast towers and digital networks. Memes may be stories, recipes, skills, legends or fashions. We copy them, one person at a time. Alternatively, in Dawkins’ meme-centered perspective, they copy themselves.

“I believe that, given the right conditions, replicators automatically band together to create systems, or machines, that carry them around and work to favor their continued replication,” he wrote. This was not to suggest that memes are conscious actors; only that they are entities with interests that can be furthered by natural selection. Their interests are not our interests. “A meme,” Dennett says, “is an information-packet with attitude.” When we speak of fighting for a principle or dying for an idea, we may be more literal than we know.

Tinker, tailor, soldier, sailor…. Rhyme and rhythm help people remember bits of text. Or: rhyme and rhythm help bits of text get remembered. Rhyme and rhythm are qualities that aid a meme’s survival, just as strength and speed aid an animal’s. Patterned language has an evolutionary advantage. Rhyme, rhythm and reason—for reason, too, is a form of pattern. I was promised on a time to have reason for my rhyme; from that time unto this season, I received nor rhyme nor reason.

Like genes, memes have effects on the wide world beyond themselves. In some cases (the meme for making fire; for wearing clothes; for the resurrection of Jesus) the effects can be powerful indeed. As they broadcast their influence on the world, memes thus influence the conditions affecting their own chances of survival. The meme or memes comprising Morse code had strong positive feedback effects. Some memes have evident benefits for their human hosts (“Look before you leap,” knowledge of CPR, belief in hand washing before cooking), but memetic success and genetic success are not the same. Memes can replicate with impressive virulence while leaving swaths of collateral damage—patent medicines and psychic surgery, astrology and satanism, racist myths, superstitions and (a special case) computer viruses. In a way, these are the most interesting—the memes that thrive to their hosts’ detriment, such as the idea that suicide bombers will find their reward in heaven.

When Dawkins first floated the meme meme, Nicholas Humphrey, an evolutionary psychologist, said immediately that these entities should be considered “living structures, not just metaphorically but technically”:

When you plant a fertile meme in my mind you literally parasitize my brain, turning it into a vehicle for the meme’s propagation in just the way that a virus may parasitize the genetic mechanism of a host cell. And this isn’t just a way of talking—the meme for, say, “belief in life after death” is actually realized physically, millions of times over, as a structure in the nervous systems of individual men the world over.

Most early readers of The Selfish Gene passed over memes as a fanciful afterthought, but the pioneering ethologist W. D. Hamilton, reviewing the book for Science, ventured this prediction:

"Hard as this term may be to delimit—it surely must be harder than gene, which is bad enough—I suspect that it will soon be in common use by biologists and, one hopes, by philosophers, linguists, and others as well and that it may become absorbed as far as the word “gene” has been into everyday speech."

Memes could travel wordlessly even before language was born. Plain mimicry is enough to replicate knowledge—how to chip an arrowhead or start a fire. Among animals, chimpanzees and gorillas are known to acquire behaviors by imitation. Some species of songbirds learn their songs, or at least song variants, after hearing them from neighboring birds (or, more recently, from ornithologists with audio players). Birds develop song repertoires and song dialects—in short, they exhibit a birdsong culture that predates human culture by eons. These special cases notwithstanding, for most of human history memes and language have gone hand in glove. (Clichés are memes.) Language serves as culture’s first catalyst. It supersedes mere imitation, spreading knowledge by abstraction and encoding.

Perhaps the analogy with disease was inevitable. Before anyone understood anything of epidemiology, its language was applied to species of information. An emotion can be infectious, a tune catchy, a habit contagious. “From look to look, contagious through the crowd / The panic runs,” wrote the poet James Thomson in 1730. Lust, likewise, according to Milton: “Eve, whose eye darted contagious fire.” But only in the new millennium, in the time of global electronic transmission, has the identification become second nature. Ours is the age of virality: viral education, viral marketing, viral e-mail and video and networking. Researchers studying the Internet itself as a medium—crowdsourcing, collective attention, social networking and resource allocation—employ not only the language but also the mathematical principles of epidemiology.
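Researchers who borrow epidemiology’s mathematics for information spread typically start from compartment models such as SIR (susceptible, infected, recovered). Purely as an illustrative sketch—the parameter values below are invented, not drawn from any study mentioned here—a meme’s spread can be simulated the same way a disease’s is:

```python
# Minimal discrete-step SIR model of "viral" spread.
# beta: transmission rate (contacts that pass the meme on)
# gamma: recovery rate (losing interest, ceasing to transmit)

def simulate_sir(beta, gamma, s0, i0, r0, steps, dt=0.01):
    """Return final (s, i, r) population fractions after `steps` Euler steps."""
    s, i, r = s0, i0, r0
    for _ in range(steps):
        new_infections = beta * s * i * dt   # contact-driven spread
        new_recoveries = gamma * i * dt      # loss of interest
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

# A "catchy" meme: transmission outpaces loss of interest (beta/gamma = 5).
s, i, r = simulate_sir(beta=0.5, gamma=0.1, s0=0.99, i0=0.01, r0=0.0,
                       steps=10_000)
```

With beta well above gamma, nearly the whole susceptible population ends up “recovered”—exposed and no longer transmitting—while a dull meme with beta below gamma never takes off. This is the sense in which virality researchers use not just the language but the mathematics of epidemics.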

One of the first to use the terms “viral text” and “viral sentences” seems to have been a reader of Dawkins named Stephen Walton of New York City, corresponding in 1981 with the cognitive scientist Douglas Hofstadter. Thinking logically—perhaps in the mode of a computer—Walton proposed simple self-replicating sentences along the lines of “Say me!” “Copy me!” and “If you copy me, I’ll grant you three wishes!” Hofstadter, then a columnist for Scientific American, found the term “viral text” itself to be even catchier.

"Well, now, Walton’s own viral text, as you can see here before your eyes, has managed to commandeer the facilities of a very powerful host—an entire magazine and printing press and distribution service. It has leapt aboard and is now—even as you read this viral sentence—propagating itself madly throughout the ideosphere!”

Hofstadter gaily declared himself infected by the meme meme.

One source of resistance—or at least unease—was the shoving of us humans toward the wings. It was bad enough to say that a person is merely a gene’s way of making more genes. Now humans are to be considered as vehicles for the propagation of memes, too. No one likes to be called a puppet. Dennett summed up the problem this way: “I don’t know about you, but I am not initially attracted by the idea of my brain as a sort of dung heap in which the larvae of other people’s ideas renew themselves, before sending out copies of themselves in an informational diaspora…. Who’s in charge, according to this vision—we or our memes?”

He answered his own question by reminding us that, like it or not, we are seldom “in charge” of our own minds. He might have quoted Freud; instead he quoted Mozart (or so he thought):

“In the night when I cannot sleep, thoughts crowd into my mind…. Whence and how do they come? I do not know and I have nothing to do with it. Those which please me I keep in my head and hum them” (…)

Later Dennett was informed that this well-known quotation was not Mozart’s after all. It had taken on a life of its own; it was a fairly successful meme.

For anyone taken with the idea of memes, the landscape was changing faster than Dawkins had imagined possible in 1976, when he wrote, “The computers in which memes live are human brains.” By 1989, the time of the second edition of The Selfish Gene, having become an adept programmer himself, he had to amend that: “It was obviously predictable that manufactured electronic computers, too, would eventually play host to self-replicating patterns of information.” Information was passing from one computer to another “when their owners pass floppy discs around,” and he could see another phenomenon on the near horizon: computers connected in networks. “Many of them,” he wrote, “are literally wired up together in electronic mail exchange…. It is a perfect milieu for self-replicating programs to flourish.” Indeed, the Internet was in its birth throes. Not only did it provide memes with a nutrient-rich culture medium, it also gave wings to the idea of memes. Meme itself quickly became an Internet buzzword. Awareness of memes fostered their spread. (…)

Is this science? In his 1983 column, Hofstadter proposed the obvious memetic label for such a discipline: memetics. The study of memes has attracted researchers from fields as far apart as computer science and microbiology. In bioinformatics, chain letters are an object of study. They are memes; they have evolutionary histories. The very purpose of a chain letter is replication; whatever else a chain letter may say, it embodies one message: Copy me. One student of chain-letter evolution, Daniel W. VanArsdale, listed many variants, in chain letters and even earlier texts: “Make seven copies of it exactly as it is written” (1902); “Copy this in full and send to nine friends” (1923); “And if any man shall take away from the words of the book of this prophecy, God shall take away his part out of the book of life” (Revelation 22:19). Chain letters flourished with the help of a new 19th-century technology: “carbonic paper,” sandwiched between sheets of writing paper in stacks. Then carbon paper made a symbiotic partnership with another technology, the typewriter. Viral outbreaks of chain letters occurred all through the early 20th century. (…) Two subsequent technologies, when their use became widespread, provided orders-of-magnitude boosts in chain-letter fecundity: photocopying (c. 1950) and e-mail (c. 1995). (…)

Inspired by a chance conversation on a hike in the Hong Kong mountains, information scientists Charles H. Bennett from IBM in New York and Ming Li and Bin Ma from Ontario, Canada, began an analysis of a set of chain letters collected during the photocopier era. They had 33, all variants of a single letter, with mutations in the form of misspellings, omissions and transposed words and phrases. “These letters have passed from host to host, mutating and evolving,” they reported in 2003.

Like a gene, their average length is about 2,000 characters. Like a potent virus, the letter threatens to kill you and induces you to pass it on to your “friends and associates”—some variation of this letter has probably reached millions of people. Like an inheritable trait, it promises benefits for you and the people you pass it on to. Like genomes, chain letters undergo natural selection and sometimes parts even get transferred between coexisting “species.”

Reaching beyond these appealing metaphors, the three researchers set out to use the letters as a “test bed” for algorithms used in evolutionary biology. The algorithms were designed to take the genomes of various modern creatures and work backward, by inference and deduction, to reconstruct their phylogeny—their evolutionary trees. If these mathematical methods worked with genes, the scientists suggested, they should work with chain letters, too. In both cases the researchers were able to verify mutation rates and relatedness measures.
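A toy version of that “test bed” idea can be sketched in a few lines: treat each letter variant as a genome, score pairs of variants by edit distance, and let the resulting distance matrix drive whatever tree-building algorithm you like. The three variants below are invented for illustration; they are not from the researchers’ actual corpus of 33 letters.

```python
# Pairwise edit distances between chain-letter variants, the raw material
# for phylogenetic reconstruction. Letters that differ only by small
# "mutations" (misspellings, swapped words) score low; distinct lineages
# score high.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

variants = [
    "make seven copies and send to nine friends",
    "make seven copys and send to nine freinds",   # misspelled descendant
    "copy this in full and send to nine friends",  # different lineage
]

dist = [[edit_distance(a, b) for b in variants] for a in variants]
```

The low distance between the first two variants marks them as close relatives, while the third reads as a separate lineage; tree-reconstruction algorithms such as neighbor-joining take exactly this kind of matrix as input.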

Still, most of the elements of culture change and blur too easily to qualify as stable replicators. They are rarely as neatly fixed as a sequence of DNA. Dawkins himself emphasized that he had never imagined founding anything like a new science of memetics. A peer-reviewed Journal of Memetics came to life in 1997—published online, naturally—and then faded away after eight years partly spent in self-conscious debate over status, mission and terminology. Even compared with genes, memes are hard to mathematize or even to define rigorously. So the gene-meme analogy causes uneasiness and the genetics-memetics analogy even more.

Genes at least have a grounding in physical substance. Memes are abstract, intangible and unmeasurable. Genes replicate with near-perfect fidelity, and evolution depends on that: some variation is essential, but mutations need to be rare. Memes are seldom copied exactly; their boundaries are always fuzzy, and they mutate with a wild flexibility that would be fatal in biology. The term “meme” could be applied to a suspicious cornucopia of entities, from small to large. For Dennett, the first four notes of Beethoven’s Fifth Symphony (quoted above) were “clearly” a meme, along with Homer’s Odyssey (or at least the idea of the Odyssey), the wheel, anti-Semitism and writing. “Memes have not yet found their Watson and Crick,” said Dawkins; “they even lack their Mendel.”

Yet here they are. As the arc of information flow bends toward ever greater connectivity, memes evolve faster and spread farther. Their presence is felt if not seen in herd behavior, bank runs, informational cascades and financial bubbles. Diets rise and fall in popularity, their very names becoming catchphrases—the South Beach Diet and the Atkins Diet, the Scarsdale Diet, the Cookie Diet and the Drinking Man’s Diet all replicating according to a dynamic about which the science of nutrition has nothing to say. Medical practice, too, experiences “surgical fads” and “iatro-epidemics”—epidemics caused by fashions in treatment—like the iatro-epidemic of children’s tonsillectomies that swept the United States and parts of Europe in the mid-20th century. Some false memes spread with disingenuous assistance, like the apparently unkillable notion that Barack Obama was not born in Hawaii. And in cyberspace every new social network becomes a new incubator of memes. Making the rounds of Facebook in the summer and fall of 2010 was a classic in new garb:

Sometimes I Just Want to Copy Someone Else’s Status, Word for Word, and See If They Notice.

Then it mutated again, and in January 2011 Twitter saw an outbreak of:

One day I want to copy someone’s Tweet word for word and see if they notice.

By then one of the most popular of all Twitter hashtags (the “hashtag” being a genetic—or, rather, memetic—marker) was simply the word “#Viral.”

In the competition for space in our brains and in the culture, the effective combatants are the messages. The new, oblique, looping views of genes and memes have enriched us. They give us paradoxes to write on Möbius strips. “The human world is made of stories, not people,” writes the novelist David Mitchell. “The people the stories use to tell themselves are not to be blamed.” Margaret Atwood writes: “As with all knowledge, once you knew it, you couldn’t imagine how it was that you hadn’t known it before. Like stage magic, knowledge before you knew it took place before your very eyes, but you were looking elsewhere.” Nearing death, John Updike reflected on

A life poured into words—apparent waste intended to preserve the thing consumed.

Fred Dretske, a philosopher of mind and knowledge, wrote in 1981: “In the beginning there was information. The word came later.” He added this explanation: “The transition was achieved by the development of organisms with the capacity for selectively exploiting this information in order to survive and perpetuate their kind.” Now we might add, thanks to Dawkins, that the transition was achieved by the information itself, surviving and perpetuating its kind and selectively exploiting organisms.

Most of the biosphere cannot see the infosphere; it is invisible, a parallel universe humming with ghostly inhabitants. But they are not ghosts to us—not anymore. We humans, alone among the earth’s organic creatures, live in both worlds at once. It is as though, having long coexisted with the unseen, we have begun to develop the needed extrasensory perception. We are aware of the many species of information. We name their types sardonically, as though to reassure ourselves that we understand: urban myths and zombie lies. We keep them alive in air-conditioned server farms. But we cannot own them. When a jingle lingers in our ears, or a fad turns fashion upside down, or a hoax dominates the global chatter for months and vanishes as swiftly as it came, who is master and who is slave?

James Gleick, American author, journalist, biographer, Pulitzer Prize laureate, What Defines a Meme?, Smithsonian Magazine, May 2011. (Adapted from The Information: A History, A Theory, A Flood, by James Gleick)

See also:

Susan Blackmore on memes and “temes”
☞ Adam McNamara, Can we measure memes?, Department of Psychology, University of Surrey, UK
James Gleick: Bits and Bytes - How language and information transformed humanity (video), May 19, 2011
James Gleick on information: The basis of the universe isn’t matter or energy — it’s data


The Psychology of Violence - a fascinating look at a violent act and a modern rethink of the psychology of shame and honour in preventing it

Thomas Jones, King’s Colledge To Wit (the famous duel between the Duke of Wellington and the Earl of Winchilsea), 1829

Mercutio: Will you pluck your sword out of his pilcher by the ears? Make haste, lest mine be about your ears ere it be out.
Tybalt: I am for you.
Romeo: Gentle Mercutio, put thy weapon up.
Mercutio: Come, sir, your passado!

— (W. Shakespeare, Romeo and Juliet)

James Gilligan: “Violence itself is a form of communication, it’s a way of sending a message and it does that through symbolic means through damaging the body. But if people can express themselves and communicate verbally they don’t need violence and they are much less likely to use their fists or weapons as their means of communication. They are much more likely to use words. I’m saying this on the basis of clinical experience, working with violent people. (…)

I could point out there’s as much violence in the greatest literature in history, the Greek tragedies, Shakespeare’s tragedies and so on, as there is in popular entertainment. The difference is in the great literature violence is depicted realistically for what it really is, namely a tragedy. It’s tragic, it’s not entertainment, it’s not fun, it’s not exciting, it’s a tragedy. (…)

Mercutio: I am hurt.
Romeo: The hurt cannot be much.
Mercutio: No, ’tis not so deep as a well, nor so wide as a church door, but ’tis enough: ’twill serve. A plague o’ both your houses.

— (W. Shakespeare, Romeo and Juliet)

Q: When Shakespeare wrote the tragedy Romeo and Juliet people were at each other’s throats, murder rates were at the highest in Europe’s history, and murder took on a different meaning then. For men it was a case of kill or be killed in order to save face.

Pieter Spierenburg: It was basically seen so differently because of the idea of personal honour—specifically male honour—which depended on being prepared for violence.

JG: For example, in virtually every language the words for masculinity are the same as the words for courage: in Greek, andreia; in Latin, vir. Vir is the word that means man, but it also means soldier—to be a man is to be a soldier—and it is related to the Latin word for courage, virtus, which is the root of our word virtue. In that warlike culture, courage was the prime virtue.

[From Romeo and Juliet]
Benvolio: Oh Romeo, Romeo, brave Mercutio’s dead.

PS: Murder that occurred when the conflict was about people’s reputations was easily understandable; the people around it, the community, regarded it as something that could happen. The authorities—and these were basically urban patricians who would be enmeshed in conflicts themselves—understood revenge especially; revenge was often officially legal, and the first homicides in an honourable conflict would at least be treated leniently, or punished with a fine or something.

Q: And killings that were carried out in defence of honour, were those killings seen as murder as we would conceptualise it?

PS: They would usually be regarded as honourable killings, so at least excusable as something that perhaps should not have happened. But they could easily happen when two people had a quarrel and a fight resulted from that quarrel. Murder in order to rob someone for material gain, by contrast, was viewed as dishonourable.

Q: What kinds of punishments were there for these two kinds of murder—an honourable one and a dishonourable one—were they very different?

PS: Yes, they were very different. Originally around 1300 the regular punishment for an honourable killing would be a fine or perhaps a banishment, whereas punishment for a treacherous murder would be execution.

Q: Was it a period do you think where the value of the human life was less than now?

PS: I think they still placed value on life, but of course it was also a period when all people believed in an afterlife. In terms of worldly values, they would indeed value honour, personal honour; they would value it more than life. Not only in the European Middle Ages but in many societies where you have open violence and ideas of male honour, your honour is worth more than your life, if you would have to choose.

[From Romeo and Juliet]
Lady Capulet: I beg for justice, which thou, Prince, must give. Romeo slew Tybalt; Romeo must not live.
Prince: Romeo slew him, he slew Mercutio. Who now the price of his dear blood doth owe…?

Q: In this period of history that you’re talking about the conceptualisation of honour was firmly associated with the body, you suggest. What does that mean?

PS: Yes, the body or the physical person—it was anthropologists who first discovered the vestiges of that in Mediterranean societies. It was tied up with a kind of physical imagery, especially for men: strong men are associated with fierce animals and the like. And certain parts of the body symbolically play a big role in honour—your testicles, or your nose, because your nose is the first thing that goes into the world; it goes in front of you, as it were.

Q: We’ve all heard the phrases ‘kiss and make up’ or ‘sealed with a kiss’; in fact these come from mediaeval legal ceremonies, elaborate affairs for two warring families to end a murderous feud.

Pieter Spierenburg: There existed rituals of reconciliation. Homicide was not really criminalised, but there was also the possibility of making peace between families who had been in conflict, or of reconciling after a single homicide so as to prevent further events from occurring. The two parties, the killer and the closest relative of the victim, would kiss each other on the mouth, and then other family members would also kiss each other on the mouth.

Q: These are parties to a vendetta?

Pieter Spierenburg: Yes, and that would seal the reconciliation. But it could also be done with a handshake. The reconciliation had clearly religious overtones and it was often done in a church or in a monastery. The family of the perpetrator would not only pay money but also pay for masses being said for the killed person, which again was also a very material thing because that benefited his soul, and they believed that he would be away from purgatory and into heaven more quickly if these masses were said or if he was made posthumously a monk. (…)

Q: So all of this shifts in the 16th century with an internal process that you call the spiritualisation of honour. Now what do you mean by that?

PS: Basically it means that honour moves away from being based on the body—tied to the body, based on a preparedness to defend yourself and your dependants—and you get other sources of honour: economic success, for example, or, in a later period, what they called sexual purity; being a good husband, a good head of the family. Things that people take pride in become sources of honour—a man can be honourable without being violent.

Q: What was it that triggered that shift in the internal landscape of Europe that changed the way people thought about and conducted violence?

PS: Basically it’s triggered by a broad social change involving processes of state formation—the development of modern states and the establishment of monopolies of violence, or monopolies of force—which means that stronger rulers are able to pacify a larger territory and establish a more or less stable government there. With these monopolies of force, stronger rulers establish courts and assemble elite groups, aristocrats, around them, and people at court are obliged to behave in a more civilised way, which includes a renunciation of personal violence. So you have prestigious people, elites, with a peaceful lifestyle, and that becomes a kind of cultural model that, within a few centuries, is imitated by broader groups in society.

James Gilligan: When people experience their moral universe as going between the polar opposites of shame versus honour, or we could also say shame versus pride, they are more likely to engage in serious violence.

The more people have a capacity for feelings of guilt and feelings of remorse after hurting other people, the less likely they are to kill others. I think in the history of Europe what one can see is a gradual increase in moral development from the shame/honour code to the guilt/innocence code.

PS: They no longer accepted that killings would be a part of life, as it were. In the early modern period it was only certain groups—more lower-class people of the time—who still accepted that fights such as knife fights could be honourable, and that if someone died in a knife fight by accident, that was of course regrettable, but it could happen.

Q: In his history of murder Pieter Spierenburg has tracked a big drop in homicides across Europe from the 17th to the 19th centuries as a result. But murder hasn’t left us; people continue to kill each other. And there are other constants across the centuries. Young men have always committed most murders, alcohol has been in the mix too and so has that ever-present ingredient—honour.

JG: Hitler came to power on the campaign promise to undo what he called the shame of Versailles, meaning the Versailles Peace Treaty at the end of World War I, which he felt had dishonoured Germany, or subjected Germany to national dishonour. And of course his solution, the way to restore Germany’s honour and undo the shame, was to do almost limitless violence. Even if one goes back to the first recorded wars in western history, the wars were fought because the side that became the aggressor felt it had been shamed and humiliated by the group that they were attacking.

In the Iliad Menelaus the Greek king felt shamed and humiliated because a Trojan prince by the name of Paris ran off with his wife Helen (who became Helen of Troy) so the Greek army, Menelaus’s friends and family and partners started a war against Troy and burned the city down and killed all the men and took the women into slavery and so on.

Q: And so from the honour of the collective, or national honour, to the deeply personal.

[From Rumble Fish]
Midget: Hey, Rusty James, Biff Wilcox is looking for you Rusty James.
Rusty James: I’m not hiding.
Midget: He says he’s going to kill you Rusty James.
Rusty James: Saying ain’t doing. Shit! So what’s he doing about this, what’s he doing about killing me?
Midget: The deal is man you’re supposed to meet him tonight under the arches behind the pet store at about 10 o’clock.
Rusty James: He’s comin’ alone?
BJ Jackson: I wouldn’t count on it, man.
Rusty James: Well if he’s bringin’ friends then I’m bringin’ friends, man.

JG: The emotional cause that I have found to be just about universal among people who commit serious, lethal violence is the phenomenon of being overwhelmed by feelings of shame and humiliation. I’ve worked with the most violent people our society produces, who tend to wind up in our prisons. I’ve been astonished by how I almost always get the same answer when I ask the question—why did you assault or even kill that person? And the answer I would get back, in one set of words or another but almost always meaning exactly the same thing, would be, ‘Because he disrespected me,’ or ‘He disrespected my mother,’ or my wife, my girlfriend, whatever.

They use that word ‘disrespect’ so often that they’ve abbreviated it into the slang term ‘he dissed me’, and it struck me that any time a word is used so often that people start abbreviating it, it tells you something about how central it is in their moral and emotional vocabulary. (…)

Nazi Germany. After Hitler came to power millions of German citizens who had never been violent before became murderers. The widespread potential for homicidal behaviour became very clear after the Nazis came to power. To me what that shows empirically is that the potential for homicide is perhaps not totally universal. Certainly there are some people who would rather go to their own death than to kill somebody else, but certainly the potential for homicide is much, much wider than we customarily think. I say that not to be cynical about human nature, but simply because I think it’s important for us to be aware of that, so that we will not inadvertently engage in behaviours that will bring out the potential for violence.

When people feel ashamed of, or inferior to, other people, they feel first of all a threat to the self. We frequently call shame a feeling, but it is actually the absence of a feeling—namely, the feeling of self-love, or pride—and yet that absence is one of the most painful experiences that human beings can undergo. In order to wipe out feelings of shame, people can become desperate enough to kill others or to kill themselves. So what is important here is to find ways not only to reduce the intensity and frequency with which people are shamed or feel shamed, but also to increase their capacity to tolerate feelings of shame—you know, without acting out violently as a means of reducing those feelings.

Q: Our traditional response to a violent or murderous crime has been partly to remove the problem by imprisoning a person and removing them from society but this is also a form of punishment, in other words a form of shaming the individual. Is that an effective way of responding to a violent action?

JG: Well let me say first of all I do believe that if anybody is going around killing or committing other forms of violence, going around raping or whatever, that we do have to lock them up at least for as long as they are likely to behave in that way in the future. We do have to protect the public. But I would say that’s a tragic necessity and it is not cost-free and you put your finger on one of the problems, that it can shame a person even further. However, I think that the situation can be understood slightly differently if we understand that it’s vitally important how we treat people after we’ve locked them up. We don’t need to shame them.

Q: There is the argument that if someone has committed a violent crime they should serve a term of punishment for that crime.

JG: Well my experience and, as I said, there’s a lot of evidence to support this, is that punishment far from inhibiting violence is actually the most powerful stimulant of violence that we’ve discovered yet. For example the prisoners that I have known, the most violent of them had already been punished as seriously as it is possible to punish somebody without actually killing them. I’m talking about the degree of child abuse that these men had suffered. The most violent prisoners were the survivors of their own attempted murder, usually at the hands of their own parents or the survivors of the actual murders of their closest relatives.

Now if punishment would prevent or inhibit violence these should have been the most non-violent people on earth, instead they were the most violent. And I would say that the more we punish people in prisons, as opposed to simply restraining them, the more we stimulate violence, which is why prisons have always been called schools for crime.

Q: You’ve suggested based on this thinking that the best way to respond to and reduce violence is to see it as a symptom, and in that case that the disease is shame, is humiliation—how do you treat that disease?

JG: One of the most important things I found working with violent criminals in prisons is first of all to treat everybody with respect, regardless of who they are or what they’ve done. The second thing, though, is to provide the resources that people need in order to gain self-respect and self-esteem, or in other words pride and feelings of self-worth. When I was directing the mental health services for the Commonwealth of Massachusetts in the United States we did a study to find out what program in the prisons had been most effective in preventing re-offending, or recidivism, after prisoners left the prison. And we found one program that had been 100% successful over a 25-year period, with not one of these individuals returning to prison because of a new crime.

And that program was the prisoners getting a college degree while in prison. The professors from Boston University had been donating their time to teach college credit courses and several hundred inmates had gotten a college degree while in prison, then left, went back into the community and did not return because of a new crime. I mean I think there are many reasons why that would have that effect but the most important I think emotionally is that education is one of the most direct ways by which people raise their level of self-esteem and feelings of self worth. When you gain knowledge and skills that you can respect in yourself and that other people respect you’re more likely to be treated with honour by other people and to have the feelings of pride or self worth that protect you against being overwhelmed by shame to the degree that stimulates violence.

Bandy Lee: We at one point felt that human nature was not malleable and that somehow the legacy of our ancestors left us with a nature that is inevitably violence prone and something that we cannot easily correct. The conception has been that there must be something fixed in the brain, that they must have been born with this condition, that it was either a genetic or neurological defect or some kind of faulty wiring that has caused individuals to become violent. What we’re finding is that a lot of the genetics and the neurobiology that even has a remote association to violence is actually shaped largely by environment.

Q: Some people are obviously more prone to violence because they have a personality disorder or neurological affliction that makes them impulsive and that’s really a subject for another show but the brain always sits in a social environment, and that was the focus of an innovative project called Resolve to Stop the Violence led by Yale University psychiatrists Professor Bandy Lee and James Gilligan.

BL: Because the public health approach is to look at things from the very basic level of prevention. In fact we’re going very far upstream, in just the way that cleaning up the sewage system and personal hygiene habits took care of a lot of diseases over the course of a large part of the 19th century. We are finding that preventive measures are far more effective than trying to treat the problems after their occurrence, which is what a lot of physicians have done; even trying to prevent suicides or homicides immediately before they happen turns out to be very difficult to do. (…)

JG: One of the more interesting ones was a program that was designed to deconstruct and reconstruct what we call the male role belief system. That is the whole set of assumptions and beliefs and definitions, to which almost all men in our society are exposed in the course of growing up, as to how you define masculinity, what men have a right to expect from women, even what they have a right to expect from other men, and what they need to do to prove that they are men. (…)

BL: (…) So at this moment of fatal peril, instead of reacting violently in order to defend and re-affirm this hit man, they would give themselves a moment to take a breather and to engage in social ways. And having the experience of a pro-social way of interacting, of not having to fear one’s peers—and actually they had a mentor system whereby those who were in the program for longer periods would act as mentors to those who were just coming in to the program, were able to teach newcomers that they didn’t have to act violently in order to be accepted, in order to be safe, and this was quite a surprise for those entering into the program.

JG: What came out of this was their gaining an awareness that they had been making the assumption that the human world is divided into people who are superior and people who are inferior, and in that distinction men were supposed to be superior and women were supposed to be inferior. And not only that, a real man would be superior to other men.

Now this is a recipe for violence. But the moment they would fight against it, the individual would feel his masculinity was being challenged, and to defend his masculinity he would have to resort to violence. What was amazing to me was how quickly they realised and got the point, felt they had been brainwashed by society, and immediately wanted to start educating the new inmates who were coming in after them. We trained them to lead these groups themselves, sort of like in Alcoholics Anonymous, where people who have the problem sometimes turn out to be the best therapists for others who have the problem.

Q: And what about recidivism rates, were those reduced?

JG: The level of re-committing a violent crime was 83% lower in this group than in the control group. The rate of in-house violence actually dropped to zero in the experimental group. And what we were especially interested in was that the reduction in violence continued, not at the 100% level but close to it, once they left the gaol. (…)

I think we need to educate our children and our adults that violence is trivialised when it’s treated simply as entertainment. I call that the pornography of violence. If violence is understood for what it really is, which is the deepest human tragedy, then I think people might become more sympathetic to supporting the changes in our culture that actually would succeed in reducing the amount of violence that we experience. I think that we’re all becoming much more sensitised to the importance of social, political, economic and cultural factors as factors that either stimulate violence or inhibit and prevent it.”

Murder in mind, All In The Mind, ABC Radio National, 9 April 2011.

James Gilligan, Clinical Professor of Psychiatry, Adjunct Professor in the School of Law, and Collegiate Professor in the School of Arts and Science, New York University

Pieter Spierenburg, Professor of Historical Criminology, Erasmus University,
The Netherlands. He has published on executions, prisons, violence, and the culture of early modern Europe.

Bandy Lee, Assistant Clinical Professor of Psychiatry, Yale University, USA

See also:

☞ Charles K. Bellinger, Theories on the Psychology of Violence: An Address to the Association of Muslim Social Scientists, University of Texas at Arlington
Scott Atran on Why War Is Never Really Rational, Lapidarium
The Philosophy of War, Internet Encyclopedia of Philosophy
Steven Pinker on the myth of violence, TED video, 2007
☞ Pauline Grosjean, A History of Violence: The Culture of Honor as a Determinant of Homicide in the US South, The University of New South Wales, August 25, 2011
Emiliano Salinas: A civil response to violence, Nov 2010 (video)
Colman McCarthy, Teaching Peace, Hobart and William Smith Colleges, August 30, 2011
Steven Pinker on the History and decline of Violence
Violence tag on Lapidarium notes


Alex Rosenberg on the philosophy of science


"Philosophy studies the fundamental nature of existence, of man, and of man’s relationship to existence. In the realm of cognition, the special sciences are the trees, but philosophy is the soil which makes the forest possible." (Ayn Rand, Russian-American novelist, philosopher, playwright, and screenwriter)

The history of western philosophy is the history of the philosophy of science, and the first philosophers of science were the Greek philosophers who lived before and after Plato and Aristotle and simultaneously invented philosophy and science. Rationalism and Empiricism, the two dominant philosophies of the last 400 years, are almost entirely devoted to problems raised by science as it developed from Newton’s day to Einstein’s. So, the list of key figures is just the Who’s Who of western philosophy. As for the major figures working in the recent past whose work students should have a feel for, I’d start with Quine first of all, and then add Putnam, David Lewis, Popper, Salmon, and Kuhn, though he was a historian, not a philosopher. (…)

Science is the most reliable route to knowledge of the nature of reality. But in our culture and in others, this claim is persistently challenged by those who embrace one or another religious or other nonscientific source that claims to provide real understanding. (…)

Q: What is the relationship between philosophy and science, and how do philosophers and scientists view the field overall? (There’s a quote attributed to Richard Feynman, and repeated by Steven Weinberg, which one could interpret in a couple different ways: “Philosophy of physics is about as useful to physicists as ornithology is to birds.” What do you make of this comment?)

A.R.: Feynman was a wag, and a great philosopher of science himself (check out The Character of Physical Law, a work of pure philosophy; Weinberg’s Dreams of a Final Theory has a lot of good philosophy of science in it too). Of course the answer to Feynman’s jape is that physicists know about as much about the philosophy of physics as birds know about ornithology! (I wish I had said that, but it was some funnier philosopher of science than me.)

Einstein famously made it clear that it was by reading philosophers of science like Leibniz and Berkeley that he was led to the special theory of relativity. More examples of the relevance of the philosophy of science to science are provided by the very fruitful interaction of evolutionary biologists and philosophers of biology working at the intersection of their two disciplines on the nature of Darwinian natural selection, the levels at which it operates, and its applicability outside its original domain—especially in the human sciences.

There is no reliable generalization about how scientists and philosophers view the field of philosophy of science overall. But that is no more surprising than the fact that scientists and philosophers don’t even completely agree among themselves about how to view their own fields. (…)

I think philosophy of science is indispensable to philosophy as a whole, including the normative parts of philosophy—ethics and political philosophy. (…)

Q: How does a refusal within the scientific community to engage with philosophy impact the scientific disciplines? What do researchers in molecular biology, virology, physics, etc. stand to lose by not paying attention to the field?

A.R.: Insofar as philosophy is the custodian of science’s unanswered questions, or at least their deeper ones, it’s pretty obvious that doing philosophy of science helps delineate the boundaries of the scientists’ own disciplines and the boundaries, if any, of science as a whole. Understanding how science operates when faced with the special obstacles, standards, temptations of other disciplines, is bound to have an impact on how you pursue your own discipline and subdiscipline.

Finally, I think that studying the philosophy of science should give scientists more confidence in the scope, power and grounds of their profession’s abilities to answer questions, instead of, as so many people suppose, sapping the confidence of scientists in their enterprise. (…)

Q: In your opinion, what philosopher of science has had the greatest impact on science in the last 100 years?

A.R.: I think the answer to that question has to be W.V.O. Quine. By overthrowing the terms in which rationalists and empiricists debated epistemological questions and prioritized them over metaphysical ones, he reopened a region of inquiry that had been closed off by Logical Positivism for half a century. The excitement of our field is due as much to his work as to any other philosopher’s.

Q: What sort of challenges does the field face now and what challenges do you think will be most salient in the future?

A.R.: At the research frontiers of our field one big set of problems is in the philosophy of physics—trying to make sense of quantum mechanics—a theory that combines the greatest imaginable accuracy and breadth of predictive and explanatory precision with complete unintelligibility. Add the testability problems and the multiverse possibilities of string theory, and it’s obvious why the philosophy of physics is ‘hot.’ Another area is my own special interest, the philosophy of biology, where questions arise about extending Darwinian theory beyond its original area of application—the evolution of lineages of individual organisms—to other levels of organization and even other domains, such as human behavior, cognition, morality, and culture generally. Finally, the financial crisis has generated increased interest in the philosophy of science’s application to economics and the questions about its scientific prospects. Developments in each of these sciences and their cognate fields (for econ it would be cognitive social psychology and evolutionary game theory) will set the agenda of the philosophy of science in the near future, I think.”

Alexander Rosenberg, American philosopher, and the R. Taylor Cole Professor of Philosophy at Duke University, What exactly is philosophy of science – and why does it matter?, Routledge Philosophy, 2011. (Illustration source)

See also:
Philosophy of Science Resources
Kant’s Philosophy of Science, Stanford Encyclopedia of Philosophy


When is it meaningful to say that one culture is more advanced than another?

                                  The Fractal Pattern of an African Village

"Is there a way to say that one culture is more advanced than another in a way that is not racist, ethnocentric, or uselessly broad? Some basic ideas in biology and thermodynamics may help.

For example, consider Kleiber’s Law. When it was first developed by Max Kleiber in the 1930s, it was used to describe animal metabolism. It says that the bigger the animal, the more metabolically efficient it is. An elephant that is 10,000 times the mass of a guinea pig will not consume 10,000 times as much energy. Rather, it will consume only 1,000 times as much energy. Pound for pound, it is a more efficient energy user.

This isn’t surprising. But the law also applies to cities, which is very surprising. When a city doubles in size, its energy consumption does not double; it grows by only about 85%, so the larger city is more efficient per capita. The fact that the law works across completely different entities - plants, guinea pigs, elephants, cities - makes one wonder if it applies to all organized entities. To skip a number of qualifiers and exceptions, here’s the question I’m asking: Could it be said that the more metabolically efficient a society is, the more advanced it is?
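The arithmetic behind these scaling claims is easy to check. A minimal sketch in Python, assuming the commonly cited exponents (3/4 for Kleiber’s Law; roughly 0.85 for urban infrastructure and energy use):

```python
def scaled_consumption(size_ratio: float, exponent: float) -> float:
    """Power-law scaling: consumption grows as size_ratio ** exponent.

    An exponent below 1 means sublinear scaling, i.e. economies of scale:
    the bigger entity consumes less per unit of size.
    """
    return size_ratio ** exponent

# Kleiber's Law: metabolic rate ~ mass^(3/4).
# An animal 10,000x the mass consumes only 10,000^0.75 = 1,000x the energy.
print(scaled_consumption(10_000, 0.75))       # 1000.0

# Urban scaling exponent often reported near 0.85: doubling a city's size
# raises consumption by a factor of 2^0.85, i.e. ~80-85% more energy
# rather than twice as much.
print(round(scaled_consumption(2, 0.85), 2))  # 1.8
```

The point is simply that any exponent below 1 yields the pattern described above: total consumption grows with size, but per-capita consumption falls.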

To be sure, you will puzzle over the way I am using the word “advanced.” Set that aside for a moment and consider another example. Geoffrey West of the Santa Fe Institute has shown that with each doubling of a city’s population, the inhabitants become 15% wealthier, more productive, and more innovative. If one regards a city as a distinct culture, then one could say that larger cultures are inherently more advanced than smaller ones - assuming that by “advanced” one means greater productivity per capita. (The advancement is not always to the comfort of its inhabitants; West shows that crime also goes up by 15%. That is, the criminals become more productive, too.)

(For more on Kleiber’s Law and West’s Law, see Jonah Lehrer’s A Physicist Solves the City and Steven Johnson’s Where Good Ideas Come From, pp. 7-10.) (…)

As Ong explains, preliterate thought is aggregative rather than analytic, situational rather than abstract. It is very different from literate thought - but literate people still have preliterate modes of aggregation and situational thinking available to them. We can understand preliterate minds with a little bit of imaginative effort.

In short, modern minds include all of the fundamental elements of medieval minds. (Again, I don’t mean that they include medieval skills; medieval people knew far more about herbs than just about anyone today. But we retain the idea of herbology and could easily resurrect it.)

As Kevin Kelly says in What Technology Wants, nothing invented is ever abandoned. It is always carried forward. You may have to dust off some old memories and study up a bit, but you’re rediscovering rather than learning from scratch. Our medieval visitor has no such advantage.

So I want to suggest that a culture can be said to be more advanced than another if it includes most or all of the other culture’s basic elements. That gives its individuals a superior edge in understanding and communication.

I got this idea of inclusiveness, by the way, from Ken Wilber's book Sex, Ecology, Spirituality, in which he writes, “Each emergent holon transcends but includes its predecessors” (p. 59). Wilber’s point is that evolution always builds on top of what went before, incorporating preceding elements while also transcending them.

To be sure, this doesn’t mean that a more advanced culture will treat a less advanced one with decency. The Europeans used their technological superiority over Native Americans to wipe most of them out. But they at least had a mental framework with which to categorize the peoples they met, however unjustly. Europeans had spent many thousands of years living in preliterate, tribal cultures and had that experience to draw upon. (…)

It focuses on communication, the key aspect of living in a society that makes it worthwhile (or not). It’s a more humanly meaningful measure than variables like population, information content, and the number of available products.

Incidentally, it also makes diverse societies almost automatically more advanced than monocultural ones, all else equal. They simply contain more.

Non-ethnocentric, communication-oriented; that sounds pretty good, doesn’t it? But there is a hidden assumption in this reasoning that I will now make explicit. I am assuming that cultures follow a universal trajectory of development. Here’s one possible trajectory, for example: nomadic clans —> agricultural villages —> feudal towns —> city-states —> nation-states —> world-states. In this schema, each element includes all of the elements of the previous ones.

This particular trajectory seems pretty reasonable, but that’s because it’s both highly general and limited to one dimension of progress. The question is, is there a single, cross-cultural trajectory that specifies scientific, technological, moral, artistic, informational, and economic progress?

I don’t know. To be sure, counterexamples would seem to abound. Contingency is everywhere one looks. Arabs and Jews focused on abstract art while Europeans focused on representational art. Chinese medicine is holistic while Western medicine is reductive. These are generalizations, of course, but they are cases where neither includes the other.

But perhaps one just has to look at history with a broader focus. Do all cultures go through similar moral stages? Do they develop essentially similar social institutions for art? Will all cultures, given enough time, develop the computer? Is there a logic to history? (…)

We could argue, based on thermodynamics and the “inclusion” hypothesis, that there is a broad logic to history independent of biochemistry and accidents of culture. If so, then we could expect aliens to understand us, much as we’d understand our 13th-century Englishman. We probably couldn’t understand them, given our newness as a technological culture. But if they had a past that included tribalism, feudalism, nation-states, and world-states, then they’d have some idea of what we’re about. And maybe, just maybe, we’d learn of worthy futures to which we might aspire.”

Michael Chorost, American writer and teacher, Ph.D, Is There a Logic to History?, Psychology Today, April 11, 2011. (Illustration source)


John Shotter on encounters with ‘Other’ - from inner mental representation to dialogical social practices

M. C. Escher, Relativity (July 1953)

"It is then that the reader asks that crucial question, ‘What’s it all about?’ But what ‘it’ is, is not the actual text… but the text the reader has constructed under its sway. And that is why the actual text needs the subjunctivity that makes it possible for a reader to create a world of his [or her] own." Jerome Seymour Bruner, Actual Minds, Possible Worlds. Cambridge, MA: Harvard University Press, 1986, p. 37.

"Only in the stream of thought and life do words have meaning." Ludwig Wittgenstein, Zettel, G.E.M. Anscombe and G.H. von Wright (Eds.). Oxford: Blackwell, 1981, no. 173.

One of our tasks in understanding an Other, is to do justice to the uniqueness of their otherness. But this is not easy, for, as we shall see, it is in how they express themselves in dialogically structured events that occur between us only in unique, fleeting moments, that we can grasp who and what they are. (…)

In his review of George Steiner's essay ('A new meaning of meaning,' in TLS, 8th Nov, 1985), he comments that such a stance in art, is

"a belief that meaning (or meanings) lies in the work of art, embodied, incarnate, a real presence… It is a faith in meaning incarnate in the work of art that captures the ‘immensity of the commonplace’, that changes our very construction of reality: ‘poplars are on fire after Van Gogh’… The literary artist, it would follow from this argument, becomes an agent in the evolution of mind - but not without the co-option of the reader as his fellow author.”

Crossing boundaries

Almost all of us are now members of more than a single active culture. Thus the experience of having to ‘cross’ cultural boundaries, of having continually to ‘shift one’s stance’, of having to view one’s surroundings, fleeting aspect by fleeting aspect rather than perspectively (Wittgenstein, 1953), to make sense of what is happening around us while being ourselves in ‘motion’, so to speak, has now become a ‘normal’ activity. But what, as academics and intellectuals, must we do in the new dialogical, aspectival circumstances in which we now live, to pay attention to ‘the practices of Self’? Can we just apply our old and well tried methods to this new topic of study? Or must we, if we are to grasp the nature of such practices, invent some new methods, act in some new and different ways? (…)

Milan Kundera's comments - to do with us only very recently coming to a realization of the strangeness of the ordinary, the strangeness of the present moment in all its concreteness - are of crucial importance to us. For presently, as he points out:

"When we analyze a reality, we analyze it as it appears in our mind, in memory. We know reality only in the past tense. We do not know it as it is in the present, in the moment when it’s happening, when it is. The present moment is unlike the memory of it. Remembering is not the negative of forgetting. Remembering is a form of forgetting".

Similarly, Jerome Seymour Bruner (1986, p.13) remarks that what he calls the paradigmatic or logico-scientific mode of thought "seeks to transcend the particular by higher and higher reaching for abstraction, and in the end disclaims in principle any explanatory value at all where the particular is concerned".

What Kundera and Bruner are reminding us of here is not only that our current intellectual methods are monological and individualistic, and that as moderns we are only really fully alive when set over against our surroundings all alone, but that we also import into our accounts of what happens around us mythic abstractions of our own making. Positioning ourselves as if observers from afar of someone playing a back-and-forth, turn-taking game - tennis, say - we fail to realize that we are the other players in the game, that others act in response to how we act. Lacking any intellectual grasp of the relation of their activity to ours and to the circumstances we share with them, we try to explain what we observe of their activities as if originating solely from within them as self-contained individuals. Ignoring the ‘calls’ of their surrounding circumstances to which they ‘answer’, we invent mythic entities located inside them somewhere that, theoretically, we suppose cause them to act as they do (Wittgenstein, 1953), and set out to prove our theories true. (…)

As I see it, only if we institute a third, dialogical revolution of a kind that calls all our previous methods into question, and suggests wholly new intellectual practices and institutions to us, can we begin to fashion forms of inquiry that will do justice to the uniqueness of the being of Others. (…)

Psychology technicalized and demoralized

In attempting to bring ‘mind’ back into psychology, Bruner didn’t want just to add “a little mentalism” to behaviorism, but to do something much more profound: he wanted to discover and describe "what meaning-making processes were implicated" in people’s encounters with the world; his aim was “to prompt psychology to joining forces with its sister interpretative disciplines in the humanities and the social sciences”.

Indeed, although he admits that "we were slow to fully grasp what the emergence of culture meant for human adaptation and for human functioning" - to contrast with what he calls computationalism - he goes on to outline, in this and in his latest book, The Culture of Education, a "second approach to the nature of mind - call it culturalism. It takes its inspiration from the evolutionary fact that mind could not exist save for culture." As he remarks in Acts of Meaning:

"What was obvious from the start was perhaps too obvious to be fully appreciated, at least by us psychologists who by habit and by tradition think in rather individualist terms. The symbolic systems that individuals used in constructing meaning were systems that were already in place, already ‘there’, deeply entrenched in culture and language. They constituted a very special kind of communal tool kit whose tools, once used, made the user a reflection of the community… As Clifford Geertz puts it, without the constituting role of culture we are ‘unworkable monstrosities… incomplete or unfinished animals who complete or finish ourselves through culture.”

The ‘movements’ at work in our dialogic encounters with an Other

To refer to issues he has brought to our attention, let me now return to Bruner’s account of narrative modes of thought in his ‘Two modes…' (…) In the story, Marco Polo tells Kublai Khan of a stone bridge, describing it stone by stone. But Kublai Khan gets impatient and seeks what some of us would now call ‘the bottom line’, and asks what supports the stones. ‘The bridge is not supported by one stone or another,’ Marco answers, ‘but by the line of the arch that they form.’ Then ‘Why do you speak to me of the stones?’ Kublai Khan demands. ‘Without stones there is no arch,’ Polo replies - for the arch is ‘in’ the relations between the stones. And as Bruner goes on to point out, in their reading of the story, the reader goes from stones to arches to the significance of arches to some broader reality - goes back and forth between them in attempting finally to construct a sense of the story, its form, its meaning. Sometimes in reading stories, we can attend from the relations among their particularities to something much more general. But, what kind of textual structures allow or invite such a move? How is the sense of a more general significance achieved? And ‘in’ what does that more general significance consist?

It is only in our reading of texts of a narrative kind, Bruner maintains, that we can encounter others or othernesses that are strange and novel to us. In reading such texts, individuals begin to construct what Bruner calls a ‘virtual text’ of their own - where it is as if readers

were embarking on a journey without maps… [Where] in time, the new journey becomes a thing in itself, however much its initial shape was borrowed from the past. The virtual text becomes a story of its own, its very strangeness only a contrast with the reader’s sense of the ordinary… [This] is why the actual text needs the subjunctivity that makes it possible for a reader to create a world of his [or her] own' (Bruner, 1986, pp.36-37).

To repeat: It is the way in which such texts ‘subjunctivize reality’ - or traffic ‘in human possibilities rather than settled certainties,’ as he puts it (Bruner, 1986, p.26) - that makes the co-creation of such virtual worlds between authors and their readers possible. (…)

As he points out, the existence of conventions and maxims that are constitutive of a normative background to our activities ‘provides us with the means of violating them for purposes of meaning more than we say or for meaning other than what we say (as in irony, for example) or for meaning less than we say’ (Bruner, 1986, p.26).

This background, and the possibility of us deviating from it, is crucial to his whole approach. Indeed, he emphasizes it again in Acts of Meaning, where he comments on his efforts to describe a people’s ‘folk psychology’ as follows: ‘I wanted to show how human beings, in interacting with one another, form a sense of the canonical and ordinary as a background against which to interpret and give narrative meaning to breaches in and deviations from ‘normal’ states of the human condition' (Bruner, 1990, p.67).

It is the very creation of indeterminacy and uncertainty by the devices people use in their narrative forms of thought and talk, that makes it possible for them to co-create unique meanings between them as their dialogical activities unfold. ‘To mean in this way,’ suggests Bruner, ‘by the use of such intended violations… is to create ‘gaps’ and to recruit presuppositions to fill them.’ Indeed, our own unique responses to our own unique circumstances are ‘carried’ in the subtle variations in how we put these constitutive forms of response to use, as we bodily react, and thus relate ourselves, to what goes on around us. This is what it is for us to perform meaning. And we ‘show’ our understanding of such ‘performed meanings’ in our ways of ‘going on’ with the others around us in practice - to put the matter in Wittgenstein’s (1953) terms. I shall call the kinds of meaning involved here, that are only intelligible to us against an already existing background of the activities constitutive of our current forms of life, joint, first-time - or only ‘once occurrent’ (Bakhtin, 1993, p.2) - variational meanings, that are expressive of the ‘world’ of a unique ‘it’ or ‘I’. (…)

In exploring the problem of how it is possible to perform meaning in practice, of how, say, the process of intending might work, Wittgenstein suggests that we might feel tempted to say that such a process ‘can do what it is supposed to only by containing an extremely faithful picture of what it intends.’ But having said this much, he goes on to point out:

"But that too does not go far enough, because a picture, whatever it may be, can be variously interpreted; hence this picture too in its turn stands isolated. When one has the picture in view by itself it is suddenly dead, and it is as if something had been taken away from it, which had given it life before… it remains isolated, it does not point outside itself to a reality beyond.

Now one says: ‘Of course, it is not the picture itself that intends, but we who use it to intend something’. But if this intending, this meaning, is in turn something that is done with the picture, then I cannot see why it has to involve a human being. The process of digestion can also be studied as a chemical process, independently of whether it takes place in a living being. We want to say ‘Meaning is surely essentially a mental process, a process of conscious life, not of dead matter’…

And now it seems to us as if intending could not be any process at all, of any kind whatever. - For what we are dissatisfied with here is the grammar of process, not with the specific kind of process. - It could be said: we should call any process ‘dead’ in this sense" (no. 236). ‘It might almost be said,’ he adds, ‘Meaning moves, whereas a process stands still’.

Meaning as movement

In other words, instead of meaning being a cognitive process of statically ‘picturing’ something, Wittgenstein sees it here in a quite different light: as part of an ongoing, dynamic, interactive process in which people as embodied agents are continuously reacting in a living, practical way, both to each other and to their circumstances.

Thus, even as a person is speaking, the bodily and facial responses of the others around them to what they say, are acting back upon them to influence them moment by moment in their ‘shaping’ of their talk as it unfolds. In such circumstances as these, we are inevitably doing much more than merely talking ‘about’ something; we are continuously living out changing ‘ways of relating’ ourselves to our circumstances, of our own creation; or as Wittgenstein (1953) would say, we are creating certain, particular ‘forms of life’.

Thus, in practice, as we tack back and forth between the particular words of a strange, newly encountered, meaning-indeterminate story or text, and the whole of the already ongoing, unsayable, dynamic cultural history in which we all are, in different ways, to some extent, immersed, we perform meaning. In so doing, in ‘bridging the gaps’ with the responsive movements we make as we read, we creatively ‘move’ over what Bruner (1986) calls the ‘landscapes’ of a ‘virtual text.’ And what is general in our reading, what we can ‘carry over’ from what we do as we read into the doing of other activities, are these responsive ‘ways of moving’ of our own spontaneous creation - ways of ‘orchestrating’ our moment by moment changing relations to our past, our future, the others around us, our immediate physical surroundings, authorities, our cultural history, our dreams for the future, and so on, relating ourselves in these different directions perceptually, cognitively, in action, in memory, and so on (Vygotsky, 1978, 1986). We can ‘carry over’ into new spheres of activity what is ‘carried in’ our initial ways of bodily responding to a text in the first place.

Viewed in this way, as calling out from us possibly quite new, first-time responsive movements, rather than as being about something in the world, such meaning indeterminate texts can be seen as a special part of the world, an aspect of our surroundings to which we cannot not - if we are to grasp their meaning for us - relate ourselves in a living way. So, although such texts may seem to be not too different from those presented as being ‘about’ something - that is, from texts with a representational-referential meaning that ‘pictures’ a state of affairs in the world - their meaning cannot be found in such a picturing. We must relate ourselves to them in a quite different way.

For their meaning is of a much more practical, pre-theoretical, pre-conceptual kind: to do with providing us with a way or style of knowing, rather than with a knowledge or ‘picture’ of something in particular. To put it another way: in its reading, such a text is exemplary for, not of, a certain way of going on. It is exemplary for a new way of relating ourselves to our circumstances not before followed; it provides us with new poetic images through which, possibly, to make sense of things, not images or representations of things already in existence.

Concerning the creative effects of certain styles or genres of writing on us, or works of art in general, Susan Sontag (1962) has written:

To become involved with a work of art entails, to be sure, the experience of detaching oneself from the world. But the work of art itself is also a vibrant, magical, and exemplary object which returns us to the world in some way more open and enriched…

Raymond Bayer has written: ‘What each and every aesthetic object imposes on us, in appropriate rhythms, is a unique and singular formula for the flow of our energy… Every work of art embodies a principle of proceeding, of stopping, of scanning; an image of energy or relaxation, the imprint of a caressing or destroying hand which is [the artist’s] alone’. We can call this the physiognomy of the work, or its rhythm, or, as I would rather do, its style (p.28).

Where the function of such a ‘moving’ form of communication is not only to make a unique other or otherness we have not previously witnessed present to us for the very first time, but to provide us with the opportunity to embody the new ‘way of going on’ that only it can call out from us. But to do this, to come to embody its ‘way’, we must encounter and witness its distinct nature in all its complex detail. If we turn too quickly merely to its explanation, not only do we miss what is new that it can teach us, but the turn is pointless: for, literally, we do not yet know what we are talking about.

As this stance toward meaning as living, only once occurrent, joint, variational movement, is still very unfamiliar to us, let me explore its nature yet a little more. Remarking further about the living nature of meaning, Wittgenstein comments that he wants to say: ‘When we mean something, it’s like going up to someone, it’s not having a dead picture (of any kind). We go up to the thing we mean’ (Wittgenstein, 1953, no.455).

For instance, as we view, say, a picture such as Van Gogh's Sunflowers, we can enter into an extended, unfolding, living relation with it, one that ebbs and flows, that vacillates and oscillates, as we respond to it in different ways. What we sense, we sense from inside our relations to it: ‘It is as if at first we looked at a picture so as to enter into it and the objects in it surrounded us like real ones; and then we stepped back, and were now outside it; we saw the frame and the picture was a painted surface. In this way, when we intend, we are surrounded by our intention’s pictures, and we are inside them' (1981, no.233).

Indeed, he says elsewhere: ‘It often strikes us as if in grasping meaning the mind made small rudimentary movements, like someone irresolute who does not know which way to go - i.e., it tentatively reviews the field of possible applications’ (Wittgenstein, 1981, no.33).

The novelist John Berger (1979) has also written about the act of writing in a similar fashion:

The act of writing is nothing except the act of approaching the experience written about; just as, hopefully, the act of reading the written text is a comparable act of approach. To approach experience, however, is not like approaching a house. ‘Life’, as the Russian proverb says, ‘is not a walk across an open field’. Experience is indivisible and continuous, at least within a single lifetime and perhaps over many lifetimes. I never have the impression that my experience is entirely my own, and it often seems to me that it preceded me. In any case experience folds back on itself, refers backwards and forwards to itself through the referents of hope and fear; and, by the use of metaphor, which is at the origin of language, it is continually comparing like with unlike, what is small with what is large, what is near with what is distant. And the act of approaching a given moment of experience involves both scrutiny (closeness) and the capacity to connect (distance).

The movement of writing resembles that of a shuttle on a loom: repeatedly it approaches and withdraws, closes in and takes its distance. Unlike a shuttle, however, it is not fixed to a static frame. As the movement of writing repeats itself, its intimacy with the experience increases. Finally, if one is fortunate, meaning is the fruit of this intimacy” (John Berger, 1979, p.6, my emphases).


Describing (and explaining?) the dialogical: ‘the difficulty here is: to stop’

Although such a way of looking for the fleeting, only once occurrent details of our interactions is not easy to implement, it is the crux of the matter. For, as he puts it, the problems we face are not empirical problems to be solved by giving explanations: ‘they are solved, rather, by looking into the workings of our language, and that in such a way as to make us recognize those workings: in spite of an urge to misunderstand them. The problems are solved, not by giving new information, but by arranging what we have always known’ (no.109) - but which, so far, has passed us by unnoticed in our everyday dealings with each other.

Thus, as Wittgenstein (1953) sees it, although not easily accomplished, the task is not to imagine, and then to empirically investigate, possible ‘mechanisms’ within us responsible for us being able to mean things to each other, but to describe how we in fact do do it in practice. Indeed, to repeat Kundera’s (1993) remark above: ‘an event as we imagine it hasn’t much to do with the same event as it is when it happens’ (p.139) - for we can only theorize events as distinct upon their completion, after they have made one or another kind of sense, once they have an already achieved meaning. Something incomplete, something that we are still in the middle of, something that we are still involved in or ‘inside of’, cannot properly be described in a theoretically distinct way.

Thus, if we still nonetheless attempt to do so, we will miss out - or better, we will tend to overlook - many of its most significant details; and in so doing, we will find ourselves puzzled as to how we do in fact manage the doing of meaning between us. There must - we will say to each other - be something else that we have missed, something hidden in what we do when we mean things to each other, that needs discovering and explaining. But, suggests Wittgenstein (1953), in asking and answering his own question: ‘How do sentences do it [i.e., manage to represent something]? - Don’t you know? For nothing is hidden’. (…) ‘There are countless kinds: countless different kinds of use of what we call ‘symbols’, ‘words’, ‘sentences’. And this multiplicity is not something fixed, given once for all; but types of language, new language-games, as we say, come into existence, and others become obsolete and get forgotten’.

Once we go beyond the confines of established language-games, we are once again in the realm of the indeterminate, where our meanings are ambiguous and can only be made determinate by us ‘playing them out’, so to speak, within a practice. Our language-games cannot themselves be explained, as they are the bases in terms of which all our explanations in fact work as explanations. (…)

Instead of a theoretical, explanatory account of their workings, we need first to come to a practical understanding of the joint, dialogical nature of our lives together. And if we are to do that, if we are to see, as Bruner puts it, the ways in which we ‘violate’ the norms of our institutions, then, we also must violate the norms of our institutions.”

John Shotter, Emeritus Professor of Communication in the Department of Communication, University of New Hampshire, Towards a third revolution in psychology: From inner mental representation to dialogical social practices

See also:

The Relativity of Truth - a brief résumé, Lapidarium
Map–territory relation - a brief résumé, Lapidarium


'We' vs 'Others': Russell Jacoby on why we should fear our neighbors more than strangers

Titian, “Cain and Abel”, Venice

"Orientalism was ultimately a political vision of reality whose structure promoted the difference between the familiar (Europe, the West, ‘us’) and the strange (the Orient, the East, ‘them’)." (Edward Said, Orientalism, 1978)

"Academics are thrilled with the "other" and the vagaries of how we represent the foreign. By profession, anthropologists are visitors from afar. We are outsiders, writes an anthropologist, "seeking to understand unfamiliar cultures." Humanists and social theorists also have fallen in love with the "other." A recent paper by the literary critic Toril Moi is titled "Literature, Philosophy, and the Question of the Other." In a recent issue of Signs, a philosopher writes about “Occidental Dreams: Orientalism and History in ‘The Second Sex.’”

The romance with the “other,” the Orient, and the stranger, however, diverts attention from something less sexy: the familiar. For those concerned with strife and violence in the world, like Said, the latter may, in fact, be more critical than the strange and the foreign. If the Lebanese Civil War, which lasted 15 years, can highlight something about how the West represents the East, it can also foreground a neglected truth: The most decisive antagonisms and misunderstandings take place within a community. The history of hatred and violence is, to a surprising degree, a history of brother against brother, not brother against stranger. From Cain and Abel to the religious wars of the 16th and 17th centuries and the civil wars of our own age, it is not so often strangers who elicit hatred, but neighbors.

This observation contradicts both common sense and the collective wisdom of teachers and preachers, who declaim that we fear—sometimes for good reason—the unknown and dangerous stranger. Citizens and scholars alike believe that enemies lurk in the street and beyond the street, where we confront a “clash of civilizations” with foreigners who challenge our way of life.

The truth is more unsettling. From assault to genocide, from assassination to massacre, violence usually emerges from inside the fold rather than outside it. (…)

We may obsess about strangers piloting airplanes into our buildings, but in the United States in any year, roughly five times the number of those killed in the World Trade Center are murdered on the streets or inside their own homes and offices. These regular losses remind us that most criminal violence takes place between people who know each other. Cautious citizens may push for better street lighting, but they are much more likely to be assaulted, even killed, in the light of the kitchen by someone familiar than in a parking garage by a stranger. Like, not unlike, prompts violence.

Civil wars are generally more savage, and bear more lasting consequences, than wars between countries. Many more people died in the American Civil War—at a time when the population was a tenth of what it is today—than in any other American conflict, and its long-term effects probably surpass those of the others. Major bloodlettings of the 20th century—hundreds of thousands to millions of deaths—occurred in civil wars such as the Russian Civil War, the Chinese Civil Wars of 1927-37 and 1945-49, and the Spanish Civil War. More Russian lives were lost in the Russian Civil War that followed World War I than in the Great War itself, for instance.

But who cares about the Russian Civil War? A thousand books and courses dwell on World War I, but few on the Russian Civil War that emerged from it. That war, with its fluid battle lines, uncertain alliances, and clouded beginning, seems too murky. The stew of hostilities is typical of civil wars, however. With some notable exceptions, modern civil wars resist the clear categories of interstate wars. The edges are blurred. Revenge often trumps ideology and politics.

Yet civil strife increasingly characterizes the contemporary world. “Most wars are now civil wars,” announces the first sentence of a World Bank publication. Not only are there more civil wars, but they last longer. The conflicts in southern Sudan have been going on for decades. Lengthy battles between states are rare nowadays. And when states do attack, the fighting generally doesn’t last long (for example, Israel’s monthlong incursion into Lebanon in 2006). The recent wars waged by the United States in Iraq and Afghanistan are notable exceptions.

We live in an era of ethnic, national, and religious fratricide. A new two-volume reference work on “the most severe civil wars since World War II” has 41 entries, from Afghanistan and Algeria to Yemen and Zimbabwe. Over the last 50 years, the number of casualties of intrastate conflicts is roughly five times that of interstate wars. The number of refugees from these conflicts similarly dwarfs those from traditional state-versus-state wars. “Cases such as Afghanistan, Somalia, and Lebanon testify to the economic devastation that civil wars can produce,” note two political scientists. By the indexes of deaths, numbers of refugees, and extent of destruction, they conclude that "civil war has been a far greater scourge than interstate war" in recent decades. In Iraq today—putting aside blame and cause—more Iraqis are killed by their countrymen than by the American military.

"Not surprisingly, there is no treatise on civil war on the order of Carl von Clausewitz's On War,” writes the historian Arno Mayer, “civil wars being essentially wild and savage.”

The iconic book by Carl von Clausewitz, the Prussian military thinker, evokes the spirit of Immanuel Kant, whose writings he studied. Subheadings such as “The Knowledge in War Is Very Simple, but Not, at the Same Time, Very Easy” suggest its philosophical structure. Clausewitz subordinated war to policy, which entailed a rational evaluation of goals and methods. He compared the state to an individual. “Policy” is “the product of its brain,” and war is an option. “No one starts a war—or rather, no one in his senses ought to do so—without first being clear in his mind what he intends to achieve by that war and how he intends to conduct it.” If civilized nations at war “do not put their prisoners to death” or “devastate cities,” he writes, it is because “intelligence plays a larger part in their methods of warfare … than the crude expressions of instinct.”

In civil wars, by contrast, prisoners are put to death and cities destroyed as a matter of course. The ancient Greeks had already characterized civil strife as more violent than traditional war. Plato distinguishes war against outsiders from what he calls factionalized struggles, that is, civil wars. He posits that Greeks practice war against foreigners (“barbarians”), a conflict marked by “enmity and hatred,” but not against one another. When Greeks fight Greeks, he believes, they should temper their violence in anticipation of reconciliation. “They will not, being Greeks, ravage Greek territory nor burn habitations,” nor “lay waste the soil,” nor treat all “men, women, and children” as their enemies. Such, at least, was his hope in the Republic, but the real world often contradicted it, as he knew. His proposition that Greeks should not ravage Greeks challenged the reality in which Greeks did exactly that.

Plato did not have to look further than Thucydides' account of the Peloponnesian War to find confirmation of the brutality of Greek-on-Greek strife. In a passage often commented on, Thucydides wrote of the seesaw battle in Corcyra (Corfu) in 427 BC, which prefigured the larger war. When the Athenians approached the island in force, the faction they supported seized the occasion to settle accounts with its adversaries. In Thucydides’ telling, this was a “savage” civil war of Corcyrean against Corcyrean. For the seven days the Athenians stayed in the harbor, Corcyreans “continued to massacre those of their own citizens” they considered enemies. “There was death in every shape and form,” writes Thucydides. “People went to every extreme and beyond it. There were fathers who killed their sons; men were dragged from the temples or butchered on the very altars.” Families turned on families. “Blood ties became more foreign than factional ones.” Loyalty to the faction overrode loyalty to family members, who became the enemy.

Nearly 2,500 years after Thucydides, the presiding judge at a United Nations trial invoked the Greek historian. The judge reflected on what had occurred in the former Yugoslavia. One Duško Tadić stood accused of the torture and murder of Muslims in his hometown in Bosnia-Herzegovina. His actions exemplified a war of ethnic cleansing fueled by resentment and hatred. “Some time ago, yet not far from where the events in this case happened,” something similar occurred, stated a judge in his 1999 opinion. He cited Thucydides’ description of the Corcyrean civil war as one of “savage and pitiless actions.” Then as today, the judge reminded us, men “were swept away into an internecine struggle” in which vengeance supplanted justice.

Today’s principal global conflicts are fratricidal struggles—regional, ethnic, and religious: Iraqi Sunni vs. Iraqi Shiite, Rwandan Tutsi vs. Rwandan Hutu, Bosnian Muslim vs. Balkan Christians, Sudanese southerners vs. Sudanese northerners, perhaps Libyan vs. Libyan. As a Rwandan minister declared about the genocide in which Hutus slaughtered Tutsis: “Your neighbors killed you.” A reporter in northeastern Congo wrote that in seven months of fighting there, several thousand people were killed and more than 100,000 driven from their homes. He commented, "Like ethnic conflicts around the globe, this is fundamentally a fight between brothers: The two tribes—the Hema and the Lendu—speak the same language, marry each other, and compete for the same remote and thickly populated land.”

Somalia is perhaps the signal example of this ubiquitous fratricidal strife. As a Somalian-American professor observed, Somalia can claim a “homogeneity rarely known elsewhere in Africa.” The Somalian people “share a common language (Somali), a religion (Islam), physical characteristics, and pastoral and agropastoral customs and traditions.” This has not tempered violence. On the contrary.

The proposition that violence derives from kith and kin overturns a core liberal belief that we assault and are assaulted by those who are strangers to us. If that were so, the solution would be at hand: Get to know the stranger. Talk with the stranger. Reach out. The cure for violence is better communication, perhaps better education. Study foreign cultures and peoples. Unfortunately, however, our brother, our neighbor, enrages us precisely because we understand him. Cain knew his brother—he “talked with Abel his brother”—and slew him afterward.

We don’t like this truth. We prefer to fear strangers. We like to believe that fundamental differences pit people against one another, that world hostilities are driven by antagonistic principles about how society should be constituted. To think that scale—economic deprivation, for instance—rather than substance divides the world seems to trivialize the stakes. We opt instead for a scenario of clashing civilizations, such as the hostility between Western and Islamic cultures. The notion of colliding worlds is more appealing than the opposite: conflicts hinging on small differences. A “clash” implies that fundamental principles about human rights and life are at risk.

Samuel Huntington took the phrase “clash of civilizations" from the Princeton University historian Bernard Lewis, who was referring to a threat from the Islamic world. “We are facing a mood and a movement far transcending the level of issues and policies,” Lewis wrote in 1990. “This is no less than a clash of civilizations” and a challenge to “our Judeo-Christian heritage.” For Huntington, “the underlying problem for the West is not Islamic fundamentalism. It is Islam, a different civilization.” (…)

Or consider the words of a Hindu nationalist who addressed the conflict with Indian Muslims. How is unity to come about, she asks? “The Hindu faces this way, the Muslim the other. The Hindu writes from left to right, the Muslim from right to left. The Hindu prays to the rising sun, the Muslim faces the setting sun when praying. If the Hindu eats with the right hand, the Muslim with the left. … The Hindu worships the cow, the Muslim attains paradise by eating beef. The Hindu keeps a mustache, the Muslim always shaves the upper lip.”

Yet the preachers, porte-paroles, and proselytizers may mislead; it is in their interest to do so. What divided the Protestants and Catholics in 16th-century France, the Germans and Jews in 20th-century Europe, and the Shia and Sunni today may be small, not large. But minor differences rankle more than large differences. Indeed, in today’s world, it may be not so much differences but their diminution that provokes antagonism. Here it can be useful to attend to the literary critic René Girard, who also bucks conventional wisdom by signaling the danger in similitude, not difference: “In human relationships, words like ‘sameness’ and ‘similarity’ evoke an image of harmony. If we have the same tastes and like the same things, surely we are bound to get along. But what will happen when we share the same desires?” However, for Girard, “a single principle” pervades religion and literature. “Order, peace, and fecundity depend on cultural distinctions; it is not these distinctions but the loss of them that gives birth to fierce rivalries and sets members of the same family or social group at one another’s throats.”

Likeness does not necessarily lead to harmony. It may elicit jealousy and anger. Inasmuch as identity rests on what makes an individual unique, similitude threatens the self. The mechanism also operates on social terrain. As cultural groups get absorbed into larger or stronger collectives, they become more anxious—and more prone to defend their dwindling identity. French Canadians—living as they do amid an ocean of English speakers—are more testy about their language than the French in France. Language, however, is just one feature of cultural identification.

Assimilation becomes a threat, not a promise. It spells homogenization, not diversity. The assimilated express bitterness as they register the loss of an identity they wish to retain. Their ambivalence transforms their anger into resentment. They desire what they reject and are consequently unhappy with themselves as well as their interlocutor. Resentment feeds protest and sometimes violence. Insofar as the extreme Islamists sense their world imitating the West, they respond with increased enmity. It is not so much the “other” as it is the absence of otherness that spurs anger. They fear losing themselves by mimicking the West. A Miss World beauty pageant in Nigeria spurred widespread riots by Muslims that left hundreds dead. This could be considered a violent rejection of imitation.

We hate the neighbor we are enjoined to love. Why? Why do small disparities between people provoke greater hatred than large ones? Perhaps the work of Freud helps chart the underground sources of fratricidal violence. Freud introduced the phrase “the narcissism of minor differences” to describe this phenomenon. He noted that “it is precisely the little dissimilarities in persons who are otherwise alike that arouse feelings of strangeness and enmity between them.”

Freud first broached the narcissism of minor differences in “The Taboo of Virginity,” an essay in which he also took up the “dread of woman.” Is it possible that these two notions are linked? That the narcissism of minor differences, the instigator of enmity, arises from differences between the sexes and, more exactly, man’s fear of woman? What do men fear? “Perhaps,” Freud hazards, the dread is “founded on the difference of woman from man.” More precisely, “man fears that his strength will be taken from him by woman, dreads becoming infected with her femininity” and that he will show himself to be a “weakling.” Might this be a root of violence, man’s fear of being unmanned?

The sources of hatred and violence are many, not singular. There is room for the findings of biologists, sociobiologists, and other scientists. For too long, however, social and literary scholars have dwelled on the “other” and its representation. It is interesting, even uplifting, to talk about how we see and don’t see the stranger. It is less pleasant, however, to tackle the divisiveness and rancor of countrymen and kin. We still have not caught up to Montaigne, with his famous remarks about Brazilian cannibals. He reminded his 16th-century readers not only that the mutual slaughter of Huguenots and Catholics eclipsed the violence of New World denizens—it was enacted on the living, and not on the dead—but that its agents were “our fellow citizens and neighbors.”

Russell Jacoby, professor of history at the University of California, Los Angeles (UCLA). This essay is adapted from his book Bloodlust: On the Roots of Violence from Cain and Abel to the Present. Published as Bloodlust: Why we should fear our neighbors more than strangers, The Chronicle Review, March 27, 2011.

See also:

Roger Dale Petersen, Understanding Ethnic Violence: fear, hatred, and resentment in twentieth-century Eastern Europe, Cambridge University Press, 2002.
Stephen M. Walt on What Does Social Science Tell Us about Intervention in Libya
Scott Atran on Why War Is Never Really Rational
Steven Pinker on the History and decline of Violence
Violence tag on Lapidarium notes


Pascal Bruckner on the history of Western cult of happiness

Elżbieta Mozyro, Happiness like a butterfly

"Notwithstanding the Jacobin leader Saint-Just’s famous remark, happiness was never “a new idea in Europe.” In fact, it was the oldest of ideas, defended by the ancients and pondered by the great philosophical schools. But Christianity, which inherited the notion from Greek and Latin writers, changed it with a view to transcendence: man’s concern here below must be not joy but salvation. Christ alone redeems us from original sin and puts us on the path to divine truth. All earthly pleasures, according to the Christian authors, are but phantoms from the point of view of celestial beatitude. To wish for earthly happiness would be a sin against the Spirit; the passing pleasures of mortals are nothing compared with the hell that awaits sinners who pant after them.

The eighteenth century

This rigorous conception gave way over the centuries to a more accommodating view of life. The eighteenth century saw the rise of new techniques that improved agricultural production; it also saw new medicines—in particular, alkaloids and salicylic acid, an ancestor of aspirin whose curative and analgesic properties worked wonders. Suddenly, this world was no longer condemned to be a vale of tears; man now had the power to reduce hunger, ameliorate illness, and better master his future. People stopped listening to those who justified suffering as the will of God. If I could relieve pain simply by ingesting some substance, there was no need to have recourse to prayer to feel better.

The new conception of happiness was captured in a phrase of Voltaire’s in 1736: “Earthly paradise is here where I am.” Voltaire was, of course, pursued by the Church and the monarchy; he was threatened with death, and his writings were burned. But his proposition deserves attention. If paradise is here where I am, then happiness is here and now, not yesterday, in an age for which I might be nostalgic, and even less in some hypothetical future. In this upheaval of temporal perspectives, poverty and distress lose all legitimacy, and the whole work of enlightened nations becomes eliminating them through education and reason, and eventually science and industry. Human misfortune would be rendered an archaic residue.

After the American and French Revolutions (the first of which inscribed the pursuit of happiness in its founding document), the right to a decent life and the privileged status of pleasure became the order of the day for progressive movements across Europe. It is true that in the early twentieth century, the Bolsheviks curiously rehabilitated the Christian ideal of sacrifice by exhorting the proletariat to fight and work until the great coming of the Revolution; ironically, asceticism returned within a doctrine that denounced religion as the opiate of the masses and that relentlessly persecuted priests, pastors, and believers wherever it took power. But overall, throughout the twentieth century, hedonism’s claims grew ever stronger under the influence of Freudianism, feminism, and the avant-garde in art and politics.

Capitalism and the rise of individualism

In the 1960s, two major shifts transformed the right to happiness into the duty of happiness. The first was a shift in the nature of capitalism, which had long revolved around production and the deferral of gratification, but now focused on making us all good consumers. Working no longer sufficed; buying was also necessary for the industrial machine to run at full capacity. To make this shift possible, an ingenious invention had appeared not long before, first in America in the 1930s and then in Europe in the 1950s: credit. In an earlier time, anyone who wanted to buy a car, some furniture, or a house followed a rule that now seems almost unknown: he waited, setting aside his nickels and dimes. But credit changed everything; frustration became intolerable and satisfaction normal; to do without seemed absurd. We would live well in the present and pay back later.

The second shift was the rise of individualism. Since nothing opposed our fulfillment any longer—neither church nor party nor social class—we became solely responsible for what happened to us. It proved an awesome burden: if I don’t feel happy, I can blame no one but myself. So it was no surprise that a vast number of fulfillment industries arose, ranging from cosmetic surgery to diet pills to innumerable styles of therapy, all promising reconciliation with ourselves and full realization of our potential. “Become your own best friend, learn self-esteem, think positive, dare to live in harmony,” we were told by so many self-help books, though their very number suggested that these were not such easy tasks. The idea of fulfillment, though the successor to a more demanding ethic, became a demand itself. The dominant order no longer condemns us to privation; it offers us paths to self-realization with a kind of maternal solicitude.

This generosity is by no means a liberation in every respect. In fact, a kind of charitable coercion engenders the malaise from which it then strives to deliver us. The statistics that it publicizes and the models that it holds up produce a new race of guilty parties, no longer sybarites or libertines but killjoys. Sadness is the disease of a society of obligatory well-being that penalizes those who do not attain it. Happiness is no longer a matter of chance or a heavenly gift, an amazing grace that blesses our monotonous days. We now owe it to ourselves to be happy, and we are expected to display our happiness far and wide.

Thus happiness becomes not only the biggest industry of the age but also a new moral order. We now find ourselves guilty of not being well, a failing for which we must answer to everyone and to our own consciences. Consider the poll, conducted by a French newspaper, in which 90 percent of people questioned reported being happy. Who would dare admit that he is sometimes miserable and expose himself to social opprobrium? This is the strange contradiction of the happiness doctrine when it becomes militant and takes on the power of ancient taboos—though in the opposite direction. To enjoy was once forbidden; from now on, it’s obligatory. Whatever method is chosen, whether psychic, somatic, chemical, spiritual, or computer-based, we find the same assumption everywhere: beatitude is within your grasp, and you have only to take advantage of “positive conditioning” (in the Dalai Lama’s words) in order to attain it. We have come to believe that the will can readily establish its power over mental states, regulate moods, and make contentment the fruit of a personal decision.

This belief in our ability to will ourselves happy also lies behind the contemporary obsession with health. What is health, correctly understood, but a kind of permission we receive to live in peace with our bodies and to let ourselves be carefree? These days, though, we are required to resist our mortality as far as possible. (…)

In France, photos of Jean-Paul Sartre and the young Jacques Chirac holding cigarettes have been retouched to eliminate the offending objects—just as the Soviet empire used to do with banished leaders. Yet by trying to remove every anomaly, every failing, we end up denying what is in fact the main benefit of health: indifference to oneself, what a great surgeon once called “the silence of the organs.” Everyone must today be saved from something—from hypertension, from imperfect digestion, from a tendency to gain weight. One is never thin enough, fit enough, strong enough. (…)

Now that it has become the horizon of our democracies, a matter of ceaseless work and effort, happiness is surrounded by anxiety. We feel compelled to be saved constantly from what we are, poisoning our own existence with all kinds of impossible commandments. Our hedonism is not wholesome but haunted by failure. However well behaved we are, our bodies continue to betray us. Age leaves its mark, illness finds us one way or another, and pleasures have their way with us, following a rhythm that has nothing to do with our vigilance or our resolution. (…)

The Western cult of happiness is indeed a strange adventure, something like a collective intoxication. In the guise of emancipation, it transforms a high ideal into its opposite. Condemned to joy, we must be happy or lose all standing in society. It is not a question of knowing whether we are more or less happy than our ancestors; our conception of the thing itself has changed, and we are probably the first society in history to make people unhappy for not being happy.

Pascal Bruckner, French writer and philosopher, "Condemned to Joy: The Western cult of happiness is a mirthless enterprise," City Journal, Winter 2011, vol. 21, no. 1 (published March 2011).