Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso



Daniel C. Dennett on an attempt to understand the mind; autonomic neurons, culture and computational architecture


"What Darwin and Turing had both discovered, in their different ways, was the existence of competence without comprehension."

— Daniel C. Dennett, What Darwin’s theory of evolution teaches us about Alan Turing and artificial intelligence, Lapidarium

"I’m trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine. This is called homuncular functionalism, because you take the whole person. You break the whole person down into two or three or four or seven sub persons that are basically agents. They’re homunculi, and this looks like a regress, but it’s only a finite regress, because you take each of those in turn and you break it down into a group of stupider, more specialized homunculi, and you keep going until you arrive at parts that you can replace with a machine, and that’s a great way of thinking about cognitive science. It’s what good old-fashioned AI tried to do and is still trying to do.

The idea is basically right, but when I first conceived of it, I made a big mistake. I was at that point enamored of the McCulloch-Pitts logical neuron. McCulloch and Pitts had put together the idea of a very simple artificial neuron, a computational neuron, which had multiple inputs and a single branching output and a threshold for firing, and the inputs were either inhibitory or excitatory. They proved that in principle a neural net made of these logical neurons could compute anything you wanted to compute. So this was very exciting. It meant that basically you could treat the brain as a computer and treat the neuron as a sort of basic switching element in the computer, and that was certainly an inspiring over-simplification. Everybody knew it was an over-simplification, but people didn’t realize how much, and more recently it’s become clear to me that it’s a dramatic over-simplification, because each neuron, far from being a simple logical switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.
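The simplicity Dennett describes can be made concrete. The following is a minimal sketch of a logical neuron of the kind McCulloch and Pitts proposed, using a weighted-sum-and-threshold simplification (the original model treated inhibition as absolute veto; the weighted form shown here is a common textbook variant). All names here are illustrative, not from any library.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style logical neuron: binary inputs, excitatory
    (+1) or inhibitory (-1) weights, and a firing threshold.
    Returns 1 ("fires") if the weighted sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two classic demonstrations: logic gates built from a single unit.
# AND fires only when both inputs are active; OR fires when either is.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
```

Because such units can implement arbitrary Boolean functions when composed into networks, McCulloch and Pitts could argue that a net of them can in principle compute anything computable, which is exactly the "neuron as switching element" picture Dennett says proved too simple.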

The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and that have to be properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it’s fed by a lot of different currents.

Evolutionary biologist David Haig has some lovely papers on intrapersonal conflicts where he’s talking about how even at the level of the genetics, even at the level of the conflict between the genes you get from your mother and the genes you get from your father, the so-called madumnal and padumnal genes, those are in opponent relations and if they get out of whack, serious imbalances can happen that show up as particular psychological anomalies.

We’re beginning to come to grips with the idea that your brain is not this well-organized hierarchical control system where everything is in order, a very dramatic vision of bureaucracy. In fact, it’s much more like anarchy with some elements of democracy. Sometimes you can achieve stability and mutual aid and a sort of calm united front, and then everything is hunky-dory, but then it’s always possible for things to get out of whack and for one alliance or another to gain control, and then you get obsessions and delusions and so forth.

You begin to think about the normal well-tempered mind, in effect, the well-organized mind, as an achievement, not as the base state, something that is only achieved when all is going well, but still, in the general realm of humanity, most of us are pretty well put together most of the time. This gives a very different vision of what the architecture is like, and I’m just trying to get my head around how to think about that. (…)

The vision of the brain as a computer, which I still champion, is changing so fast. The brain’s a computer, but it’s so different from any computer that you’re used to. It’s not like your desktop or your laptop at all, and it’s not like your iPhone except in some ways. It’s a much more interesting phenomenon. What Turing gave us for the first time (and without Turing you just couldn’t do any of this) is a way of thinking in a disciplined way about phenomena that have, as I like to say, trillions of moving parts. Until the late 20th century, nobody knew how to take seriously a machine with a trillion moving parts. It’s just mind-boggling.

You couldn’t do it, but computer science gives us the ideas, the concepts of levels, virtual machines implemented in virtual machines implemented in virtual machines and so forth. We have these nice ideas of recursive reorganization of which your iPhone is just one example and a very structured and very rigid one at that.

We’re getting away from the rigidity of that model, which was worth trying for all it was worth. You go for the low-hanging fruit first. First, you try to make minds as simple as possible. You make them as much like digital computers, as much like von Neumann machines, as possible. It doesn’t work. Now we know pretty well why it doesn’t work. So you’re going to have a parallel architecture because, after all, the brain is obviously massively parallel.

It’s going to be a connectionist network. Although we know many of the talents of connectionist networks, how do you knit them together into one big fabric that can do all the things minds do? Who’s in charge? What kind of control system? Control is the real key, and you begin to realize that control in brains is very different from control in computers. Control in your commercial computer is very much a carefully designed top-down thing.

You really don’t have to worry about one part of your laptop going rogue and trying out something on its own that the rest of the system doesn’t want to do. No, they’re all slaves. If they’re agents, they’re slaves. They are prisoners. They have very clear job descriptions. They get fed every day. They don’t have to worry about where the energy’s coming from, and they’re not ambitious. They just do what they’re asked to do and do it brilliantly with only the slightest tint of comprehension. You get all the power of computers out of these mindless little robotic slave prisoners, but that’s not the way your brain is organized.

Each neuron is imprisoned in your brain. I now think of these as cells within cells, as cells within prison cells. Realize that every neuron in your brain, every human cell in your body (leaving aside all the symbionts), is a direct descendant of eukaryotic cells that lived and fended for themselves for about a billion years as free-swimming, free-living little agents. They fended for themselves, and they survived.

They had to develop an awful lot of know-how, a lot of talent, a lot of self-protective talent to do that. When they joined forces into multi-cellular creatures, they gave up a lot of that. They became, in effect, domesticated. They became part of larger, more monolithic organizations. My hunch is that that’s true in general. We don’t have to worry about our muscle cells rebelling against us, or anything like that. When they do, we call it cancer, but in the brain I think that (and this is my wild idea) maybe only in one species, us, and maybe only in the obviously more volatile parts of the brain, the cortical areas, some little switch has been thrown in the genetics that, in effect, makes our neurons a little bit feral, a little bit like what happens when you let sheep or pigs go feral, and they recover their wild talents very fast.

Maybe a lot of the neurons in our brains are not just capable but, if you like, motivated to be more adventurous, more exploratory or risky in the way they comport themselves, in the way they live their lives. They’re struggling amongst themselves with each other for influence, just for staying alive, and there’s competition going on between individual neurons. As soon as that happens, you have room for cooperation to create alliances, and I suspect that a more free-wheeling, anarchic organization is the secret of our greater capacities of creativity, imagination, thinking outside the box and all that, and the price we pay for it is our susceptibility to obsessions, mental illnesses, delusions and smaller problems.

We got risky brains that are much riskier than the brains of other mammals even, even more risky than the brains of chimpanzees, and this could be partly a matter of a few simple mutations in control genes that release some of the innate competitive talent that is still there in the genomes of the individual neurons. But I don’t think that genetics is the level to explain this. You need culture to explain it.

'Culture creates a whole new biosphere'

This, I speculate, is a response to our invention of culture; culture creates a whole new biosphere, in effect, a whole new cultural sphere of activity where there’s opportunities that don’t exist for any other brain tissues in any other creatures, and that this exploration of this space of cultural possibility is what we need to do to explain how the mind works.

Everything I just said is very speculative. I’d be thrilled if 20 percent of it was right. It’s an idea, a way of thinking about brains and minds and culture that is, to me, full of promise, but it may not pan out. I don’t worry about that, actually. I’m content to explore this, and if it turns out that I’m just wrong, I’ll say, “Oh, okay. I was wrong. It was fun thinking about it,” but I think I might be right.

I’m not myself equipped to work on a lot of the science; other people could work on it, and they already are in a way. The idea of selfish neurons has already been articulated by Sebastian Seung of MIT in a brilliant keynote lecture he gave at Society for Neuroscience in San Diego a few years ago. I thought, oh, yeah, selfish neurons, selfish synapses. Cool. Let’s push that and see where it leads. But there are many ways of exploring this. One of the still unexplained, so far as I can tell, and amazing features of the brain is its tremendous plasticity.

Mike Merzenich sutured a monkey’s fingers together so that it didn’t need as much cortex to represent two separate individual digits, and pretty soon the cortical regions that were representing those two digits shrank, making that part of the cortex available to use for other things. When the sutures were removed, the cortical regions soon resumed pretty much their earlier dimensions. If you blindfold yourself for eight weeks, as Alvaro Pascual-Leone does in his experiments, you find that your visual cortex starts getting adapted for Braille, for haptic perception, for touch.

The way the brain spontaneously reorganizes itself in response to trauma of this sort, or just novel experience, is itself one of the most amazing features of the brain, and if you don’t have an architecture that can explain how that could happen and why that is, your model has a major defect. I think you really have to think in terms of individual neurons as micro-agents, and ask what’s in it for them?

Why should these neurons be so eager to pitch in and do this other work just because they don’t have a job? Well, they’re out of work. They’re unemployed, and if you’re unemployed, you’re not getting your neuromodulators. If you’re not getting your neuromodulators, your neuromodulator receptors are going to start disappearing, and pretty soon you’re going to be really out of work, and then you’re going to die.

In this regard, I think of John Holland’s work on the emergence of order. His example is New York City. You can always find a place where you can get gefilte fish, or sushi, or saddles or just about anything under the sun you want, and you don’t have to worry about a state bureaucracy that is making sure that supplies get through. No. The market takes care of it. The individual web of entrepreneurship and selfish agency provides a host of goods and services, and is an extremely sensitive instrument that responds to needs very quickly.

Until the lights go out. Well, we’re all at the mercy of the power man. I am quite concerned that we’re becoming hyper-fragile as a civilization, and we’re becoming so dependent on technologies that are not as reliable as they should be, that have so many conditions that have to be met for them to work, that we may specialize ourselves into some very serious jams. But in the meantime, thinking about the self-organizational powers of the brain as very much like the self-organizational powers of a city is not a bad idea. It just reeks of over-enthusiastic metaphor, though, and it’s worth reminding ourselves that this idea has been around since Plato.

Plato analogizes the mind of a human being to the state. You’ve got the rulers and the guardians and the workers. This idea that a person is made of lots of little people is comically simpleminded in some ways, but that doesn’t mean it isn’t, in a sense, true. We shouldn’t shrink from it just because it reminds us of simpleminded versions that have been long discredited. Maybe some not so simpleminded version is the truth.

There are a lot of cultural fleas

My next major project will be trying to take another hard look at cultural evolution and look at the different views of it and see if I can achieve a sort of bird’s eye view and establish what role, if any, there is for memes or something like memes, and what the other forces are that are operating. We are going to have to have a proper scientific perspective on cultural change. The old-fashioned, historical narratives are wonderful, and they’re full of gripping detail, and they’re even sometimes right, but they only cover a small proportion of the phenomena. They only cover the tip of the iceberg.

Basically, the model that we have and have used for several thousand years is the model that culture consists of treasures, cultural treasures. Just like money, or like tools and houses, you bequeath them to your children, and you amass them, and you protect them, and because they’re valuable, you maintain them and prepare them, and then you hand them on to the next generation and some societies are rich, and some societies are poor, but it’s all goods. I think that vision is true of only the tip of the iceberg.

Most of the regularities in culture are not treasures. It’s not all opera and science and fortifications and buildings and ships. It includes all kinds of bad habits and ugly patterns and stupid things that don’t really matter but that somehow have got a grip on a society and that are part of the ecology of the human species in the same way that mud, dirt and grime and fleas are part of the world that we live in. They’re not our treasures. We may give our fleas to our children, but we’re not trying to. It’s not a blessing. It’s a curse, and I think there are a lot of cultural fleas. There are lots of things that we pass on without even noticing that we’re doing it and, of course, language is a prime case of this, very little deliberate intentional language instruction goes on or has to go on.

Kids that are raised with parents pointing out individual objects and saying, “See, it’s a ball. It’s red. Look, Johnny, it’s a red ball, and this is a cow, and look at the horsy” learn to speak, but so do kids who don’t have that patient instruction. You don’t have to do that. Your kids are going to learn ball and red and horsy and cow just fine without that, even if they’re quite severely neglected. That’s not a nice observation to make, but it’s true. It’s almost impossible not to learn language unless you have some sort of serious pathology in your brain.

Compare that with chimpanzees. There are hundreds of chimpanzees who have spent their whole lives in human captivity. They’ve been institutionalized. They’ve been like prisoners, and in the course of the day they hear probably about as many words as a child does. They never show any interest. They never apparently get curious about what those sounds are for. They can hear all the speech, but it’s like the rustling of the leaves. It just doesn’t register on them as worth attention.

But kids are tuned for that, and it might be a very subtle tuning. I can imagine a few small genetic switches, which, if they were just in a slightly different position, would make chimpanzees just as pantingly eager to listen to language as human babies are, but they’re not, and what a difference it makes in their world! They never get to share discoveries the way we do and to share our learning. That, I think, is the single feature about human beings that distinguishes us most clearly from all others: we don’t have to reinvent the wheel. Our kids get the benefit of not just what grandpa and grandma and great grandpa and great grandma knew. They get the benefit of basically what everybody in the world knew in the years when they go to school. They don’t have to invent calculus or long division or maps or the wheel or fire. They get all that for free. It just comes as part of the environment. They get incredible treasures, cognitive treasures, just by growing up. (…)

A lot of naïve thinking by scientists about free will

"Moving Naturalism Forward" was a nice workshop that Sean Carroll put together out in Stockbridge a couple of weeks ago, and it was really interesting. I learned a lot. I learned more about how hard it is to do some of these things and that’s always useful knowledge, especially for a philosopher.

If we take seriously, as I think we should, the role that Socrates proposed for us as midwives of thinking, then we want to know what the blockades are, what the imagination blockades are, what people have a hard time thinking about, and among the things that struck me about the Stockbridge conference were the signs of people really having a struggle to take seriously some ideas which I think they should take seriously. (…)

I realized I really have my work cut out for me in a way that I had hoped not to discover. There’s still a lot of naïve thinking by scientists about free will. I’ve been talking about it quite a lot, and I do my best to undo some bad thinking by various scientists. I’ve had some modest success, but there’s a lot more that has to be done on that front. I think it’s very attractive to scientists to think that here’s this several-millennia-old philosophical idea, free will, and they can just hit it out of the ballpark, which I’m sure would be nice if it was true.

It’s just not true. I think they’re well intentioned. They’re trying to clarify, but they’re really missing a lot of important points. I want a naturalistic theory of human beings and free will and moral responsibility as much as anybody there, but I think you’ve got to think through the issues a lot better than they’ve done, and this, happily, shows that there’s some real work for philosophers.

Philosophers have done some real work that the scientists jolly well should know. Here’s an area where it was one of the few times in my career when I wanted to say to a bunch of scientists, “Look. You have some reading to do in philosophy before you hold forth on this. There really is some good reading to do on these topics, and you need to educate yourselves.”

A combination of arrogance and cravenness

The figures about American resistance to evolution are still depressing, and you finally have to realize that there’s something structural. It’s not that people are stupid, and I think it’s clear that people, everybody, me, you, we all have our authorities, our go-to people whose word we trust. If you have a question about the economic situation in Greece, for instance, you check it out with somebody whose opinion on that you think is worth taking seriously. We don’t try to work it out for ourselves. We find some expert that we trust, and right around the horn, whatever the issues are, we have our experts, and so a lot of people have as their experts on matters of science, they have their pastors. This is their local expert.

I don’t blame them. I wish they were more careful about vetting their experts and making sure that they found good experts. They wouldn’t choose an investment advisor, I think, as thoughtlessly as they go along with their pastor. I blame the pastors, but where do they get their ideas? Well, they get them from the hierarchies of their churches. Where do they get their ideas? Up at the top, I figure there’s some people that really should be ashamed of themselves. They know better.

They’re lying, and when I get a chance, I try to ask them that. I say, “Doesn’t it bother you that your grandchildren are going to want to know why you thought you had to lie to everybody about evolution?” I mean, really. They’re lies. They’ve got to know that these are lies. They’re not that stupid, and I just would love them to worry about what their grandchildren and great grandchildren would say about how their ancestors were so craven and so arrogant. It’s a combination of arrogance and cravenness.

We now have to start working on that structure of experts and thinking, why does that persist? How can it be that so many influential, powerful, wealthy, in-the-public people can be so confidently wrong about evolutionary biology? How did that happen? Why does it happen? Why does it persist? It really is a bit of a puzzle if you think about how they’d be embarrassed not to know that the world is round. I think that would be deeply embarrassing to be that benighted, and they’d realize it. They’d be embarrassed not to know that HIV is the vector of AIDS. They’d be embarrassed to not understand the way the tides are produced by the gravitational forces of the moon and the sun. They may not know the details, but they know that the details are out there. They could learn them in 20 minutes if they wanted to. How did they get themselves in the position where they could so blithely trust people who they’d never buy stocks and bonds from? They’d never trust a child’s operation to a doctor that was as ignorant and as ideological as these people. It is really strange. I haven’t got to the bottom of that. (…)

This pernicious sort of lazy relativism

[T]here’s a sort of enforced hypocrisy where the pastors speak from the pulpit quite literally, and if you weren’t listening very carefully, you’d think: oh my gosh, this person really believes all this stuff. But they’re putting in just enough hints for the sophisticates in the congregation so that the sophisticates are supposed to understand: Oh, no. This is all just symbolic. This is all just metaphorical. And that’s the way they want it, but of course, they could never admit it. You couldn’t put a little neon sign up over the pulpit that says, “Just metaphor, folks, just metaphor.” It would destroy the whole thing.

You can’t admit that it’s just metaphor even when you insist when anybody asks that it’s just metaphor, and so this professional doubletalk persists, and if you study it for a while the way Linda [pdf] and I have been doing, you come to realize that’s what it is, and that means they’ve lost track of what it means to tell the truth. Oh, there are so many different kinds of truth. Here’s where postmodernism comes back to haunt us. What a pernicious bit of intellectual vandalism that movement was! It gives license to this pernicious sort of lazy relativism.

One of the most chilling passages in that great book by William James, The Varieties of Religious Experience, is where he talks about soldiers in the military: “Far better is it for an army to be too savage, too cruel, too barbarous, than to possess too much sentimentality and human reasonableness.” This is a very sobering, to me, a very sobering reflection. Let’s talk about when we went into Iraq. There was Rumsfeld saying, “Oh, we don’t need a big force. We don’t need a big force. We can do this on the cheap,” and there were other people, retrospectively we can say they were wiser, who said, “Look, if you’re going to do this at all, you want to go in there with such overpowering, such overwhelming numbers and force that you can really intimidate the population, and you can really maintain the peace and just get the population to sort of roll over, and that way actually less people get killed, less people get hurt. You want to come in with an overwhelming show of force.”

The principle is actually one that’s pretty well understood. If you don’t want to have a riot, have four times more police there than you think you need. That’s the way not to have a riot and nobody gets hurt because people are not foolish enough to face those kinds of odds. But they don’t think about that with regard to religion, and it’s very sobering. I put it this way.

Suppose that we face some horrific, terrible enemy, another Hitler or something really, really bad, and here’s two different armies that we could use to defend ourselves. I’ll call them the Gold Army and the Silver Army; same numbers, same training, same weaponry. They’re all armored and armed as well as we can do. The difference is that the Gold Army has been convinced that God is on their side and this is the cause of righteousness, and it’s as simple as that. The Silver Army is entirely composed of economists. They’re all making side insurance bets and calculating the odds of everything.

Which army do you want on the front lines? It’s very hard to say you want the economists, but think of what that means. What you’re saying is we’ll just have to hoodwink all these young people into some false beliefs for their own protection and for ours. It’s extremely hypocritical. It is a message that I recoil from, the idea that we should indoctrinate our soldiers. In the same way that we inoculate them against diseases, we should inoculate them against the economists’—or philosophers’—sort of thinking, since it might lead them to think: am I so sure this cause is just? Am I really prepared to risk my life to protect? Do I have enough faith in my commanders that they’re doing the right thing? What if I’m clever enough and thoughtful enough to figure out a better battle plan, and I realize that this is futile? Am I still going to throw myself into the trenches? It’s a dilemma that I don’t know what to do about, although I think we should confront it at least.”

Daniel C. Dennett is University Professor, Professor of Philosophy, and Co-Director of the Center for Cognitive Studies at Tufts University. The normal well-tempered mind, Edge, Jan 8, 2013.

'The Intentional Stance'

"Dennett favours the theory (first suggested by Richard Dawkins) that our social learning has given us a second information highway (in addition to the genetic highway) where the transmission of variant cultural information (memes) takes place via differential replication. Software viruses, for example, can be understood as memes, and as memes evolve in complexity, so does human cognition: “The mind is the effect, not the cause.” (…)

Daniel Dennett: "Natural selection is not gene centrist and nor is biology all about genes, our comprehending minds are a result of our fast evolving culture. Words are memes that can be spoken and words are the best example of memes. Words have a genealogy and it’s easier to trace the evolution of a single word than the evolution of a language." (…)

I don’t like theory of mind. I coined the phrase The Intentional Stance. [Dennett’s Intentional Stance encompasses attributing feelings, memories and beliefs to others as well as mindreading and predicting what someone will do next.] Do you need a theory to ride a bike? (…)

Riding a bike is a craft – you don’t need a theory. Autistic people might need a theory with which to understand other minds, but the rest of us don’t. If a human is raised without social interaction and without language they would be hugely disabled and probably lacking in empathy.”

Daniel C. Dennett, Daniel Dennett: ‘I don’t like theory of mind’ – interview, The Guardian, 22 March 2013.

See also:

Steven Pinker on the mind as a system of ‘organs of computation’, Lapidarium notes
Quantum minds: Why we think like quarks - ‘To be human is to be quantum’, Lapidarium notes
Human Connectome Project: understanding how different parts of the brain communicate to each other
How Free Is Your Will?, Lapidarium notes
Susan Blackmore on memes and “temes”
Mind & Brain tag on Lapidarium notes


E. O. Wilson on human evolution, altruism and a ‘new Enlightenment’


“History makes no sense without prehistory, and prehistory makes no sense without biology.”

— E. O. Wilson, Seminars About Long-term Thinking, The Long Now Foundation, Apr 20, 2012.

"Scientific advances are now good enough for us to address coherently questions of where we came from and what we are. But to do so, we need to answer two more fundamental questions. The first is why advanced social life exists in the first place and has occurred so rarely. The second is what are the driving forces that brought it into existence.

A conflict between individual and group-selected traits

"Only the understanding of evolution offers a chance to get a real understanding of the human species. We are determined by the interplay between individual and group selection where individual selection is responsible for much of what we call sin, while group selection is responsible for the greater part of virtue. We’re all in constant conflict between self-sacrifice for the group on the one hand and egoism and selfishness on the other. I go so far as to say that all the subjects of humanities, from law to the creative arts are based upon this play of individual versus group selection. (…) And it is very creative and probably the source of our striving, our inventiveness and imagination. It’s that eternal conflict that makes us unique.

Q: So how do we negotiate this conflict?

E. O. W.: We don’t. We have to live with it.

Q: Which element of this human condition is stronger?

E. O. W.: Let’s put it this way: If we would be mainly influenced by group selection, we would be living in kind of an ant society. (…)

Q: What determines which ideology is predominant in a society?

E. O. W.: If your territory is invaded, then cooperation within the group will be extreme. That’s a human instinct. If you are in a frontier area, however, then we tend to move towards the extreme individual level. That seems to be a good part of the problem still with America. We still think we’re on the frontier, so we constantly try to put forward individual initiative and individual rights and rewards based upon individual achievement. (…)”

Edward O. Wilson, American biologist, researcher (sociobiology, biodiversity), theorist (consilience, biophilia), naturalist (conservationist) and author, Interview with Edward O. Wilson: The Origin of Morals, Der Spiegel, 2013

Eusociality, where some individuals reduce their own reproductive potential to raise others’ offspring, is what underpins the most advanced form of social organization and the dominance of social insects and humans. One of the key ideas to explain this has been kin selection theory or inclusive fitness, which argues that individuals cooperate according to how they are related. I have had doubts about it for quite a while. Standard natural selection is simpler and superior. Humans originated by multilevel selection—individual selection interacting with group selection, or tribe competing against tribe. We need to understand a great deal more about that. (…)

We should consider ourselves as a product of these two interacting and often competing levels of evolutionary selection. Individual versus group selection results in a mix of altruism and selfishness, of virtue and sin, among the members of a society. If we look at it that way, then we have what appears to be a pretty straightforward answer as to why conflicted emotions are at the very foundation of human existence. I think that also explains why we never seem to be able to work things out satisfactorily, particularly internationally.

Q: So it comes down to a conflict between individual and group-selected traits?

Yes. And you can see this especially in the difficulty of harmonizing different religions. We ought to recognize that religious strife is not the consequence of differences among people. It’s about conflicts between creation stories. We have bizarre creation myths and each is characterized by assuring believers that theirs is the correct story, and that therefore they are superior in every sense to people who belong to other religions. This feeds into our tribalistic tendencies to form groups, occupy territories and react fiercely to any intrusion or threat to ourselves, our tribe and our special creation story. Such intense instincts could arise in evolution only by group selection—tribe competing against tribe. For me, the peculiar qualities of faith are a logical outcome of this level of biological organization.

Q: Can we do anything to counter our tribalistic instincts?

I think we are ready to create a more human-centered belief system. I realize I sound like an advocate for science and technology, and maybe I am because we are now in a techno-scientific age. I see no way out of the problems that organized religion and tribalism create other than humans just becoming more honest and fully aware of themselves. Right now we’re living in what Carl Sagan correctly termed a demon-haunted world. We have created a Star Wars civilization but we have Paleolithic emotions, medieval institutions and godlike technology. That’s dangerous. (…)

I’m devoted to the kind of environmentalism that is particularly geared towards the conservation of the living world, the rest of life on Earth, the place we came from. We need to put a lot more attention into that as something that could unify people. Surely one moral precept we can agree on is to stop destroying our birthplace, the only home humanity will ever have.

Q: Do you believe science will help us in time?

We can’t predict what science is going to come up with, particularly on genuine frontiers like astrophysics. So much can change even within a single decade. A lot more is going to happen when the social sciences finally join the biological sciences: who knows what will come out of that in terms of describing and predicting human behavior? But there are certain things that are almost common sense that we should not do.

Q: What sort of things shouldn’t we do?

Continue to put people into space with the idea that this is the destiny of humanity. It makes little sense to continue exploration by sending live astronauts to the moon, and much less to Mars and beyond. It will be far cheaper, and entail no risk to human life, to explore space with robots. It’s a commonly stated idea that we can have other planets to live on once we have used this one up. That is nonsense. We can find what we need right here on this planet for almost infinite lengths of time, if we take good care of it.

A New Enlightenment

Q: What is it important to do now?

The title of my final chapter is “A New Enlightenment”. I think we ought to have another go at the Enlightenment and use that as a common goal to explain and understand ourselves, to take that self-understanding which we so sorely lack as a foundation for what we do in the moral and political realm. This is a wonderful exercise. It is about education, science, evaluating the creative arts, learning to control the fires of organized religion and making a better go of it.

Q: Could you be more concrete about this new Enlightenment?

I would like to see us improving education worldwide and putting a lot more emphasis—as some Asian and European countries have—on science and technology as part of basic education.”

E. O. Wilson, E. O. Wilson: from altruism to a new Enlightenment, New Scientist, 24 April 2012.

"I think science is now up to the job. We need to be harnessing our scientific knowledge now to get a better, science-based self-understanding.

Q: It seems that, in this process, you would like to throw religions overboard altogether?

E O. W: No. That’s a misunderstanding. I don’t want to see the Catholic Church with all of its magnificent art and rituals and music disappear. I just want to have them give up their creation stories, including especially the resurrection of Christ.

Q: That might well be a futile endeavour …

E O. W: There was this American physiologist who was asked if Mary’s bodily ascent from Earth to Heaven was possible. He said, “I wasn’t there; therefore, I’m not positive that it happened or didn’t happen; but of one thing I’m certain: She passed out at 10,000 meters.” That’s where science comes in. Seriously, I think we’re better off with no creation stories.

Q: With this new Enlightenment, will we reach a higher state of humanity?

E O. W: Do we really want to improve ourselves? Humans are a very young species, in geologic terms, and that’s probably why we’re such a mess. We’re still living with all this aggression and ability to go to war. But do we really want to change ourselves? We’re right on the edge of an era of being able to actually alter the human genome. But do we want that? Do we want to create a race that’s more rational and free of many of these emotions? My response is no, because the only thing that distinguishes us from super-intelligent robots are our imperfect, sloppy, maybe even dangerous emotions. They are what makes us human.”

Edward O. Wilson, American biologist, researcher (sociobiology, biodiversity), theorist (consilience, biophilia), naturalist (conservationist) and author, Interview with Edward O. Wilson: The Origin of Morals, originally in P. Bethge, J. Grolle, Wir sind ein Schlamassel [“We Are a Mess”], Der Spiegel, 8/2013.

A “Social Conquest of the Earth”

"Q: What are some striking examples for you of the legacy of this evolutionary process?

Almost everything. All the way from passion at football games to war to the constant need to suppress selfish behavior that ranges over into criminal behavior to the necessary extolling of altruism by groups, to group approval and reward of people who are heroes or altruists.

Constant turmoil occurs in modern human societies and what I’m suggesting is that turmoil is endemic in the way human advanced social behavior originated in the first place. It’s by group selection that occurred favoring altruism versus individual level selection, which by and large, not exclusively, favors individual and selfish behavior.

We’re hung in the balance. We’ll never reach either one extreme or the other. One extreme would take us to the level of ants and bees and the other would mean that you have dissolution of society.

Q: One point you make in your book is that this highly social kind of behavior that we’ve evolved has allowed us to be part of the social conquest of earth, but it’s also had an unfortunate effect of endangering a lot of the world’s biodiversity. Does that make you pessimistic? If this is just part of the way we’ve evolved, is there going to be any way out of it?

That’s a very big question. In other words, did the pathway that led us to advanced social behavior and conquest make it inevitable that we will destroy most of what we’ve conquered? That is the question of questions.

I’m optimistic. I think that we can pass from conquerors to stewards. We have the intellectual and moral capacity to do it, but I’ve also felt very strongly that we needed a much better understanding of who we are and where we came from. We need answers to those questions in order to get our bearings toward a successful long-term future, that means a future for ourselves, our species and for the rest of life.

I realize that sounds a little bit like it’s coming from a pulpit but basically that’s what I’ve had in my mind. In writing A Social Conquest of Earth, I very much had in mind that need for self-understanding, and I thought we were very far short, and we remain very far short, of self-understanding. We have a kind of resistance toward honest self-understanding as a species and I think that resistance is due in part to our genetic history. And now, can we overcome it? I think so.”

E. O. Wilson, American biologist, researcher in sociobiology and biodiversity, theorist, naturalist and author, interviewed by Carl Zimmer, What Does E.O. Wilson Mean By a “Social Conquest of the Earth”, March 22, 2012

See also:

Edward O. Wilson “The Social Conquest of Earth”, video, 20 Apr 2012
☞ Richard Dawkins, The descent of Edward Wilson. “A new book on evolution by a great biologist makes a slew of mistakes”, Prospect, May 24, 2012
The Original Colonists, The New York Times, May 11, 2012:
“Mythmaking could never discover the origin and meaning of humanity” — and contemporary philosophy is also irrelevant, having “long ago abandoned the foundational questions about human existence.” The proper approach to answering these deep questions is the application of the methods of science, including archaeology, neuroscience and evolutionary biology. Also, we should study insects.
Sam Harris on the ‘selfish gene’ and moral behavior
Anthropocene: “the recent age of man”. Mapping Human Influence on Planet Earth, Lapidarium notes
Human Nature. Sapolsky, Maté, Wilkinson, Gilligan, discuss on human behavior and the nature vs. nurture debate
On the Origins of the Arts, Sociobiologist E.O. Wilson on the evolution of culture, Harvard Magazine, May-June, 2013
Anthropology tag on Lapidarium notes

Waking Life animated film focuses on the nature of dreams, consciousness, and existentialism

Waking Life is an American animated film (rotoscoped, based on live action), directed by Richard Linklater and released in 2001. The entire film was shot using digital video, and a team of artists then used computers to draw stylized lines and colors over each frame.

The film focuses on the nature of dreams, consciousness, and existentialism. The title is a reference to philosopher George Santayana's maxim: “Sanity is a madness put to good uses; waking life is a dream controlled.”

Waking Life is about an unnamed young man in a persistent dream-like state that eventually progresses to lucidity. He initially observes and later participates in philosophical discussions of issues such as reality, free will, the relationship of the subject with others, and the meaning of life. Along the way the film touches on other topics including existentialism, situationist politics, posthumanity, the film theory of André Bazin, and lucid dreaming itself. By the end, the protagonist feels trapped by his perpetual dream, broken up only by unending false awakenings. His final conversation with a dream character reveals that reality may be only a single instant which the individual consciousness interprets falsely as time (and, thus, life) until a level of understanding is achieved that may allow the individual to break free from the illusion.

Ethan Hawke and Julie Delpy reprise their characters from Before Sunrise in one scene. (Wiki)

Eamonn Healy speaks about telescopic evolution and the future of humanity

"We won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). (…) The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially).

So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today’s rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.”

Ray Kurzweil, American author, scientist, inventor and futurist, The Law of Accelerating Returns, KurzweilAI, March 7, 2001.
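Kurzweil’s figures can be sanity-checked with a back-of-the-envelope calculation (an editorial sketch assuming a simple discrete doubling each decade, not Kurzweil’s full model): if the rate of progress doubles every ten years, decade k of the century contributes 10 × 2^k years of progress measured at the start-of-century rate.

```python
# Rough illustration of the "accelerating returns" arithmetic: an
# editorial simplification, not Kurzweil's actual model. If the rate of
# progress doubles every decade, decade k contributes 10 * 2**k "years"
# of progress measured at the rate prevailing at the century's start.

def century_of_progress(decades: int = 10, base_rate: float = 1.0) -> float:
    """Total progress, in start-of-century 'years', accumulated over `decades`."""
    return sum(10 * base_rate * 2**k for k in range(decades))

print(century_of_progress())  # 10230.0
```

This simple discrete sum yields on the order of 10,000 start-of-century years for one century, the same order of magnitude as the 20,000-year figure quoted above (Kurzweil’s larger number follows from his assumption that the rate of acceleration itself grows).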

"If we’re looking at the highlights of human development, you have to look at the evolution of the organism and then at the development of its interaction with the environment. Evolution of the organism will begin with the evolution of life perceived through the hominid coming to the evolution of mankind. Neanderthal and Cro-Magnon man. Now, interestingly, what you’re looking at here are three strings: biological, anthropological — development of the cities — and cultural, which is human expression.

Now, what you’ve seen here is the evolution of populations, not so much the evolution of individuals. And in addition, if you look at the time scales that are involved here — two billion years for life, six million years for the hominid, 100,000 years for mankind as we know it — you’re beginning to see the telescoping nature of the evolutionary paradigm. And then when you get to agricultural, when you get to scientific revolution and industrial revolution, you’re looking at 10,000 years, 400 years, 150 years. You’re seeing a further telescoping of this evolutionary time. What that means is that as we go through the new evolution, it’s gonna telescope to the point we should be able to see it manifest itself within our lifetime, within this generation.

The new evolution stems from information, and it stems from two types of information: digital and analog. The digital is artificial intelligence. The analog results from molecular biology, the cloning of the organism. And you knit the two together with neurobiology. Before on the old evolutionary paradigm, one would die and the other would grow and dominate. But under the new paradigm, they would exist as a mutually supportive, noncompetitive grouping. Okay, independent from the external.

And what is interesting here is that evolution now becomes an individually centered process, emanating from the needs and desires of the individual, and not an external process, a passive process where the individual is just at the whim of the collective. So, you produce a neo-human, okay, with a new individuality and a new consciousness. But that’s only the beginning of the evolutionary cycle because as the next cycle proceeds, the input is now this new intelligence. As intelligence piles on intelligence, as ability piles on ability, the speed changes. Until what? Until we reach a crescendo in a way that could be imagined as an enormous instantaneous fulfillment of human, human and neo-human potential. It could be something totally different. It could be the amplification of the individual, the multiplication of individual existences. Parallel existences now with the individual no longer restricted by time and space.

And the manifestations of this neo-human-type evolution, manifestations could be dramatically counter-intuitive. That’s the interesting part. The old evolution is cold. It’s sterile. It’s efficient, okay? And its manifestations are those social adaptations. We’re talking about parasitism, dominance, morality, okay? Uh, war, predation, these would be subject to de-emphasis. These will be subject to de-evolution. The new evolutionary paradigm will give us the human traits of truth, of loyalty, of justice, of freedom. These will be the manifestations of the new evolution. And that is what we would hope to see from this. That would be nice.

Eamonn Healy, professor of chemistry at St. Edward’s University in Austin, Texas, where his research focuses on the design of structure-activity probes to elucidate enzymatic activity. He appears in Richard Linklater's 2001 film Waking Life discussing concepts similar to a technological singularity and explaining “telescopic evolution.”, Eamonn Healy speaks about telescopic evolution and the future of humanity from Brandon Sergent, Transcript

See also:

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries


Richard Doyle on Creativity, evolution of mind and the rhetorical membrane between humans and an informational universe


Q [Jason Silva]: The Jesuit priest and scientist Pierre Teilhard de Chardin spoke of the Noosphere very early on. A profile in WIRED Magazine said,

"Teilhard imagined a stage of evolution characterized by a complex membrane of information enveloping the globe and fueled by human consciousness. Teilhard saw the Net coming more than half a century before it arrived. He believed this vast thinking membrane would ultimately coalesce into “the living unity of a single tissue” containing our collective thoughts and experiences." Teilhard wrote, "The living world is constituted by consciousness clothed in flesh and bone."

He argued that the primary vehicle for increasing complexity consciousness among living organisms was the nervous system. The informational wiring of a being, he argued - whether of neurons or electronics - gives birth to consciousness. As the diversification of nervous connections increases, evolution is led toward greater consciousness… thoughts?

Richard Doyle: Yes, he also called this process of the evolution of consciousness the “Omega Point”. The noosphere imagined here relied on a change in our relationship to consciousness as much as on any technological change and was part of evolution’s epic quest for self awareness. Here Teilhard is in accord with Julian Huxley (Aldous’ brother, a biologist) and Carl Sagan when they observed that “we are a way for the cosmos to know itself.” Sri Aurobindo’s The Life Divine traces out this evolution of consciousness as well, through the Greek and Sanskrit traditions as well as Darwinism and (relatively) modern philosophy. All are describing evolution’s slow and dynamic quest towards understanding itself.


I honestly think we are still grappling with the fact that our minds are distributed across a network by technology, and have been in a feedback loop between our brains and technologies at least since the invention of writing. As each new “mutation” occurs in the history of evolution of information technology, the very character of our minds shifts. McLuhan's Understanding Media is instructive here as well (he parsed it as the Global Village), and of course McLuhan was the bard who advised Leary on "Tune in, Turn on, Drop Out" and very influential on Terence McKenna.

One difference between now and Plato’s time is the infoquake through which we are all living. This radical increase in quantity no doubt has qualitative effects - it changes what it feels like to think and remember. Plato was working through the effect of one new information technology – writing – whereas today we “upgrade” every six months or so. Teilhard observes the correlative of this evolutionary increase in information - and the sudden thresholds it crosses - in the evolution of complexity and nervous systems. The noosphere is a way of helping us deal with this “phase transition” of consciousness that may well be akin to the phase transition between liquid water and water vapor - a change in degree that effects a change in kind.

Darwin’s Pharmacy suggests that ecodelics were precisely such a mutation in information technology that increased sexually selective fitness through the capacity to process greater amounts of information, and that they are “extraordinarily sensitive to initial rhetorical traditions.” What this means is that because ecodelic experiences are so sensitive to the context in which we experience them, they can help make us aware of the effect of language and music etc on our consciousness, and thereby offer an awareness of our ability to effect our own consciousness through our linguistic and creative choices. This can be helpful when trying to browse the infoquake. Many other practices do so as well - meditation is the most well established practice for noticing the effects we can have on our own consciousness, and Sufi dervishes demonstrate this same outcome for dancing. I do the same on my bicycle, riding up a hill and chanting.

One problem I have with much of the discourse of “memes” is that it is often highly reductionistic - it often forgets that ideas have an ecology too, they must be “cultured.” Here I would argue, drawing on Lawrence Lessig's work on the commons, that the “brain” is a necessary but insufficient “spawning” ground for ideas that become actual. The commons is the spawning ground of ideas; brains are pretty obviously social as well as individual. Harvard biologist Richard Lewontin notes that there is no such thing as “self replicating” molecules, since they always require a context to be replicated. This problem goes back at least to computer scientist John von Neumann's 1947 paper on self-reproducing automata.

I think Terence McKenna described the condition as "language is loose on planet three", and its modern version probably occurs first in the work of writer William S. Burroughs, whose notion of the "word virus" predates the "meme" by at least a decade. Then again this notion of "ideas are real" goes back to cosmologies that begin with the priority of consciousness over matter, as in "In the beginning was the word, and the word was god, and the word was with god." So even Burroughs could get a late pass for his idea. (…)

Q: Richard Dawkin's definition of a meme is quite powerful: 

“I think that a new kind of replicator has recently emerged on this very planet, […] already achieving evolutionary change at a rate that leaves the old gene panting far behind.” [The replicator is] human culture; the vector of transmission is language, and the spawning ground is the brain.

This notion that “the vector of transmission is language” is very compelling. It seems to suggest that just as in biological evolution the vector of transmission has been the DNA molecule, in the noosphere, the next stage up, it is LANGUAGE that has become a major player in the transfer of information towards achieving evolutionary change. It kind of affects how you think about the phrase “words have power”. This insight reminds me of a quote that describes, in words, the subjective ecstasy that a mind feels upon having a transcendent realization that feels as if it advances evolution:

"A universe of possibilities,

Grey infused by color,

The invisible revealed,

The mundane blown away

by awe” 

Is this what you mean by ‘the ecstasy of language’?

Richard Doyle: Above, I noted that ecodelics can make us aware of the feedback loops between our creative choices – should I eat mushrooms in a box? - Should I eat them with a fox? - and our consciousness. In other words, they can make us aware of the tremendous freedom we have in creating our own experience. Leary called this “internal freedom.” Becoming aware of the practically infinite choices we have to compose our lives, including the words we use to map them, can be overwhelming – we feel in these instances the “vertigo of freedom.” What to do? In ecodelic experience we can perceive the power of our maps. That moment in which we can learn to abide the tremendous creative choice we have, and take responsibility for it, is what I mean by the “ecstasy of language.” 

I would point out, though, that for those words you quote to do their work, they have to be read. The language does not do it "on its own" but as a result of the highly focused attention of readers. This may seem trivial but it is often left out, with some serious consequences. And “reading” can mean “follow up with interpretation”. I cracked up when I googled those lines above and found them in a corporate blog about TED, for example. Who knew that neo-romantic poetry was the emerging interface of the global corporate noosphere? (…)

Q: Buckminster Fuller described humans as "pattern integrities", Ray Kurzweil says we are "patterns of information". James Gleick's new book, The Information, says that “information may be more primary than matter”..  what do you make of this? And if we indeed are complex patterns, how can we hack the limitations of biology and entropy to preserve our pattern integrity indefinitely? 

Richard Doyle: First: It is important to remember that the history of the concept and tools of “information” is full of blindspots – we seem to be constantly tempted to underestimate the complexity of any given system needed to make any bit of information meaningful or useful. Chaitin, Kolmogorov, Stephen Wolfram and John von Neumann each came independently to the conclusion that information is only meaningful when it is “run” - you can’t predict the outcome of even many trivial programs without running the program. So to say that “information may be more primary than matter” we have to remember that “information” does not mean “free from constraints.” Thermodynamics – including entropy – remains.
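[The claim that information is only meaningful when it is “run” can be illustrated with a toy example — an editorial aside, not from the interview. Even a three-line program can defy prediction without execution; the Collatz map is a classic case where neighboring inputs behave wildly differently.]

```python
def collatz_steps(n: int) -> int:
    """Count iterations of the Collatz map (n -> n/2 if even, else 3n+1) until n reaches 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Nearby inputs give very different trajectories; no known shortcut
# predicts these counts other than running the program itself.
print(collatz_steps(26))  # 10
print(collatz_steps(27))  # 111
```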

Molecular and informatic reductionism – the view that you can best understand the nature of a biological system by cutting it up into the most significant bits, e.g. DNA – is a powerful model that enables us to do things with biological systems that we never could before. Artist Eduardo Kac collaborated with a French scientist to make a bioluminescent bunny. That’s new! But sometimes it is so powerful that we forget its limitations. The history of the human genome project illustrates this well. AND the human genome is incredibly interesting. It’s just not the immortality hack many thought it would be.

In this sense biology is not a limitation to be “transcended” (Kurzweil), but a medium of exploration whose constraints are interesting and sublime. On this scale of ecosystems, “death” is not a “limitation” but an attribute of a highly dynamic interactive system. Death is an attribute of life. Viewing biology as a “limitation” may not be the best way to become healthy and thriving beings.

Now, that said, looking at our characteristics as “patterns of information” can be immensely powerful, and I work with it at the level of consciousness as well as life. Thinking of ourselves as “dynamic patterns of multiply layered and interconnected self transforming information” is just as accurate of a description of human beings as “meaningless noisy monkeys who think they see god”, and is likely to have much better effects. A nice emphasis on this “pattern” rather than the bits that make it up can be found in Carl Sagan’s “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.”

Q: Richard Dawkins declared in 1986 that ”What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions, […] If you want to understand life,” Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.” How would you explain the relationship between information technology and the reality of the physical world?

Richard Doyle: Again, information is indeed physical. We can treat a sequence of information as abstraction and take it out of its context – like a quotation or a jellyfish gene spliced into a rabbit to enable it to glow. We can compress information, dwindling the resources it takes to store or process it. But “Information, words, instructions” all require physical instantiation to even be “information, words, instructions.” Researcher Rolf Landauer showed back in the 1960s that even erasure is physical. So I actually think throbbing gels and oozes and slime mold and bacteria eating away at the garbage gyre are very important when we wish to “understand” life. I actually think Dawkins gets it wrong here – he is talking about “modeling” life, not “understanding” it. Erwin Schrödinger, the originator of the idea of the genetic code and therefore the beginning of the “informatic” tradition of biology that Dawkins speaks in here, knew this very well and insisted on the importance of first person experience for understanding.

So while I find these metaphors useful, that is exactly what they are: metaphors. There is a very long history to the attempt to model words and action together: Again, John 1:1 is closer to Dawkins’s position here than he may be comfortable with: “In the Beginning was the word, and the word was god, and the word was with god” is a way of working with this capacity of language to bring phenomena into being. It is really only because we habitually think of language as “mere words” that we continually forget that they are a manifestation of a physical system and that they have very actual effects not limited to the physics of their utterance – the words “I love you” can have an effect much greater than the amount of energy necessary to utter them. Our experiences are highly tuneable by the language we use to describe them.

Q: Talk about the mycelial archetype. Author Paul Stamets compares the pattern of the mushroom mycelium with the overlapping information-sharing systems that comprise the Internet, with the networked neurons in the brain, and with a computer model of dark matter in the universe. All share this densely intertwingled filamental structure… What is the connection? What is the pattern that connects here?

Richard Doyle: First things first: Paul Stamets is a genius and we should listen to his world view carefully and learn from it. Along with Lynn Margulis and Dorion Sagan, whose work I borrow from extensively in Darwin’s Pharmacy (as well as many others), Stamets is asking us to contemplate and act on the massive interconnection between all forms of life. This is a shift in worldview that is comparable to the Copernican shift from a geocentric cosmos – it is a shift toward interconnection and consciousness of interconnection. And I like how you weave in Gregory Bateson's phrase “the pattern that connects” here, because Bateson (whose father, William Bateson, was one of the founders of modern genetics) continuously pointed toward the need to develop ways of perceiving the whole. The “mycelial archetype”, as you call it, is a reliable and rather exciting way to recall the whole: What we call “mushrooms” are really the fruiting bodies of an extensive network of cross connection.

That fuzz growing in an open can of tomato paste in your fridge – mycelium. So even opening our refrigerator – should we be lucky enough to have one, with food in it – can remind us that what we take to be reality is in actuality only appearance – a sliver, albeit a significant one for our world, of the whole. That fuzz can remind us that (1) appearance and reality are not the same thing at all and (2) beyond appearance there is a massive interconnection in unity. This can help remind us who and what we really are.

With the word “archetype”, you of course invoke the psychologist Carl Jung, who saw archetypes as templates for understanding, ways of organizing our story of the world. There are many archetypes – the Hero, the Mother, the Trickster, the Sage. They are very powerful because they help stitch together what can seem to be a chaotic world – that is both their strength and their weakness. It is a weakness because most of the time we are operating within an archetype and we don’t even know it, and we don’t know therefore that we can change our archetype.

Experimenting with a different archetype – imagining, for example, the world through the lens of a 2400 year old organism that is mostly invisible to a very short lived and recent species becoming aware of its creative responsibility in altering the planet – is incredibly powerful, and in Darwin’s Pharmacy I am trying to offer a way to experiment with the idea of the plant planet as well as the “mycelium” archetype. One powerful aspect of treating the mycelium as our archetype as humanity is that it is “distributed” – it does not operate via a center of control but through cross connection “distributed” over a space.

Anything we can do to remember both our individuation and our interconnection is timely – we experience the world as individuals, and our task is to discover our nature within the larger-scale reality of our dense ecological interconnection. In the book I point to the Upanishads’ “Tat Tvam Asi” as a way of comprehending how we can both be totally individual and an aspect of the whole.

Q: You’ve talked about the ecstasy of language and the role of rhetoric in shaping reality… These notions echo some of Terence McKenna's ideas about language… He calls language an “ecstatic activity of signification”… and says that for the “inspired one, it is almost as if existence is uttering itself through him”… Can you expand on this? How does language create reality?

Richard Doyle: It’s incredibly fun and insightful to echo Terence McKenna. He’s really in this shamanic bard tradition that goes all the way back to Empedocles at least, and is distributed widely across the planet. He’s got a bit of Whitman in him with his affirmation of the erotic aspects of enlightenment. He was Emerson speaking to a Lyceum crowd remixed through rave culture. Leary and McKenna were resonating with the Irish bard archetype. And Terence was echoing Henry Munn, who was echoing Maria Sabina, whose chants and poetics can make her seem like Echo herself – a mythological storyteller and poet (literally “sound”) who so transfixes Hera (Zeus’s wife) that Zeus can consort with nymphs. Everywhere we look there are allegories of sexual selection’s role in the evolution of poetic & shamanic language!

And Terence embodies the spirit of eloquence, helping translate our new technological realities (e.g. virtual reality, a fractal view of nature, radical ecology) and the states of mind that were likely to accompany them. Merlin Donald writes of the effects of “external symbolic storage” on human culture – as a onetime student of McLuhan’s, Donald was following up on Plato’s insights I mentioned above, that writing changes how we think and, therefore, who we are.

Human culture is going through a fantastic “reality crisis” wherein we discover the creative role we play in nature. Our role in global climate change – not to mention our role in dwindling biodiversity – is the “shadow” side of our increasing awareness that humans have a radical creative responsibility for their individual and collective lives. And our lives are inseparable from the ecosystems with which we are enmeshed. THAT is reality. To the extent that we can gather and focus our attention on retuning our relation towards ecosystems in crisis, language can indeed shape reality. We’ll get the future we imagine, not necessarily the one we deserve.

Q: Robert Anton Wilson spoke about “reality tunnels”…. These ‘constructs’ can limit our perspectives and perception of reality, they can trap us, belittle us, enslave us, make us miserable or set us free… How can we hack our reality tunnel?  Is it possible to use rhetoric and/or psychedelics to “reprogram” our reality tunnel? 

Richard Doyle: We do nothing but program and reprogram our reality tunnels! Seriously, the Japanese reactor crisis follows on the BP oil spill as a reminder that we are deeply interconnected on the level of infrastructure – technology is now planetary in scale, so what happens here affects somebody, sometimes Everybody, there. These infrastructures – our food sheds, our energy grid, our global media – run on networks, protocols, global standards, agreements: language, software, images, databases and their mycelial networks.

The historian Michel Foucault called these “discourses”, but we need to connect these discourses to the nonhuman networks with which they are enmeshed, and globalization has been in part about connecting discourses to each other across the planet. Ebola ends up in Virginia, Starbucks in Hong Kong. This has been true for a long time, of course – Mutual Assured Destruction was planetary in scale and required a communication and control structure linking, for example, a Trident submarine under the Arctic ice sheet – remember that? – to a putatively civilian political structure Eisenhower rightly warned us about: the military industrial complex. The moon missions illustrate this principle as well – we remember what was said as much as what else was done, and what was said, for a while, seemed to induce a sense of truly radical and planetary possibility.

So if we think of words as a description of reality rather than part of the infrastructure of reality, we miss out on the way different linguistic patterns act as catalysts for different realities. In my first two books, before I really knew about Wilson’s work or had worked through Korzybski with any intensity, I called these “rhetorical softwares.”

Now the first layer of our reality tunnel is our implicit sense of self – this is the only empirical reality any of us experiences – what we subjectively experience. RAW was a brilliant analyst of the ways experience is shaped by the language we use to describe it. One of my favorite examples from his work is his observation that in English, “reality” is a noun, so we start to treat it as a “thing”, when in fact reality, this cosmos, is also quite well mapped as an action – a dynamic unfolding for 13.7 billion years. That is a pretty big mismatch between language and reality, and can give us a sense that reality is inert, dead, lifeless, “concrete”, and thus not subject to change. By experimenting with what Wilson, following scientist John Lilly, called “metaprograms”, we can change the maps that shape the reality we inhabit. (…)

Q: The film Inception explored the notion that our inner world can be a vivid, experiential dimension, and that we can hack it, and change our reality… what do you make of this? 

Richard Doyle: The whole contemplative tradition insists on this dynamic nature of consciousness. “Inner” and “outer” are models for aspects of reality – words that map the world only imperfectly. Our “inner world” – subjective experience – is all we ever experience, so if we change it, we will obviously see a change in what we label “external” reality, of which it is of course a part and from which it is not separable. One of the maps we should experiment with, in my view, is this “inner” and “outer” one – this is why one of my aliases is “mobius.” A mobius strip helps make clear that “inside” and “outside” are… labels. As you run your finger along a mobius strip, the “inside” becomes “outside” and the “outside” becomes “inside.”

Q: Can we put inceptions out into the world?

Richard Doyle: We do nothing but! And, it is crucial to add, so too does the rest of our ecosystem. Bacteria engage in quorum sensing, begin to glow, and induce other bacteria to glow – this puts their inceptions into the world. Thanks to the work of scientists like Anthony Trewavas, we know that plants engage in signaling behavior between and across species and even kingdoms: orchids “throw” images of female wasps into the world, attracting male wasps, root cells map the best path through the soil. The whole blooming confusion of life is signaling, mapping and informing itself into the world. The etymology of “inception” is “to begin, take in hand” - our models and maps are like imagined handholds on a dynamic reality.

Q: What is the relationship between psychedelics and information technology? How are ipods, computers and the internet related to LSD? 

Richard Doyle: This book is part of a trilogy on the history of information in the life sciences. So, first: psychedelics and biology. It turns out that molecular biology and psychedelics were important contexts for each other. I first started noticing this when I found that many people who had taken LSD were talking about their experiences in the language of molecular biology – accessing their DNA and so forth. When I learned that psychedelic experience was very sensitive to “set and setting” – the mindset and context of their use – I wanted to find out how this language of molecular biology was affecting people’s experiences of the compounds. In other words, how did the language affect something supposedly caused by chemistry?

Tracking the language through thousands of pages, I found that the discourses of psychedelics and molecular biology were both part of the “informatic vision” that was restructuring the life sciences as well as the world, and found common patterns of language in the work of Timothy Leary (the Harvard psychologist) and Francis Crick (who won the Nobel prize with James Watson and Maurice Wilkins for determining the structure of DNA in 1953), so in 2002 I published an article describing the common “language of information” spoken by Leary and Crick. I had no idea that Crick had apparently been using LSD when he was figuring out the structure of DNA. Yes, that blew my mind when it came out in 2004. I feel like I read that between the lines of Crick’s papers, which gave me confidence to write the rest of the book about the feedback between psychedelics and the world we inhabit.

The paper did home in on the role that LSD played in the invention of PCR (polymerase chain reaction) – Kary Mullis, who won the Nobel prize for the invention of this method of making copies of a sequence of DNA, talked openly of the role that LSD played in the process of invention. Chapter 4 of the book looks at the use of LSD in “creative problem solving” studies of the 1960s. These studies – hard to imagine now, 39 years into the War on Drugs, but we can Change the Archetype – suggest that, used with care, psychedelics can be part of effective training in remembering how to discern the difference between words and things, maps and territories.

In short, this research suggested that psychedelics were useful for seeing the limitations of words as well as their power, perhaps occasioned by the experience of the linguistic feedback loops between language and psychedelic experiences that themselves could never be satisfactorily described in language. I argue that Mullis had a different conception of information than mainstream molecular biology – a pragmatic concept steeped in what you can do with words rather than in what they mean. Mullis seems to have thought of information as “algorithms” – recipes of code – while the mainstream view thought of it implicitly semantically, as “words with meaning.”

iPods, the Internet, etc.: Well, in some cases there are direct connections. Perhaps Bill Joy said it best when he said that there was a reason that LSD and Unix were both from Berkeley. What the Dormouse Said by John Markoff came out after I wrote my first paper on Mullis, while I was working on the book, and it was really confirmation of a lot of what my conceptual model of what is going on indicated, which is as follows: sexual selection is a good way to model the evolution of information technology. It yields bioluminescence – the most common communication strategy on the planet – chirping insects, singing birds, peacocks fanning their feathers, singing whales, speaking humans, and humans with internet access. These are all techniques of information production, transformation or evaluation. I am persuaded by Geoffrey Miller’s update of Charles Darwin’s argument that language and mind are sexually selected traits, selected not simply for survival or even the representation of fitness, but for their sexiness. Leary: “Intelligence is the greatest aphrodisiac.”

I offer the hypothesis that psychedelics enter the human toolkit as “eloquence adjuncts” – tools and techniques for increasing the efficacy of language to seemingly create reality – different patterns of language (and other attributes of set and setting) literally cause different experiences. The informatic revolution is about applying this ability to create reality with different “codes” to the machine interface. Perhaps this is one of the reasons people like Mitch Kapor (a pioneer of computer spreadsheets), Stewart Brand (founder of a pre-internet computer commons known as the Well), Bob Wallace (one of the original Microsoft seven and an early proponent of shareware) and Mark Pesce were or are all psychonauts.

Q: Cyborg anthropologist Amber Case has written about techno-social wormholes… the instant compression of time and space created every time we make a telephone call… What do you make of this compression of time and space made possible by the engineering “magic” of technology?

Richard Doyle: It’s funny the role that the telephone call plays as an example in the history of our attempts to model the effects of information technologies. William Gibson famously defined cyberspace as the place where a telephone call takes place. (Gibson’s coinage of the term “cyberspace” is a good example of an “inception.”) Avital Ronell wrote about Nietzsche’s telephone call to the beyond and interprets the history of philosophy according to a “telephonic logic”. When I was a child my father once threw our telephone into the Atlantic Ocean – that was what he made of the magic of that technology, at least in one moment of anger. This was back in the day when Bell owned your phone and there was some explaining to do. This magic of compression has other effects – my dad got phone calls all day at work, so when he was at home he wanted to turn the phone off. The only way he knew to turn it off was to rip it out of the wall – there was no modular plug, just a wire into the wall – and throw it into the ocean.

So there is more than compression going on here: Deleuze and Guattari, along with the computer scientist Pierre Levy after them, call it “deterritorialization”. The differences between “here” and “there” are being constantly renegotiated as our technologies of interaction develop. Globalization is the collective effect of these deterritorializations and reterritorializations at any given moment.

And the wormhole example is instructive: the forces that would enable such a collapse of space and time as the possibility of time travel would likely tear us to smithereens. The tensions and torsions of this deterritorialization are part of what is at play in the Wikileaks revolutions; this compression of time and space offers promise for distributed governance as well as turbulence. Time travel through wormholes, by the way, is another example of an inception – Carl Sagan was looking for a reasonable way to transport his fictional aliens in Contact, called Caltech physicist Kip Thorne for help, and Thorne came up with the idea.

Q: The film Vanilla Sky explored the notion of a scientifically-induced lucid dream where we can live forever and our world is built out of our memories and “sculpted moment to moment and lived with the romantic abandon of a summer day or the feeling of a great movie or a pop song you always loved”. Can we sculpt ‘real’ reality as if it were a “lucid dream”?

Richard Doyle: Some traditions model reality as a lucid dream. The Diamond Sutra tells us that to be enlightened we must view reality as “a phantom, a dew drop, a bubble.” This does not mean, of course, that reality does not exist, only that appearance has no more persistence than a dream and that what we call “reality” is our map of reality. When we wake up, the dream that had been so compelling is seen to be what it was: a dream, nothing more or less. Dreams do not lack reality – they are real patterns of information. They just aren’t what we usually think they are. Ditto for “ordinary” reality. Lucid dreaming has been practiced by multiple traditions for a long time – we can no doubt learn new ways of doing so. In the meantime, by recognizing and acting according to the practice of looking beyond appearances, we can find perhaps a smidgeon more creative freedom to manifest our intentions in reality.

Q: Paola Antonelli, design curator of MoMA, has written about Existenz Maximum, the ability of portable music devices like the iPod to create “customized realities”, imposing a soundtrack on the movie of our own life. This sounds empowering and godlike – can you expand on this notion? How is technology helping us design every aspect of both our external reality as well as our internal, psychological reality?

Richard Doyle: Well, the Upanishads and the Book of Luke both suggest that we “get our inner Creator on”, the former by suggesting “Tat Tvam Asi” – there is an aspect of you that is connected to Everything – and the latter by recommending that we look not here or there for the Kingdom of God, but “within.” So if this sounds “godlike”, it is part of a long and persistent tradition. I personally find the phrase “customized realities” redundant given the role of our always unique programs and metaprograms. So what we need to focus on is: to which aspect of ourselves do we wish to give this creative power? These customized realities could be empowering and godlike for corporations that own the material, or they could empower our planetary aspect that unites all of us, and everything in between. It is, as always, the challenge of the magus and the artist to decide how we want to customize reality once we know that we can.

Q: The Imaginary Foundation says that “to understand is to perceive patterns”… Some advocates of psychedelic therapy have said that certain chemicals heighten our perception of patterns… They help us “see more.” What exactly are they helping us understand?

Richard Doyle: Understanding! One of the interesting bits of knowledge that I found in my research was some evidence that psychonauts scored better on the Witkin Embedded Figure test, a putative measure of a human subject’s ability to “distinguish a simple geometrical figure embedded in a complex colored figure.” When we perceive the part within the whole, we can suddenly get context, understanding.

Q: An article pointing to the use of psychedelics as catalysts for breakthrough innovation in silicon valley says that users …

"employ these cognitive catalysts, de-condition their thinking periodically and come up with the really big connectivity ideas arrived at wholly outside the linear steps of argument. These are the gestalt-perceiving, asterism-forming “aha’s!” that connect the dots and light up the sky with a new archetypal pattern."

This seems to echo what other intellectuals have been saying for ages.  You referred to Cannabis as “an assassin of referentiality, inducing a butterfly effect in thought. Cannabis induces a parataxis wherein sentences resonate together and summon coherence in the bardos between one statement and another.”

Baudelaire also wrote about cannabis as inducing an artificial paradise of thought:  

“…It sometimes happens that people completely unsuited for word-play will improvise an endless string of puns and wholly improbable idea relationships fit to outdo the ablest masters of this preposterous craft. […and eventually]… Every philosophical problem is resolved. Every contradiction is reconciled. Man has surpassed the gods.”

Anthropologist Henry Munn wrote that:

"Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth… At times… the words leap to mind, one after another, of themselves without having to be searched for: a phenomenon similar to the automatic dictation of the surrealists except that here the flow of consciousness, rather than being disconnected, tends to be coherent: a rational enunciation of meanings.  The spontaneity they liberate is not only perceptual, but linguistic, the spontaneity of speech, of fervent, lucid discourse, astonishing. […] For the inspired one, it is as if existence were uttering itself through him […]

Can you expand a bit on how certain ecodelics (as well as marijuana) can help us de-condition our thinking, have creative breakthroughs as well as intellectual catharsis? How is it that “intoxication” could, under certain conditions, actually improve our cognition and creativity and contribute to the collective intelligence of the species?

Richard Doyle: I would point, again, to Pahnke's description of ego death. This is by definition an experience in which our maps of the world are humbled. In the breakdown of our ordinary worldview – such as when a (now formerly) secular being such as myself finds himself feeling unmistakably sacred – we get a glimpse of reality without our usual filters. It is just not possible to use the old maps, so we get a glimpse, however involuntary, of reality. This is very close to the Buddhist practice of exhausting linguistic reference through chanting or koans – suddenly we see the world through something besides our verbal mind.

Ramana Maharshi says that in the silence of the ego we perceive reality – reality IS the breakdown of the ego. Aldous Huxley, who was an extraordinarily adroit and eloquent writer with knowledge of increasingly rare breadth and depth, pointed to a quote by William Blake when trying to sum up his experience: the doors of perception were cleansed. This is a humble act, if you think about it: Huxley, faced with the beauty and grandeur of his mescaline experience, offers the equivalent of “What he said!” Huxley also said that psychedelics offered a respite from “the throttling embrace of the self”, suggesting that we see the world without the usual filters of our egoic self. (…)

And if you look carefully at the studies by pioneers such as Myron Stolaroff and Willis Harman that you reference, as I do in the book, you will see that great care was taken to compose the best contexts for their studies. Subjects, for example, were told not to think about personal problems but to focus on their work at hand, and, astonishingly enough, it seems to have worked. These are very sensitive technologies and we really need much more research to explore their best use. This means more than studying their chemical function - it means studying the complex experiences human beings have with them. Step one has to be accepting that ecodelics are and always have been an integral part of human culture for some subset of the population. (…)

Q: Kevin Kelly refers to technological evolution as following the momentum begun at the big bang - he has stated:

"…there is a continuum, a connection back all the way to the Big Bang with these self-organizing systems that make the galaxies, stars, and life, and now is producing technology in the same way. The energies flowing through these things are, interestingly, becoming more and more dense. If you take the amount of energy that flows through one gram per second in a galaxy, it is increased when it goes through a star, and it is actually increased in life… We don’t realize this. We think of the sun as being a hugely immense amount of energy. Yet the amount of energy running through a sunflower per gram per second of its livelihood is actually greater than in the sun. Actually, it’s so dense that when it’s multiplied out, the sunflower actually has a higher amount of energy flowing through it.

Animals have even higher energy usage than the plant, and a jet engine has even higher than an animal. The most energy-dense thing that we know about in the entire universe is the computer chip in your computer. It is sending more energy per gram per second through that than anything we know. In fact, if it was to send it through any faster, it would melt or explode. It is so energy-dense that it is actually at the edge of explosion."

Can you comment on the implications of what he’s saying here?

Richard Doyle: I think maps of “continuity” are crucial and urgently needed. We can model the world as either “discrete” - made up of parts - or “continuous” - composing a whole - to powerful effect. Both are in this sense true. This is not “relativism” but a corollary of that creative freedom to choose our models that seems to be an attribute of consciousness. The mechanistic worldview extracts, separates and reconnects raw materials, labor and energy in ways that produce astonishing order as well as disorder (entropy).

By mapping the world as discrete – such as the difference between one second and another – and uniform – to a clock, there is no difference between one second and another – we have transformed the planet. Consciousness informed by discrete maps of reality has been an actual geological force in a tiny sliver of time. In so doing, we have transformed the biosphere. So you can see just how actual this relation between consciousness, its maps, and earthly reality is. This is why Vernadsky, a geophysicist, thought we needed a new term for the way consciousness functions as a geological force: noosphere.

These discrete maps of reality are so powerful that we forget that they are maps. Now if the world can be cut up into parts, it is only because it forms a unity. A Sufi author commented that the unity of the world was both the most obvious and obscure fact. It is obvious because our own lives and the world we inhabit can be seen to continue without any experienced interruption – neither the world nor our lives truly stops and starts. This unity can be obscure because in a literal sense we can’t perceive it with our senses – this unity can only be “perceived” by our minds. We are so effective as separate beings that we forget the whole for the part.

The world is more than a collection of parts, and we can quote Carl Sagan: “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.” Equally beautiful is what Sagan follows up with: “The cosmos is also within us. We are made of star stuff.” Perhaps this is why models such as Kelly’s feel so powerful: reminding ourselves that there is a continuity between the Big Bang and ourselves means we are an aspect of something unfathomably grand, beautiful, complex and unbroken. This is perhaps the “grandeur” Darwin was discussing. And when we experience that grandeur it can help us think and act in ways appropriate to a geological force.

I am not sure about the claims for energy that Kelly is making – I would have to see the context and the source of his data – but I do know that when it comes to thermodynamics, what he is saying rings true. We are dissipative structures far from equilibrium, meaning that we fulfill the laws of thermodynamics. Even though biological systems such as ourselves are incredibly orderly – and we export that order through our maps onto and into the world – we also yield more entropy than our absence would. Living systems, according to an emerging paradigm of Stanley Salthe, Rod Swenson, the aforementioned Margulis and Sagan, Eric Schneider, James J. Kay and others, maximize entropy, and the universe is seeking to dissipate ever greater amounts of entropy.
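Kelly’s per-gram comparison can at least be sanity-checked with a back-of-envelope calculation. The sketch below is not Kelly’s data; it assumes rounded public figures (solar luminosity ≈ 3.8×10²⁶ W, solar mass ≈ 2×10³⁰ kg, and a ~100 W resting metabolism for a ~70 kg human):

```python
# Back-of-envelope check of the "energy per gram per second" comparison.
# All figures are rounded, commonly cited values, not Kelly's own data.

SUN_LUMINOSITY_W = 3.8e26   # total solar power output, watts
SUN_MASS_KG = 2.0e30        # solar mass, kilograms

HUMAN_POWER_W = 100.0       # approximate resting metabolic rate, watts
HUMAN_MASS_KG = 70.0        # typical adult body mass, kilograms

# Energy rate density: power divided by mass (watts per kilogram)
sun_rate = SUN_LUMINOSITY_W / SUN_MASS_KG    # ~1.9e-4 W/kg
human_rate = HUMAN_POWER_W / HUMAN_MASS_KG   # ~1.4 W/kg

print(f"Sun:   {sun_rate:.2e} W/kg")
print(f"Human: {human_rate:.2e} W/kg")
print(f"Ratio: a body runs ~{human_rate / sun_rate:,.0f}x more power per gram than the Sun")
```

Run with these assumptions, the ratio comes out on the order of several thousand: gram for gram, a living body processes far more energy per second than the Sun, which is the thrust of the comparison, whatever one makes of the specific figures for sunflowers and chips.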

Order is a way to dissipate yet more energy. We’re thermodynamic beings, so we are always on the prowl for new ways to dissipate energy as heat and create uncertainty (entropy), and consciousness helps us find ever new ways to do so. (In case you are wondering, consciousness is the organized effort to model reality that yields ever increasing spirals of uncertainty in Deep Time. But you knew that.) It is perhaps in this sense that, again following Carl Sagan, “We are a way for the cosmos to know itself.” That is a pretty great map of continuity.

What I don’t understand in Kelly’s work, and I need to look at with more attention, is the discontinuity he posits between biology and technology. In my view our maps have made us think of technology as different in kind from biology, but the global mycelial web of fungi suggests otherwise, and our current view of technology seems to intensify this sense of separation even as we get interconnected through technology. I prefer Noosphere to what Kelly calls the Technium because it reminds us of the ways we are biologically interconnected with our technosocial realities. Noosphere sprouts from biosphere.

Q: There is this notion of increasing complexity… Yet in a universe where entropy destroys almost everything, here we are, the cutting edge of evolution, taking the reins and accelerating this emergent complexity… Kurzweil says that this makes us “very important”:

“…It turns out that we are central, after all. Our ability to create models—virtual realities—in our brains, combined with our modest-looking thumbs, has been sufficient to usher in another form of evolution: technology. That development enabled the persistence of the accelerating pace that started with biological evolution. It will continue until the entire universe is at our fingertips.”

What do you think?

Richard Doyle: Well, I think from my remarks already you can see that I agree with Kurzweil here and can only suggest that it is for this very reason that we must be very creative, careful and cunning with our models. Do we model the technologies that we are developing according to the effects they will have on the planetary whole? Only rarely, though this is what we are trying to do at the Penn State Center for Nanofutures, as are lots of people involved in Science, Technology and Society as well as engineering education. When we develop technologies - and that is the way psychedelics arrived in modern culture, as technologies -  we must model their effects not only on the individuals who use them, but on the whole of our ecosystem and planetary society.

If our technological models are based on the premise that this is a dead planet – and most of them very much are; you are called all kinds of names if you suggest otherwise: animist, vitalist, Gaian intelligence agent, names I wear with glee – then we will end up with an asymptotically dead planet. Consciousness will, of course, like the Terminator, “Be Back” should we perish, but let us hope that it learns to experiment better with its maps and learns to notice reality just a little bit more. I am actually an optimist on this front and think that a widespread “aha” moment is occurring where there is a collective recognition of the feedback loops that make up our technological & biological evolution.

Again, I don’t know why Kurzweil seems to think that technological evolution is discontinuous with biological evolution – technology is nested within the network of “wetwares” that make it work, and our wetwares are increasingly interconnected with our technological infrastructure, as the meltdowns in Japan demonstrate along with the dependence of many of us – we who are more bacterial than human by dry weight - upon a network of pharmaceuticals and electricity for continued life. The E. coli outbreak in Europe is another case in point – our biological reality is linked with the technological reality of supply chain management. Technological evolution is biological evolution enabled by the maps of reality forged by consciousness. (…)

Whereas technology for many promised the “disenchantment” of the world – the rationalization of this world of the contemplative spirit as everything became a Machine – here was mystical contemplative experience manifesting itself directly within what sociologist Max Weber called the “iron cage of modernity”, Gaia bubbling up through technological “Babylon.”

Now many contemplatives have sought to share their experiences through writing – pages and pages of it. As we interconnect through information technology, we perhaps have the opportunity to repeat this enchanted contemplative experience of radical interconnection on another scale, and through other means. Just say Yes to the Noosphere!

Richard Doyle, Professor of English Affiliate Faculty, Information Science and Technology at Pennsylvania State University, in conversation with Jason Silva, Creativity, evolution of mind and the “vertigo of freedom”, Big Think, June 21, 2011. (Illustrations: 1) Randy Mora, Artífices del sonido, 2) Noosphere)

See also:

☞ RoseRose, Google and the Myceliation of Consciousness
Kevin Kelly on Why the Impossible Happens More Often
Luciano Floridi on the future development of the information society
Luciano Floridi on The Digital Revolution as a Fourth Revolution: “P2P doesn’t mean Pirate to Pirate but Platonist to Platonist”
The Rise of the Conversation Society: information, communication and collaboration
Keen On… James Gleick: Why Cyberspace, As a Mode of Being, Will Never Go Away (TCTV), (video) TechCrunch, Jun 23, 2011
Timothy Leary on cybernetics and a new global culture
Mark Changizi on Humans, Version 3.0.
Cyberspace tag on Lapidarium


Are We “Meant” to Have Language and Music? How Language and Music Mimicked Nature and Transformed Ape to Man


"We’re fish out of water, living in radically unnatural environments and behaving ridiculously for a great ape. So, if one were interested in figuring out which things are fundamentally part of what it is to be human, then those million crazy things we do these days would not be on the list. (…)

At the top of the list of things we do that we’re supposed to be doing, and that are at the core of what it is to be human rather than some other sort of animal, are language and music. Language is the pinnacle of usefulness, and was key to our domination of the Earth (and the Moon). And music is arguably the pinnacle of the arts. Language and music are fantastically complex, and we’re brilliantly capable at absorbing them, and from a young age. That’s how we know we’re meant to be doing them, i.e., how we know we evolved brains for engaging in language and music.

But what if this gets language and music all wrong? What if we’re not, in fact, meant to have language and music? What if our endless yapping and music-filled hours each day are deeply unnatural behaviors for our species? (…)

I believe that language and music are, indeed, not part of our core—that we never evolved by natural selection to engage in them. The reason we have such a head for language and music is not that we evolved for them, but, rather, that language and music evolved—culturally evolved over millennia—for us. Our brains aren’t shaped for these pinnacles of humankind. Rather, these pinnacles of humankind are shaped to be good for our brains.

But how on Earth can one argue for such a view? If language and music have shaped themselves to be good for non-linguistic and amusical brains, then what would their shapes have to be?

They’d have to possess the auditory structure of…nature. That is, we have auditory systems which have evolved to be brilliantly capable at processing the sounds from nature, and language and music would need to mimic those sorts of sounds in order to harness—to “nature-harness,” as I call it—our brain.

And language and music do nature-harness. (…) The two most important classes of auditory stimuli for humans are (i) events among objects (most commonly solid objects), and (ii) events among humans (i.e., human behavior). And, in my research I have shown that the signature sounds in these two auditory domains drive the sounds we humans use in (i) speech and (ii) music, respectively.

For example, the principal source of modulation of pitch in the natural world comes from the Doppler shift, where objects moving toward you have a high pitch and objects moving away have a low pitch; from these pitch modulations a listener can hear an object’s direction of movement relative to his or her position. In the book I provide a battery of converging evidence that melody in music has culturally evolved to sound like the (often exaggerations of) Doppler shifts of a person moving in one’s midst. Consider first that a mover’s pitch will modulate within a fixed range, the top and bottom pitches occurring when the mover is headed, respectively, toward and away from you. Do melodies confine themselves to fixed ranges? They tend to, and tessitura is the musical term to refer to this range. In the book I run through a variety of specific predictions.

Here’s one. If melody is “trying” to sound like the Doppler shifts of a mover—and thereby convey to the auditory system the trajectory of a fictional mover—then a faster mover will have a greater difference between its top and bottom pitch. Does faster music tend to have a wider tessitura? That is, does music with a faster tempo—more beats, or footsteps, per second—tend to have a wider tessitura? Notice that the performer of faster tempo music would ideally like the tessitura to narrow, not widen! But what we found is that, indeed, music having a greater tempo tends to have a wider tessitura, just what one would expect if the meaning of melody is the direction of a mover in your midst.
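The arithmetic behind this Doppler prediction is easy to sketch. Assuming a simple moving-source model (the numbers below – a 440 Hz reference pitch and walking speeds of 1 and 3 m/s – are hypothetical, not from the study), the heard pitch tops out on approach, bottoms out on retreat, and a faster mover spans a wider range:

```python
import math

def doppler_pitch(f0, speed, angle_deg, c=343.0):
    """Pitch heard from a source moving at `speed` m/s;
    angle 0 = heading straight toward the listener, 180 = straight away."""
    v_radial = speed * math.cos(math.radians(angle_deg))
    return f0 * c / (c - v_radial)

for speed in (1.0, 3.0):  # a slow vs. a fast mover
    top = doppler_pitch(440.0, speed, 0)       # approaching: highest pitch
    bottom = doppler_pitch(440.0, speed, 180)  # receding: lowest pitch
    print(f"{speed} m/s: {bottom:.1f}-{top:.1f} Hz "
          f"(range {top - bottom:.1f} Hz)")
```

Tripling the mover’s speed roughly triples the pitch range, which is the qualitative tempo–tessitura pattern the finding reports.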

The preliminary conclusion of the research is that human speech sounds like solid-object events, and music sounds like human behavior!

That’s just what we expect if we were never meant to do language and music. Language and music have the fingerprints of being unnatural (i.e., of not having their origins via natural selection)…and the giveaway is, ironically, that their shapes are natural (i.e., have the structure of natural auditory events).

We also find this for another core capability that we know we’re not “meant” to do: reading. Writing was invented much too recently for us to have specialized reading mechanisms in the brain (although there are new hints of early writing as old as 30,000 years), and yet reading has the hallmarks of instinct. As I have argued in my research and in my second book, The Vision Revolution, writing slides so well into our brain because it got shaped by cultural evolution to look “like nature,” and, specifically, to have the signature contour-combinations found in natural scenes (which consist mostly of opaque objects strewn about).

My research suggests that language and music aren’t any more part of our biological identity than reading is. Counterintuitively, then, we aren’t “supposed” to be speaking and listening to music. They aren’t part of our “core” after all.

Or, at least, they aren’t part of the core of Homo sapiens as the species originally appeared. But, it seems reasonable to insist that, whether or not language and music are part of our natural biological history, they are indeed at the core of what we take to be centrally human now. Being human today is quite a different thing than being the original Homo sapiens.

So, what is it to be human? Unlike Homo sapiens, we’re grown in a radically different petri dish. Our habitat is filled with cultural artifacts—the two heavyweights being language and music—designed to harness our brains’ ancient capabilities and transform them into new ones.

Humans are more than Homo sapiens. Humans are Homo sapiens who have been nature-harnessed into an altogether novel creature, one designed in part via natural selection, but also in part via cultural evolution.

Mark Changizi, an evolutionary neurobiologist, Are We “Meant” to Have Language and Music?, Discover Magazine, March 15th, 2012. (Illustration: Harnessed)

See also:

Mark Changizi, Music Sounds Like Moving People, Science 2.0, Jan 10, 2010.
☞ Mark Changizi, How To Put Art And Brain Together
☞ Mark Changizi, How we read
Mark Changizi on brain’s perception of the world
A brief history of writing, Lapidarium notes
Mark Changizi on Humans, Version 3.0.


The Rise of Complexity. Scientists replicate key evolutionary step in life on earth

         Green cells are undergoing cell death, a cellular division-of-labor—fostering new life.

More than 500 million years ago, single-celled organisms on Earth’s surface began forming multi-cellular clusters that ultimately became plants and animals. (…)

The yeast “evolved” into multi-cellular clusters that work together cooperatively, reproduce and adapt to their environment—in essence, they became precursors to life on Earth as it is today. (…)

The finding that the division-of-labor evolves so quickly and repeatedly in these ‘snowflake’ clusters is a big surprise. (…) The first step toward multi-cellular complexity seems to be less of an evolutionary hurdle than theory would suggest.” (…)

"To understand why the world is full of , including humans, we need to know how one-celled organisms made the switch to living as a group, as multi-celled organisms.” (…)

"This study is the first to experimentally observe that transition," says Scheiner, "providing a look at an event that took place hundreds of millions of years ago." (…)

The scientists chose Brewer’s yeast, or Saccharomyces cerevisiae – a species of yeast used since ancient times to make bread and beer – because it is abundant in nature and grows easily.

They added it to nutrient-rich culture media and allowed the cells to grow for a day in test tubes.

Then they used a centrifuge to stratify the contents by weight.

As the mixture settled, cell clusters landed on the bottom of the tubes faster because they are heavier. The biologists removed the clusters, transferred them to fresh media, and agitated them again.

    First steps in the transition to multi-cellularity: ‘snowflake’ yeast with dead cells stained red.

Sixty cycles later, the clusters—now hundreds of cells—looked like spherical snowflakes.

Analysis showed that the clusters were not just groups of random cells that adhered to each other, but related cells that remained attached following cell division.

That was significant because it meant that they were genetically similar, which promotes cooperation. When the clusters reached a critical size, some cells died off in a process known as apoptosis to allow offspring to separate.

The offspring reproduced only after they attained the size of their parents. (…)
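The selection loop in this protocol is simple enough to caricature in code. The sketch below is purely illustrative (the heritable trait, the numbers, and the growth rule are invented, not taken from the study): each lineage carries a heritable chance that daughter cells stay attached after division, and each cycle the fastest-settling, i.e. largest, clusters are transferred to fresh media:

```python
import random

random.seed(0)

def cluster_size(p, divisions=8):
    """Grow one cluster: after each division, every daughter cell
    stays attached with heritable probability p."""
    size = 1
    for _ in range(divisions):
        size += sum(1 for _ in range(size) if random.random() < p)
    return size

pop = [0.05] * 100  # attachment probabilities; nearly unicellular start

for cycle in range(60):  # sixty settle-and-transfer cycles
    ranked = sorted(pop, key=cluster_size, reverse=True)
    survivors = ranked[:50]  # the biggest clusters settle fastest
    # transfer to fresh media: survivors double, with slight mutation
    pop = [min(1.0, max(0.0, p + random.gauss(0, 0.03)))
           for p in survivors for _ in (0, 1)]

print("mean attachment probability after 60 cycles:",
      round(sum(pop) / len(pop), 2))
```

Even this toy version ratchets the population toward stickier, larger clusters, echoing how quickly settling selection can favor the multicellular habit.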

     Multi-cellular yeast individuals containing central dead cells, which promote reproduction.

"A cluster alone isn’t multi-cellular," William Ratcliff says. "But when cells in a cluster cooperate, make sacrifices for the common good, and adapt to change, that’s an evolutionary transition to multi-cellularity."

In order for multi-cellular organisms to form, most cells need to sacrifice their ability to reproduce, an altruistic action that favors the whole but not the individual. (…)

For example, all cells in the human body are essentially a support system that allows sperm and eggs to pass DNA along to the next generation.

Thus multi-cellularity is by its nature very cooperative.

"Some of the best competitors in nature are those that engage in cooperation, and our experiment bears that out. (…)

Evolutionary biologists have estimated that multi-cellularity evolved independently in about 25 groups.”

Scientists replicate key evolutionary step in life on earth, Physorg, Jan 16, 2012.

Evolution: The Rise of Complexity

"Let’s rewind time back about 3.5 billion years. Our beloved planet looks nothing like the lush home we know today – it is a turbulent place, still undergoing the process of formation. Land is a fluid concept, consisting of molten lava flows being created and destroyed by massive volcanoes. The air is thick with toxic gasses like methane and ammonia which spew from the eruptions. Over time, water vapor collects, creating our first weather events, though on this early Earth there is no such thing as a light drizzle. Boiling hot acid rain pours down on the barren land for millions of years, slowly forming bubbling oceans and seas. Yet in this unwelcoming, violent landscape, life begins.

The creatures which dared to arise are called cyanobacteria, or blue-green algae. They were the pioneers of photosynthesis, transforming the toxic atmosphere by producing oxygen and eventually paving the way for the plants and animals of today. But what is even more incredible is that they were the first to do something extraordinary – they were the first cells to join forces and create multicellular life. (…)

In a PNAS paper published online this week, William Ratcliff and his colleagues at the University of Minnesota show how multicellular yeast can arise in less than two months in the lab. (…)

All of their cultures went from single cells to snowflake-like clumps in less than 60 days. “Although known transitions to complex multicellularity, with clearly differentiated cell types, occurred over millions of years, we have shown that the first crucial steps in the transition from unicellularity to multicellularity can evolve remarkably quickly under appropriate selective conditions,” write the authors. These clumps weren’t just independent cells sticking together for the sake of it – they acted as rudimentary multicellular creatures. They were formed not by random cells attaching but by genetically identical cells not fully separating after division. Furthermore, there was division of labor between cells. As the groups reached a certain size, some cells underwent programmed cell death, providing places for daughter clumps to break from. Since individual cells acting as autonomous organisms would value their own survival, this intentional culling suggests that the cells acted instead in the interest of the group as a whole organism.

Given how easily multicellular creatures can arise in test tubes, it might then come as no surprise that multicellularity has arisen at least a dozen times in the history of life, independently in bacteria, plants and of course, animals, beginning the evolutionary tree that we sit atop today. Our evolutionary history is littered with leaps of complexity. While such intricacies might seem impossible, study after study has shown that even the most complex structures can arise through the meandering path of evolution. In Evolution’s Witness, Ivan Schwab explains how one of the most complex organs in our body, our eyes, evolved. (…)

Eyes are highly intricate machines that require a number of parts working together to function. But not even the labyrinthine structures in the eye present an insurmountable barrier to evolution.

Our ability to see began to evolve long before animals radiated. Visual pigments, like retinal, are found in all animal lineages, and were first harnessed by prokaryotes to respond to changes in light more than 2.5 billion years ago. But the first complex eyes can be found about 540 million years ago, during a time of rapid diversification colloquially referred to as the Cambrian Explosion. It all began when comb jellies, sponges and jellyfish, along with clonal bacteria, were the first to group photoreceptive cells and create light-sensitive ‘eyespots’. These primitive visual centers could detect light intensity, but lacked the ability to define objects. That’s not to say, though, that eyespots aren’t important – eyespots are such an asset that they arose independently in at least 40 different lineages. But it was the other invertebrate lineages that would take the simple eyespot and turn it into something incredible.

According to Schwab, the transition from eyespot to eye is quite small. “Once an eyespot is established, the ability to recognize spatial characteristics – our eye definition – takes one of two mechanisms: invagination (a pit) or evagination (a bulge).” Those pits or bulges can then be focused with any clear material forming a lens (different lineages use a wide variety of molecules for their lenses). Add more pigments or more cells, and the vision becomes sharper. Each alteration is just a slight change from the one before, a minor improvement well within bounds of evolution’s toolkit, but over time these small adjustments led to intricate complexity.

In the Cambrian, eyes were all the rage. Arthropods were visual trendsetters, creating compound eyes by using the latter approach, that of bulging, then combining many little bulges together. One of the era’s top predators, Anomalocaris, had over 16,000 lenses! So many creatures arose with eyes during the Cambrian that Andrew Parker, a visiting member of the Zoology Department at the University of Oxford, believes that the development of vision was the driver behind the evolutionary explosion. His ‘Light-Switch’ hypothesis postulates that vision opened the doors for animal innovation, allowing rapid diversification in modes and mechanisms for a wide set of ecological traits. Even if eyes didn’t spur the Cambrian explosion, their development certainly irrevocably altered the course of evolution.

                     Fossilized compound eyes from Cambrian arthropods (Lee et al. 2011)

Our eyes, as well as those of octopuses and fish, took a different approach than those of the arthropods, putting photoreceptors into a pit, thus creating what is referred to as a camera-style eye. In the fossil record, eyes seem to emerge from eyeless predecessors rapidly, in less than 5 million years. But is it really possible that an eye like ours arose so suddenly? Yes, say biologists Dan-E. Nilsson and Susanne Pelger. They calculated a pessimistic guess as to how long it would take for small changes – just 1% improvements in length, depth, etc. per generation – to turn a flat eyespot into an eye like our own. Their conclusion? It would only take about 400,000 years – a geological instant.
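Their estimate is, at heart, compound interest. The sketch below reproduces only the arithmetic (the ~80-million-fold figure for the total amount of change is approximate, and the real model tracked length, depth, and aperture under quantitative-genetic assumptions):

```python
import math

def steps_needed(total_fold_change, step=0.01):
    """Number of compounding 1% changes needed for a total change:
    solves (1 + step) ** n = total_fold_change for n."""
    return math.log(total_fold_change) / math.log(1.0 + step)

# Roughly 1,800 one-percent steps cover an ~80-million-fold change;
# at conservative rates of change per generation, with a roughly
# one-year generation time, that is a few hundred thousand years.
print(round(steps_needed(80e6)))
```

Compounding does the heavy lifting: a change that sounds impossibly large dissolves into a short stack of tiny, individually plausible improvements.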

How does complexity arise in the first place?

But how does complexity arise in the first place? How did cells get photoreceptors, or any of the first steps towards innovations such as vision? Well, complexity can arise in a number of ways.

Each and every one of our cells is a testament to the simplest way that complexity can arise: have one simple thing combine with a different one. The powerhouses of our cells, called mitochondria, are complex organelles that are thought to have arisen in a very simple way. Some time around 3 billion years ago, certain bacteria had figured out how to create energy using electrons from oxygen, thus becoming aerobic. Our ancient ancestors thought this was quite a neat trick, and, as single cells tend to do, they ate these much smaller energy-producing bacteria. But instead of digesting their meal, our ancestors allowed the bacteria to live inside them as an endosymbiont, and so the deal was struck: our ancestor provides the fuel for the chemical reactions that the bacteria perform, and the bacteria, in turn, produces ATP for both of them. Even today we can see evidence of this early agreement – mitochondria, unlike other organelles, have their own DNA, reproduce independently of the cell’s reproduction, and are enclosed in a double membrane (the bacterium’s original membrane and the membrane capsule used by our ancestor to engulf it).

Over time the mitochondria lost other parts of their biology they didn’t need, like the ability to move around, blending into their new home as if they never lived on their own. The end result of all of this, of course, was a much more complex cell, with specialized intracellular compartments devoted to different functions: what we now refer to as a eukaryote.

Complexity can arise within a cell, too, because our molecular machinery makes mistakes. On occasion, it duplicates sections of DNA, entire genes, and even whole chromosomes, and these small changes to our genetic material can have dramatic effects. We saw how mutations can lead to a wide variety of phenotypic traits when we looked at how artificial selection has shaped dogs. These molecular accidents can even lead to complete innovation, like the various adaptations of flowering plants that I talked about in my last Evolution post. And as these innovations accumulate, species diverge, losing the ability to reproduce with each other and filling new roles in the ecosystem. While the creatures we know now might seem unfathomably intricate, they are the product of billions of years of slight variations accumulating.

Of course, while I focused this post on how complexity arose, it’s important to note that more complex doesn’t necessarily mean better. While we might notice the eye and marvel at its detail, success, from the viewpoint of an evolutionary lineage, isn’t about being the most elaborate. Evolution only leads to increases in complexity when complexity is beneficial to survival and reproduction.

Indeed, simplicity has its perks: the more simple you are, the faster you can reproduce, and thus the more offspring you can have. Many bacteria live happy simple lives, produce billions of offspring, and continue to thrive, representatives of lineages that have survived billions of years. Even complex organisms may favor less complexity – parasites, for example, are known for their loss of unnecessary traits and even whole organ systems, keeping only what they need to get inside and survive in their host. Darwin referred to them as regressive for seemingly violating the unspoken rule that more complex arises from less complex, not the other way around. But by not making body parts they don’t need, parasites conserve energy, which they can invest in other efforts like reproduction.

When we look back in an attempt to grasp evolution, it may instead be the lack of complexity, not the rise of it, that is most intriguing.”

See also:

Scientists recreate evolution of complexity using ‘molecular time travel’
Nature Has A Tendency To Reduce Complexity
Emergence and Complexity - prof. Robert Sapolsky’s lecture, Stanford University (video)


Scientists recreate evolution of complexity using ‘molecular time travel’  


Much of what living cells do is carried out by “molecular machines” – physical complexes of specialized proteins working together to carry out some biological function. (…)

In a study published online on January 8 in Nature, a team of scientists from the University of Chicago and the University of Oregon demonstrate how just a few small, high-probability mutations increased the complexity of a molecular machine more than 800 million years ago. By biochemically resurrecting ancient genes and testing their functions in modern organisms, the researchers showed that a new component was incorporated into the machine due to selective losses of function rather than the sudden appearance of new capabilities.

"Our strategy was to use ‘molecular time travel’ to reconstruct and experimentally characterize all the proteins in this molecular machine just before and after it increased in complexity," said the study’s senior author Joe Thornton, PhD, professor of human genetics and & ecology at the University of Chicago, professor of biology at the University of Oregon, and an Early Career Scientist of the Howard Hughes Medical Institute.

"By reconstructing the machine’s components as they existed in the deep past," Thornton said, "we were able to establish exactly how each protein’s function changed over time and identify the specific genetic mutations that caused the machine to become more elaborate." (…)

To understand how the ring increased in complexity, Thornton and his colleagues “resurrected” the ancestral versions of the ring proteins just before and just after the third subunit was incorporated. To do this, the researchers used a large cluster of computers to analyze the gene sequences of 139 modern-day ring proteins, tracing evolution backwards through time along the Tree of Life to identify the most likely ancestral sequences. They then used biochemical methods to synthesize those ancient genes and express them in modern yeast cells. (…)
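As a flavor of what “tracing evolution backwards” means computationally, here is a tiny Fitch-parsimony sketch. It is a deliberately simplified stand-in – the study used maximum-likelihood phylogenetics over 139 real sequences, and the residues and tree below are hypothetical:

```python
def fitch(left, right):
    """Candidate ancestral states at a node, given the candidate
    state sets of its two children (one Fitch parsimony step)."""
    shared = left & right
    return shared if shared else left | right

# One hypothetical sequence site in four modern proteins on the
# tree ((A,B),(C,D)): three carry residue "L", one carries "M".
tip_a, tip_b, tip_c, tip_d = {"L"}, {"L"}, {"L"}, {"M"}
node_ab = fitch(tip_a, tip_b)   # inner node joining A and B
node_cd = fitch(tip_c, tip_d)   # inner node joining C and D
root = fitch(node_ab, node_cd)  # the most parsimonious ancestral state
print(root)
```

Scaled up over every site and every node of the tree, this style of inference yields the ancestral sequences that can then be synthesized and expressed in modern cells.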

Thornton’s research group has helped to pioneer this molecular time-travel approach for single genes; this is the first time it has been applied to all the components of a molecular machine.

The group found that the third component of the ring in Fungi originated when a gene coding for one of the subunits of the older two-protein ring was duplicated, and the daughter genes then diverged on their own evolutionary paths.

The pre-duplication ancestor turned out to be more versatile than either of its descendants: expressing the ancestral gene rescued modern yeast that otherwise failed to grow because either or both of the descendant ring protein genes had been deleted. In contrast, each resurrected gene from after the duplication could only compensate for the loss of a single ring protein gene.

The researchers concluded that the functions of the ancestral protein were partitioned among the duplicate copies, and the increase in complexity was due to complementary loss of ancestral functions rather than gaining new ones. By cleverly engineering a set of ancestral proteins fused to each other in specific orientations, the group showed that the duplicated proteins lost their capacity to interact with some of the other ring proteins. Whereas the pre-duplication ancestor could occupy five of the six possible positions within the ring, each duplicate gene lost the capacity to fill some of the slots occupied by the other, so both became obligate components for the complex to assemble and function.
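The logic of that result fits in a few lines of code. In the sketch below the slot numbers and lost contacts are invented placeholders for the real ring positions, but the structure of the argument is the same: an obligate partnership arises purely from loss of function:

```python
RING_SLOTS = {1, 2, 3, 4, 5}  # positions the ancestral protein could fill
ancestor = {1, 2, 3, 4, 5}    # generalist: occupies five of six positions
copy_a = {1, 2, 3}            # post-duplication copy; lost contacts 4, 5
copy_b = {3, 4, 5}            # the other copy; lost contacts 1, 2

def ring_assembles(subunits):
    """The machine functions only if every slot can be filled."""
    filled = set().union(*subunits)
    return RING_SLOTS <= filled

print(ring_assembles([ancestor]))        # the ancestor alone suffices
print(ring_assembles([copy_a]))          # either copy alone fails...
print(ring_assembles([copy_b]))
print(ring_assembles([copy_a, copy_b]))  # ...so both became obligate
```

Nothing new was gained anywhere in this toy history: two lossy copies of a generalist are now both required where one protein used to do the job, which is exactly the rescue pattern the deletion experiments showed.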

"It’s counterintuitive but simple: complexity increased because protein functions were lost, not gained," Thornton said. "Just as in society, complexity increases when individuals and institutions forget how to be generalists and come to depend on specialists with increasingly narrow capacities." (…)

"The mechanisms for this increase in complexity are incredibly simple, common occurrences," Thornton said. "Gene duplications happen frequently in cells, and it’s easy for errors in copying to DNA to knock out a protein’s ability to interact with certain partners. It’s not as if evolution needed to happen upon some special combination of 100 mutations that created some complicated new function."

Thornton proposes that the accumulation of simple, degenerative changes over long periods of time could have created many of the complex molecular machines present in organisms today. Such a mechanism argues against the intelligent design concept of “irreducible complexity,” the claim that molecular machines are too complicated to have formed stepwise through evolution.

"I expect that when more studies like this are done, a similar dynamic will be observed for the evolution of many molecular complexes," Thornton said.

"These really aren’t like precision-engineered machines at all," he added. "They’re groups of molecules that happen to stick to each other, cobbled together during evolution by tinkering, degradation, and good luck, and preserved because they helped our ancestors to survive."

Scientists recreate evolution of complexity using ‘molecular time travel’, Physorg, Jan 9, 2012. (Illustration: Oak Ridge National Laboratory)

See also:

Nature Has A Tendency To Reduce Complexity
The Rise of Complexity. Scientists replicate key evolutionary step in life on earth
The genes are so different, the scientists argue, that giant viruses represent a fourth domain of life
Uncertainty principle: How evolution hedges its bets
Culture-gene coevolution of individualism–collectivism
Genetics tag at Lapidarium notes


Infinite Stupidity. Social evolution may have sculpted us not to be innovators and creators as much as to be copiers

A review of some big events

"Obviously one of the big events in our history was the origin of our planet, about 4.5 billion years ago. And what’s fascinating is that about 3.8 billion years ago, only about seven or eight hundred million years after the origin of our planet, life arose. That life was simple replicators, things that could make copies of themselves. And we think that life was a little bit like the bacteria we see on earth today. It would be the ancestors of the bacteria we see on earth today.

That life ruled the world for 2 billion years, and then about 1.5 billion years ago, a new kind of life emerged. These were the eukaryotic cells. They were a little bit different kind of cell from bacteria. And actually the kind of cells we are made of. And again, these organisms that were eukaryotes were single-celled, so even 1.5 billion years ago, we still just had single-celled organisms on earth. But it was a new kind of life.

It was another 500 million years before we had anything like a multicellular organism, and it was another 500 million years after that before we had anything really very interesting. So, about 500 million years ago, the plants and the animals started to evolve. And I think everybody would agree that this was a major event in the history of the world, because, for the first time, we had complex organisms.

After about 500 million years ago, things like the plants evolved, the fish evolved, lizards and snakes, dinosaurs, birds, and eventually mammals. And then it was really just six or seven million years ago, within the mammals, that the lineage that we now call the hominins arose. And they would be the direct ancestors of us. And then, within that lineage that arose about six or seven million years ago, it was only about 200,000 years ago that humans finally evolved.

Idea of idea evolution

And so, this is really just 99.99 percent of the way through the history of this planet, humans finally arose. But in that 0.01 percent of life on earth, we’ve utterly changed the planet. And the reason is that, with the arrival of humans 200,000 years ago, a new kind of evolution was created. The old genetical evolution that had ruled for 3.8 billion years now had a competitor, and that new kind of evolution was ideas.

It was a true form of evolution, because now ideas could arise, and they could jump from mind to mind, without genes having to change. So, populations of humans could adapt at the level of ideas. Ideas could accumulate. We call this cumulative cultural adaptation. And so, cultural complexity could emerge and arise orders and orders of magnitude faster than genetic evolution.

Now, I think most of us take that utterly for granted, but it has completely rewritten the way life evolves on this planet because, with the arrival of our species, everything changed. Now, a single species, using its idea evolution, that could proceed apace independently of genes, was able to adapt to nearly every environment on earth, and spread around the world where no other species had done that. All other species are limited to places on earth that their genes adapt them to. But we were able to adapt at the level of our cultures to every place on earth. (…)

If we go back in our lineage 2 million years or so, there was a species known as Homo erectus. Homo erectus is an upright ape that lived on the African savannah. It could make tools, but they were very limited tools, and those tools, the archaeological record tells us, didn’t change for about 1.5 million years. That is, until about the time they went extinct. That is, they made the same tools over and over and over again, without any real changes to them.

If we move forward in time a little bit, it’s not even clear that our very close cousins the Neanderthals – who we know are related to us in 99.5 or 99.6 percent of the sequences of their genes – had what we call idea evolution. Sure enough, the tools that they made were more complex than our tools. But over the 300,000 or so years that they spent in Europe, their toolkit barely changed. So there’s very little evolution going on.

So there’s something really very special about this new species, humans, that arose and invented this new kind of evolution, based on ideas. And so it’s useful for us to ask, what is it about humans that distinguishes them? It must have been a tiny genetic difference between us and the Neanderthals because, as I said, we’re so closely related to them genetically, a tiny genetic difference that had a vast cultural potential.

That difference is something that anthropologists and archaeologists call social learning. It’s a very difficult concept to define, but when we talk about it, all of us humans know what it means. And it seems to be the case that only humans have the capacity to learn complex new or novel behaviors, simply by watching and imitating others. And there seems to be a second component to it, which is that we seem to be able to get inside the minds of other people who are doing things in front of us, and understand why it is they’re doing those things. These two things together, we call social learning.

Many people respond that, oh, of course the other animals can do social learning, because we know that the chimpanzees can imitate each other, and we see all sorts of learning in animals like dolphins and the other monkeys, and so on. But the key point about social learning is that this minor difference between us and the other species forms an unbridgeable gap between us and them. Because, whereas all of the other animals can pick up the odd behavior by having their attention called to something, only humans seem to be able to select, among a range of alternatives, the best one, and then to build on that alternative, and to adapt it, and to improve upon it. And so, our cultures cumulatively adapt, whereas all other animals seem to do the same thing over and over and over again.

Even though other animals can learn, and they can even learn in social situations, only humans seem to be able to put these things together and do real social learning. And that has led to this idea evolution. What’s a tiny difference between us genetically has opened up an unbridgeable gap, because only humans have been able to achieve this cumulative cultural adaptation. (…)

I’m interested in this because I think this capacity for social learning, which we associate with our intelligence, has actually sculpted us in ways that we would have never anticipated. And I want to talk about two of those ways that I think it has sculpted us. One of the ways has to do with our creativity, and the other has to do with the nature of our intelligence as social animals.

One of the first things to be aware of when talking about social learning is that it plays the same role within our societies, acting on ideas, as natural selection plays within populations of genes. Natural selection is a way of sorting among a range of genetic alternatives, and finding the best one. Social learning is a way of sifting among a range of alternative options or ideas, and choosing the best one of those. And so, we see a direct comparison between social learning driving idea evolution, by selecting the best ideas —we copy people that we think are successful, we copy good ideas, and we try to improve upon them — and natural selection, driving genetic evolution within societies, or within populations.

I think this analogy needs to be taken very seriously, because just as natural selection has acted on genetic populations, and sculpted them, we’ll see how social learning has acted on human populations and sculpted them.

What do I mean by “sculpted them”? Well, I mean that it’s changed the way we are. And here’s one reason why. If we think that humans have evolved as social learners, we might be surprised to find out that being social learners has made us less intelligent than we might like to think we are. And here’s the reason why.

If I’m living in a population of people, and I can observe those people and see what they’re doing and what innovations they’re coming up with, I can choose among the best of those ideas without having to go through the process of innovation myself. So, for example, if I’m trying to make a better spear, I really have no idea how to make that better spear. But if I notice that somebody else in my society has made a very good spear, I can simply copy him without having to understand why.

What this means is that social learning may have set up a situation in humans where, over the last 200,000 years or so, we have been selected to be very, very good at copying other people, rather than innovating on our own. We like to think we’re a highly inventive, innovative species. But social learning means that most of us can make use of what other people do, and not have to invest the time and energy in innovation ourselves.

Now, why wouldn’t we want to do that? Why wouldn’t we want to innovate on our own? Well, innovation is difficult. It takes time. It takes energy. Most of the things we try to do, we get wrong. And so, if we can survey, if we can sift among a range of alternatives of people in our population, and choose the best one that’s going at any particular moment, we don’t have to pay the costs of innovation, the time and energy ourselves. And so, we may have had strong selection in our past to be followers, to be copiers, rather than innovators.

This gives us a whole new slant on what it means to be human, and I think, in many ways, it might fit with some things that we realize are true about ourselves when we really look inside ourselves. We can all think of things that have made a difference in the history of life. The first hand axe, the first spear, the first bow and arrow, and so on. And we can ask ourselves, how many of us have had an idea that would have changed humanity? And I think most of us would say, well, that sets the bar rather high. I haven’t had an idea that would change humanity. So let’s lower the bar a little bit and say, how many of us have had an idea that maybe just influenced others around us, something that others would want to copy? And I think even then, very few of us can say there have been very many things we’ve invented that others would want to copy.

This says to us that social evolution may have sculpted us not to be innovators and creators as much as to be copiers, because this extremely efficient process that social learning allows us to do, of sifting among a range of alternatives, means that most of us can get by drawing on the inventions of others.

The formation of social groups

Now, why do I talk about this? It sounds like it could be a somewhat dry subject, that maybe most of us are copiers or followers rather than innovators. And what we want to do is imagine that our history over the last 200,000 years has been a history of slowly and slowly and slowly living in larger and larger and larger groups.

Early on in our history, it’s thought that most of us lived in bands of maybe five to 25 people, and that bands formed bands of bands that we might call tribes. And maybe tribes were 150 people or so. And then tribes gave way to chiefdoms that might have been thousands of people. And chiefdoms eventually gave way to nation-states that might have been tens of thousands or even hundreds of thousands, or millions, of people. And so, our evolutionary history has been one of living in larger and larger and larger social groups.

What I want to suggest is that that evolutionary history will have selected for less and less and less innovation in individuals, because a little bit of innovation goes a long way. If we imagine that there’s some small probability that someone is a creator or an innovator, and the rest of us are followers, we can see that one or two people in a band is enough for the rest of us to copy, and so we can get on fine. And, because social learning is so efficient and so rapid, we don’t need all to be innovators. We can copy the best innovations, and all of us benefit from those.

But now let’s move to a slightly larger social group. Do we need more innovators in a larger social group? Well, no. The answer is, we probably don’t. We probably don’t need as many as we need in a band. Because in a small band, we need a few innovators to get by. We have to have enough new ideas coming along. But in a larger group, a small number of people will do. We don’t have to scale it up. We don’t have to have 50 innovators where we had five in the band, if we move up to a tribe. We can still get by with those three or four or five innovators, because all of us in that larger social group can take advantage of their innovations.
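Pagel’s point here, that a fixed handful of innovators can serve a group of almost any size because copying spreads their ideas so efficiently, can be illustrated with a toy simulation. This is my own sketch, not a model from the talk; the quality scale and all parameters are arbitrary.

```python
import random

def simulate(pop_size, n_innovators, rounds=50, sample=5, seed=0):
    """Toy model: a few innovators blindly generate ideas of random
    quality; everyone else copies the best idea seen in a small sample."""
    rng = random.Random(seed)
    ideas = [0.0] * pop_size  # quality of the idea each individual holds
    for _ in range(rounds):
        # Innovators keep whichever is better: their old idea or a new random one.
        for i in range(n_innovators):
            ideas[i] = max(ideas[i], rng.random())
        # Copiers observe a few random others and adopt the best idea they see.
        for i in range(n_innovators, pop_size):
            observed = max(rng.choice(ideas) for _ in range(sample))
            ideas[i] = max(ideas[i], observed)
    return sum(ideas) / pop_size  # mean idea quality in the group

# The same three innovators serve a band of 25 and a tribe of 500 about equally well.
band = simulate(pop_size=25, n_innovators=3)
tribe = simulate(pop_size=500, n_innovators=3)
```

In runs of this sketch, mean idea quality in both groups climbs close to the best innovator’s idea: the larger group does not need proportionally more innovators, because copying spreads a good idea through it exponentially fast.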

Language is the way we exchange ideas

And here we can see a very prominent role for language. Language is the way we exchange ideas. And our eyes allow us to see innovations and language allows us to exchange ideas. And language can operate in a larger society, just as efficiently as it can operate in a small society. It can jump across that society in an instant.

You can see where I’m going. As our societies get larger and larger, there’s no need, in fact, there’s even less of a need for any one of us to be an innovator, whereas there is a great advantage for most of us to be copiers, or followers. And so, a real worry is that our capacity for social learning, which is responsible for all of our cumulative cultural adaptation, all of the things we see around us in our everyday lives, has actually promoted a species that isn’t so good at innovation. It allows us to reflect on ourselves a little bit and say, maybe we’re not as creative and as imaginative and as innovative as we thought we were, but extraordinarily good at copying and following.

If we apply this to our everyday lives and we ask ourselves, do we know the answers to the most important questions in our lives? Should you buy a particular house? What mortgage product should you have? Should you buy a particular car? Who should you marry? What sort of job should you take? What kind of activities should you do? What kind of holidays should you take? We don’t know the answers to most of those things. And if we really were the deeply intelligent and imaginative and innovative species that we thought we were, we might know the answers to those things.

And if we ask ourselves how it is we come across the answers, or acquire the answers to many of those questions, most of us realize that we do what everybody else is doing. This herd instinct, I think, might be an extremely fundamental part of our psychology that was perhaps an unexpected and unintended, you might say, byproduct of our capacity for social learning, that we’re very, very good at being followers rather than leaders. A small number of leaders or innovators or creative people is enough for our societies to get by.

Now, the reason this might be interesting is that, as the world becomes more and more connected, as the Internet connects us and wires us all up, we can see that the long-term consequences of this is that humanity is moving in a direction where we need fewer and fewer and fewer innovative people, because now an innovation that you have somewhere on one corner of the earth can instantly travel to another corner of the earth, in a way that it would have never been possible to do 10 years ago, 50 years ago, 500 years ago, and so on. And so, we might see that there has been this tendency for our psychology and our humanity to be less and less innovative, at a time when, in fact, we may need to be more and more innovative, if we’re going to be able to survive the vast numbers of people on this earth.

That’s one consequence of social learning, that it has sculpted us to be very shrewd and intelligent at copying, but perhaps less shrewd at innovation and creativity than we’d like to think. Few of us are as creative as we’d like to think we are. I think that’s been one perhaps unexpected consequence of social learning.

Another side of social learning I’ve been thinking about (it’s a bit abstract, but I think it’s a fascinating one) goes back again to this analogy between natural selection, acting on genetic variation, and social learning, acting on variation in ideas. And any evolutionary process like that has to have both a sorting mechanism, natural selection, and what you might call a generative mechanism, a mechanism that can create variety.

We all know what that mechanism is in genes. We call it mutation, and we know that from parents to offspring, genes can change, genes can mutate. And that creates the variety that natural selection acts on. And one of the most remarkable stories of nature is that natural selection, acting on this mindlessly-generated genetic variation, is able to find the best solution among many, and successively add those solutions, one on top of the other. And through this extraordinarily simple and mindless process, create things of unimaginable complexity. Things like our cells, eyes and brains and hearts, and livers, and so on. Things of unimaginable complexity, that we don’t even understand and none of us could design. But they were designed by natural selection.

Where do ideas come from?

Now let’s take this analogy of a mindless process (there’s a parallel between social learning driving evolution at the idea level and natural selection driving evolution at the genetic level) and ask what it means for the generative mechanism in our brains.

Well, where do ideas come from? For social learning to be a sorting process that has varieties to act on, we have to have a variety of ideas. And where do those new ideas come from?

The idea that I’ve been thinking about, that I think is worth contemplating about our own minds is what is the generative mechanism? If we do have any creativity at all and we are innovative in some ways, what’s the nature of that generative mechanism for creating new ideas?

This is a question that’s been asked for decades. What is the nature of the creative process? Where do ideas come from? And let’s go back to genetic evolution and remember that, there, the generative mechanism is random mutation.

Now, what do we think the generative mechanism is for idea evolution? Do we think it’s random mutation of some sort, of ideas? Well, all of us think that it’s better than that. All of us think that somehow we can come up with good ideas in our minds. And whereas natural selection has to act on random variation, social learning must be acting on directed variation. We know what direction we’re going.

But, we can go back to our earlier discussion of social learning, and ask the question, well, if you were designing a new hand axe, or a new spear, or a new bow and a new arrow, would you really know how to make a spear fly better? Would you really know how to make a bow a better bow? Would you really know how to shape an arrowhead so that it penetrated its prey better? And I think most of us realize that we probably don’t know the answers to those questions. And that suggests to us that maybe our own creative process rests on a generative mechanism that isn’t very much better than random itself.

And I want to go further, and suggest that our mechanism for generating ideas maybe couldn’t even be much better than random itself. And this really gives us a different view of ourselves as intelligent organisms. Rather than thinking that we know the answers to everything, could it be the case that the mechanism that our brain uses for coming up with new ideas is a little bit like the mechanism that our genes use for coming up with new genetic variants, which is to randomly mutate ideas that we have, or to randomly mutate genes that we have.

Now, it sounds incredible. It sounds insane. It sounds mad. Because we think of ourselves as so intelligent. But when we really ask ourselves about the nature of any evolutionary process, we have to ask ourselves whether it could be any better than random, because in fact, random might be the best strategy.

Genes could never possibly know how to mutate themselves, because they could never anticipate the direction the world was going. No gene knows that we’re having global warming at the moment. No gene knew 200,000 years ago that humans were going to evolve culture. Well, the best strategy for any exploratory mechanism, when we don’t know the nature of the processes we’re exploring, is to throw out random attempts at understanding that field or that space we’re trying to explore.

And I want to suggest that the creative process inside our brains, which relies on social learning, that creative process itself never could have possibly anticipated where we were going as human beings. It couldn’t have anticipated 200,000 years ago that, you know, a mere 200,000 years later, we’d have space shuttles and iPods and microwave ovens.

What I want to suggest is that any process of evolution that relies on exploring an unknown space, such as genes or such as our neurons exploring the unknown space in our brains, and trying to create connections in our brains, and such as our brain’s trying to come up with new ideas that explore the space of alternatives that will lead us to what we call creativity in our social world, might be very close to random.

We know they’re random in the genetic case. We think they’re random in the case of neurons exploring connections in our brain. And I want to suggest that our own creative process might be pretty close to random itself. And that our brains might be whirring around at a subconscious level, creating ideas over and over and over again, and part of our subconscious mind is testing those ideas. And the ones that leak into our consciousness might feel like they’re well-formed, but they might have sorted through literally a random array of ideas before they got to our consciousness.
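The generate-and-test picture Pagel describes, blind variation filtered by selection, is easy to demonstrate in code. Here is a minimal sketch (my own illustration, with an arbitrary toy target): proposals are purely random single-character mutations, yet a filter that merely keeps improvements still homes in on the target.

```python
import random

def random_search(score, start, propose, steps=8000, seed=1):
    """Blind variation plus selective retention: proposals are random,
    but only candidates that score higher than the current best are kept."""
    rng = random.Random(seed)
    best = start
    for _ in range(steps):
        candidate = propose(best, rng)      # blind, undirected variation
        if score(candidate) > score(best):  # mindless selective filter
            best = candidate
    return best

# Toy target: recover a phrase by random mutation alone.
target = "social learning"

def score(s):
    return sum(a == b for a, b in zip(s, target))  # matching positions

def propose(s, rng):
    i = rng.randrange(len(s))  # mutate one randomly chosen position
    return s[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz ") + s[i + 1:]

result = random_search(score, " " * len(target), propose)
```

The generator never knows where it is going; all the apparent direction comes from the filter. That is the structure of the analogy: ideas generated close to randomly, with the subconscious doing the sorting.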

Karl Popper famously said the way we differ from other animals is that our hypotheses die in our stead; rather than going out and actually having to try out things, and maybe dying as a result, we can test out ideas in our minds. But what I want to suggest is that the generative process itself might be pretty close to random.

Putting these two things together has lots of implications for where we’re going as societies. As I say, as our societies get bigger, and rely more and more on the Internet, fewer and fewer of us have to be very good at these creative and imaginative processes. And so, humanity might be moving towards becoming more docile, more oriented towards following, copying others, prone to fads, prone to going down blind alleys, because part of our evolutionary history that we could have never anticipated was leading us towards making use of the small number of other innovations that people come up with, rather than having to produce them ourselves.

The interesting thing with Facebook is that, with 500 to 800 million of us connected around the world, it sort of devalues information and devalues knowledge. And this isn’t the comment of some reactionary who doesn’t like Facebook, but it’s rather the comment of someone who realizes that knowledge and new ideas are extraordinarily hard to come by. And as we’re more and more connected to each other, there’s more and more to copy. We realize the value in copying, and so that’s what we do.

And we seek out that information in cheaper and cheaper ways. We go up on Google, we go up on Facebook, see who’s doing what to whom. We go up on Google and find out the answers to things. And what that’s telling us is that knowledge and new ideas are cheap. And it’s playing into a set of predispositions that we have been selected to have anyway, to be copiers and to be followers. But at no time in history has it been easier to do that than now. And Facebook is encouraging that.

And then, as corporations grow (and we can see corporations as microcosms of societies) and acquire the ability to acquire other corporations, a similar thing is happening: rather than wanting to spend the time and the energy to create new ideas, corporations simply acquire other companies so that they can have their new ideas. And that just tells us again how precious these ideas are, and the lengths to which people will go to acquire them.

A tiny number of ideas can go a long way, as we’ve seen. And the Internet makes that more and more likely. What’s happening is that we might, in fact, be at a time in our history where we’re being domesticated by these great big societal things, such as Facebook and the Internet. We’re being domesticated by them, because fewer and fewer and fewer of us have to be innovators to get by. And so, in the cold calculus of evolution by natural selection, copiers are probably doing better now than at any time in history, because innovation is extraordinarily hard. My worry is that we could be moving in that direction, towards becoming more and more docile copiers.

But, these ideas, I think, are received with incredulity, because humans like to think of themselves as highly shrewd and intelligent and innovative people. But I think what we have to realize is that it’s even possible that, as I say, the generative mechanisms we have for coming up with new ideas are no better than random.

And a really fascinating idea itself is to consider that even the great people in history whom we associate with great ideas might be no more than we expect by chance. I’ll explain that. Einstein was once asked about his intelligence and he said, “I’m no more intelligent than the next guy. I’m just more curious.” Now, we can grant Einstein that little indulgence, because we think he was a pretty clever guy.

What does curiosity mean?

But let’s take him at his word and say, what does curiosity mean? Well, maybe curiosity means trying out all sorts of ideas in your mind. Maybe curiosity is a passion for trying out ideas. Maybe Einstein’s ideas were just as random as everybody else’s, but he kept persisting at them.

And if we say that everybody has some tiny probability of being the next Einstein, and we look at a billion people, there will be somebody who just by chance is the next Einstein. And so, we might even wonder if the people in our history and in our lives that we say are the great innovators really are more innovative, or are just lucky.

Now, the evolutionary argument is that our populations have always supported a small number of truly innovative people, and they’re somehow different from the rest of us. But it might even be the case that that small number of innovators just got lucky. And this is something that I think very few people will accept. They’ll receive it with incredulity. But I like to think of it as what I call social learning and, maybe, the possibility that we are infinitely stupid.”

Mark Pagel, Professor of Evolutionary Biology, Reading University, England and The Santa Fe Institute, Infinite Stupidity, Edge, Dec 16, 2011 (Illustration by John S. Dykes)

See also:

☞ Mark Pagel: How language transformed humanity

Biologist Mark Pagel shares an intriguing theory about why humans evolved our complex system of language. He suggests that language is a piece of “social technology” that allowed early human tribes to access a powerful new tool: cooperation. Mark Pagel: How language transformed humanity, July 2011

The Kaleidoscopic Discovery Engine. ‘All scientific discoveries are in principle ‘multiples’’
Neal Gabler on The Elusive Big Idea - ‘We are living in a post ideas world where bold ideas are almost passé’


Sue Savage-Rumbaugh on Human Language—Human Consciousness. A personal narrative arises through the vehicle of language

Jamie Marie Waelchli, Thought Map No. 8

Human language, coupled with human maternal care, enables the consciousness to bifurcate very early and extensively. Without the self-reflective properties inherent in a reflexive agent-recipient language, and without the objectification of the human infant — a very different kind of humanity would arise.

Human consciousness, as constructed by human language, becomes the vehicle through which the self-reflective human mind envisions time. Language enables the viewer to reflect upon the actions of the doer (and the actions of one’s internal body), while projecting forward and backward — other possible bodily actions — into imagined space/time. Thus the projected and imagined space/time increasingly becomes the conscious world and reality of the viewer who imagines or remembers actions mapped onto that projected plan. The body thus becomes a physical entity progressing through the imaged world of the viewer. As the body progresses through this imaged world, the viewer also constructs a way to mark progress from one imagined event to another. Having once marked this imagined time into units, the conscious viewer begins to order the anticipated actions of the body into a linear progression of events.

A personal narrative then arises through the vehicle of language. Indeed a personal narrative is required, expected and placed upon every human being, by the very nature of human language. This personal narrative becomes organized around the anticipated bodily changes that it is imagined will take place from birth to old age. The power of the bifurcated mind, through linguistically encoded expectancies, shapes and molds all of human behavior. When these capacities are jointly executed by other similar minds — the substrate of human culture is manufactured.

Human culture, because it rides upon a manufactured space/time self-reflective substrate, is unique. Though it shares some properties with animal culture, it is not merely a natural Darwinian extension of animal culture. It is based on constructed time/space, constructed mental relationships, constructed moral responsibilities, and constructed personal narratives — and individuals must, at all times, justify their actions toward another on the basis of their co-constructed expectancies.

Human consciousness seems to burst upon the evolutionary scene in something of an explosion between 40,000 and 90,000 years ago. Trading emerges, art emerges, and symboling ability emerges with a kind of intensity not noted for any previous time in the archeological record. (…)

Humans came with a propensity to alter the world around them wherever they went. We were into object manipulation in all aspects of our existence, and wherever we went we altered the landscape. We did not accept the natural world as we found it — we set about refashioning our worlds according to our own needs and desires. From the simple act of intentionally setting fires to eliminate underbrush, to the exploration of outer space, humanity manifested the view that it was here to control its own destiny, by changing the world around it, as well as by individuals’ changing their own appearances.

We put on masks and masqueraded about the world, seeking to make the world conform to our own desires, in a way no other species emulated. In brief, the kind of language that emerged between 40,000 and 90,000 years ago, riding upon the human anatomical form, changed us forever, and we began to pass that change along to future generations.

While Kanzi and family are bonobos, the kind of language they have acquired — even if they have not manifested all major components yet — is human language as you and I speak it and know it. Therefore, although their biology remains that of apes, their consciousness has begun to change as a function of the language, the marks it leaves on their minds and the epigenetic marks it leaves on the next generation. (Epigenetic: chemical markers which become attached to segments of genes during the lifetime of an individual are passed along to future generations, affecting which genes will be expressed in succeeding generations.) They explore art, they explore music, they explore creative linguistic negotiation, they have an autobiographical past and they think about the future. They don’t do all these things with human-like proficiency at this point, but they attempt them if given opportunity. Apes not so reared do not attempt to do these things.

What kind of power exists within the kind of language we humans have perfected? Does it have the power to change biology across time, if it impacts the biological form upon conception? Science has now become aware of the power of initial conditions, through chaos theory, the work of Mandelbrot with fractal geometric forms, and the work of Wolfram and the patterns that can be produced by digital reiterations of simple and only slightly different starting conditions. Within the fertilized egg lie the initial starting conditions of every human.

We also now realize that epigenetic markers from parental experience can set these initial starting conditions, determining such things as the order, timing, and patterning of gene expression profiles in the developing organism. Thus while the precise experience and learning of the parents is not passed along, the effects of those experiences, in the form of genetic markers that have the power to affect the developmental plan of the next generation during the extraordinarily sensitive conditions of embryonic development, are transmitted. Since language is the most powerful experience encountered by the human being and since those individuals who fail to acquire human language are inevitably excluded from (or somehow set apart in) the human community, it is reasonable to surmise that language will, in some form, transmit itself through epigenetic mechanisms.

When a human being enters into a group of apes and begins to participate in the rearing of offspring, different epigenetic markers have the potential to become activated. We already know, for example, that in human beings, expectancies or beliefs can affect gene activity. The most potent of the epigenetic markers would most probably arise from the major difference between human and ape infants. Human infants do not cling, ape infants do. When ape infants are carried like human infants, they begin to develop eye/hand coordination from birth. This sets the developmental trajectory of the ape infant in a decidedly human direction — that of manipulating the world around it. Human mothers, unlike ape mothers, also communicate their intentions linguistically to the infant. Once an intention is communicated linguistically, it can be negotiated, so there arises an intrinsic motivation to tune into and understand such communications on the part of the ape infant. The ‘debate’ in ape language, centered on whether they have it or not, has missed the point. This debate has ignored the key rearing variables that differ dramatically across the studies. Apart from Kanzi and family, all other apes in these studies are left alone at night and drilled on associative pairings during the day.”

Sue Savage-Rumbaugh is a primatologist best known for her work with two bonobos, Kanzi and Panbanisha, investigating their use of “Great Ape language” using lexigrams and computer-based keyboards. Until recently she was based at Georgia State University’s Language Research Center in Atlanta.

To read full essay click Human Language—Human Consciousness, National Humanities Center, Jan 2nd, 2011

See also:

John Shotter on encounters with ‘Other’ - from inner mental representation to dialogical social practices
Do thoughts have a language of their own? The language of thought hypothesis, Lapidarium notes


The Genographic Project ☞ A Landmark Study of the Human Journey 


Human Migration, Population Genetics, Maps, DNA.

"Where do you really come from? And how did you get to where you live today? DNA studies suggest that all humans today descend from a group of African ancestors who—about 60,000 years ago—began a remarkable journey.

The Genographic Project is seeking to chart new knowledge about the migratory history of the human species by using sophisticated laboratory and computer analysis of DNA contributed by hundreds of thousands of people from around the world. In this unprecedented, real-time research effort, the Genographic Project is closing the gaps in what science knows today about humankind’s ancient migration stories.

The Genographic Project is a multi-year research initiative led by National Geographic Explorer-in-Residence Dr. Spencer Wells. Dr. Wells and a team of renowned international scientists and IBM researchers are using cutting-edge genetic and computational technologies to analyze historical patterns in DNA from participants around the world to better understand our human genetic roots.”


The Genographic Project - Human Migration, Population Genetics, Maps, DNA, National Geographic

The Genographic Project - Introduction


See also:

Evolution of Language tested with genetic analysis


How Epicurus’ ideas survived through Lucretius’ poetry, and led to toleration

                                    Illustration:  Oxford: Anthony Stephens, 1683

Hunc igitur terrorem animi tenebrasque necessest
non radii solis neque lucida tela diei
discutiant, sed naturae species ratioque.

"Therefore it is necessary that neither the rays of the sun nor the shining spears of Day should shatter this terror and darkness of the mind, but the aspect and reason of nature."

— Lucretius, De Rerum Natura (On the Nature of Things), Book I, lines 146–148.

As Greenblatt describes it, Lucretius (borrowing from Democritus and others), says [more than 2,000 years ago] the universe is made of an infinite number of atoms:

"Moving randomly through space, like dust motes in a sunbeam, colliding, hooking together, forming complex structures, breaking apart again, in a ceaseless process of creation and destruction. There is no escape from this process. (…) There is no master plan, no divine architect, no intelligent design.

All things, including the species to which you belong, have evolved over vast stretches of time. The evolution is random, though in the case of living organisms, it involves a principle of natural selection. That is, species that are suited to survive and to reproduce successfully, endure, at least for a time; those that are not so well suited, die off quickly. But nothing — from our own species, to the planet on which we live, to the sun that lights our day — lasts forever. Only the atoms are immortal.”

— cited in Lucretius, Man Of Modern Mystery, NPR, Sep 19, 2011

"“On the Nature of Things,” a poem written 2,000 years ago that flouted many mainstream concepts, helped the Western world to ease into modernity. (…)

Harvard literary scholar Stephen Greenblatt has proposed a sort of metaphor for how the world became modern. An ancient Roman poem, lost for 1,000 years, was recovered in 1417. Its presciently modern ideas — that the world is made of atoms, that there is no life after death, and that there is no purpose to creation beyond pleasure — dropped like an atomic bomb on the fixedly Christian culture of Western Europe.

But this poem’s radical and transformative ideas survived what could have been a full-blown campaign against it, said Greenblatt. (…) One reason is that it was art. A tract would have drawn the critical attention of the authorities, who during the Renaissance still hewed to Augustine’s notion that Christian beliefs were “unshakeable, unchangeable, coherent.”

The ancient poem that contained such explosive ideas, and that packaged them so pleasingly, was “On the Nature of Things” (“De Rerum Natura”) by Roman poet and philosopher Titus Lucretius Carus, who died five decades before the start of the Christian era. Its intent was to counter the fear of death and the fear of the supernatural. Lucretius rendered into poetry the ideas of Epicurus, a Greek philosopher who had died some 200 years earlier. Both men embraced a core idea: that life was about the pursuit of pleasure and the avoidance of pain. (…)

Among the most stunning ideas Lucretius promoted in his poem was that the world is made of atoms, imperishable bits of matter he called “seeds.” All the rest was void — nothingness. Atoms never disappeared, but were material grist for the world’s ceaseless change, without any creator or design or afterlife.

These ideas, “drawn from a defunct pagan past,” were intolerable in 15th-century Europe, said Greenblatt, so much so that for the next 200 years they had to survive every “formal and informal mechanism of aversion and repression” of the age.

“A few wild exceptions” embraced this pagan past explicitly, said Greenblatt, including Dominican friar Giordano Bruno, whose “fatal public advocacy” of Lucretius came to an end in 1600. Branded a pantheist, he was imprisoned, tortured, and burned at the stake.

But the poem itself, a repository of intolerable ideas, was allowed to circulate. How was this so?

Greenblatt offered three explicit reasons:

— Reading strategies. In the spirit of commonplace books, readers of that era focused on individual passages rather than larger (and disturbing) meanings. Readers preferred to see the poem as a primer on Latin and Greek grammar, philology, natural history, and Roman culture.

— Scholarship. Official commentaries on the text were not intended to revive the radical ideas of Lucretius, but to put the language and imagery of a “dead work” in context, “a homeostatic survival,” said Greenblatt, “to make the corpse accessible.” He showed an image from a 1511 scholarly edition of the poem, in which single lines on each page lay “like a cadaver on a table,” surrounded by elaborate scholarly text. But the result was still preservation. “Scholarship,” he said, “is rarely credited properly in the history of toleration.”

— Aesthetics. A 1563 annotated edition of the poem acknowledged that its precepts were alien to Christian belief, but “it is no less a poem.”

“Certainly almost every one of the key principles was an offense to right-thinking Christians,” said Greenblatt. “But the poetry was compellingly, stunningly beautiful.”

Its “immensely seductive form,” he said — the soul of tolerance — helped to make aesthetics the concept that bridged the gap between the Renaissance and the early modern age.

Michel de Montaigne, the 16th-century French nobleman who invented the art of the essay, helped to maintain that aesthetic thread. His work includes almost 100 quotations from Lucretius. It was explicitly aesthetic appreciation of the old Roman, said Greenblatt, despite Montaigne’s own “genial willingness to submit to Christian orthodoxy.”

In the end, Lucretius and the ideas he borrowed from Epicurus survived because of art. “That aesthetic dimension of the ancient work (…) was the key element in the survival and transmission of what was perceived (…) by virtually everyone in the world to be intolerable,” said Greenblatt. “The thought police were only rarely called in to investigate works of art.”

One irony abides. Epicurus himself was known to say, “I spit on poetry,” yet his ideas only survive because of it. Lucretius saw his art as “honey smeared around the lip of a cup,” said Greenblatt, “that would enable readers to drink it down.”

The Roman poet thought there was no creator or afterlife, but that “should not bring with it a cold emptiness,” said Greenblatt. “It shouldn’t be only the priests of the world, with their delusions, who could convey to you that feeling of the deepest wonder.””

— Corydon Ireland, Through artistry, toleration, Harvard Gazette, Oct 31, 2011

See also:

☞ Lucretius, On the Nature of Things (1st century B.C.), History of Science Online

"In De rerum natura (On the Nature of Things), the poet Lucretius (ca. 50 BC) fused the atomic theory of Democritus and Leucippus with the philosophy of Epicurus in order to argue against the existence of the gods. While ordinary humans might fear the thunderbolts of Jove or torments in the underworld after death, Lucretius advised his readers to take courage in the knowledge that death is merely a dissolution of the body, as atoms combine and reassemble according to chance as they move through the void. Against the Stoics, Aristotelians, and Neoplatonists, Lucretius argued for a mechanistic universe governed by chance. He also argued for a plurality of worlds (and these planets, like the Earth, need not be spherical) and a non-hierarchical universe. Despite the paucity of ancient readers persuaded by Lucretius’ arguments, his work was almost universally admired as a masterful example of Latin style.”

Titus Lucretius Carus (ca. 99 BCE – ca. 55 BCE) was a Roman poet and philosopher.

See also:

Stephen Greenblatt, The Answer Man, The New Yorker, Aug 8, 2011
Lucretius, Man Of Modern Mystery, NPR, Sep 19, 2011
☞ Christian Flow, Swerves, Harvard Magazine Jul-Aug 2011
Lucretius on the infinite universe, the beginning of things and the likelihood of extraterrestrial life, Lapidarium
Lucretius: ‘O unhappy race of men, when they ascribed actions to the gods’, Lapidarium


Kevin Kelly on information, evolution and technology: ‘The essence of life is not energy but ideas’


"Technology’s dominance ultimately stems not from its birth in human minds but from its origin in the same self-organization that brought galaxies, planets, life, and minds into existence. It is part of a great asymmetrical arc that begins at the big bang and extends into ever more abstract and immaterial forms over time. The arc is the slow yet irreversible liberation from the ancient imperative of matter and energy.”

Kevin Kelly, What Technology Wants, New York: Viking, The Penguin Group, 2010

"The best way to understand the manufactured world is not to see it as a work of human imagination only, but to see it as an extension of the biological world. Most of us walk around with a strict mental dichotomy between the natural world of genes and the artificial world of concrete and code. When we actually look at how evolution works, the distinction begins to break down. The defining force behind life is not energy but information. Evolution is a process of information transmission, and so is technology, which is why it too reflects a biological transcendence.

Q: You have described technology as the “seventh kingdom of life” – which is a very ontological description – and as “the accumulation of ideas” – which is an epistemological description. Are the two converging?

Kelly: I take a very computational view of life and evolution. If you look at the origins of life and the forces of evolution, they are very intangible. Life is built on bits, on ideas, on information, on immaterial things. The technology sphere we have made – which is what I call the Technium – consists of information as well. We can take a number of atoms and arrange them in such a way as to maximize their usefulness – for example by creating a cell phone. When we think about who we are, we are always talking about information, about knowledge, about processes that increase the complexity of things. (…)

I am a critic of those who say that the internet has become a sentient and living being. But while the internet is not conscious like an organism, it exhibits some lifelike qualities. Life is not a binary thing that is either there or not there. It is a continuum between semi-living things like viruses and very living things like us. What we are seeing right now is an increased “lifeness” in technology as we move across the continuum. As things become more complex, they become more lifelike. (…)

One of the problems for biologists right now is to distinguish between random and organized processes. If we want to think coherently about the relationship between biology and technology, we need good working definitions to outline the edges of the spectrum of life that we are investigating. One of the ways to do that is to create artificial life and then debate whether we have crossed a threshold. I think we are beginning to see actual evolution in technology because the similarities to natural evolution are so large that it has become hard to ignore them. (…)

I think that the essence of life is natural and subject to investigation by reason. Quantum physics is science, but it is so far removed from our normal experience that the investigation becomes increasingly difficult. Not everyone might understand it, but collectively we can. One of the reasons we want to build artificial intelligence is to supplement our human intelligence, because we may require other kinds of thinking to understand these mysteries. Technology is a way to manufacture types of thinking that don’t yet exist. (…)

Innovation always has unintended consequences. Every new invention creates new solutions, but it also creates almost as many new problems. I tend to think that technology is not really powerful unless it can be powerfully abused. The internet is a great example of that: It will be abused, there will be very significant negative consequences. Even the expansion of choices itself has unintended consequences. Barry Schwartz calls it the “paradox of choice”: Humans have evolved with a limited capacity for making decisions. We can be paralyzed by choice! (…)

Most of the problems today have been generated by technology, and most future problems will be generated by technology as well. I am so technocentric that I say: The solution to technological problems is more technology. Here’s a tangible example: If I throw around some really bad ideas in this interview, you won’t counsel me to stop thinking. You will encourage me to think more and come up with better ideas. Technology is a way of thinking. The proper response to bad technology is not less, but more and better technology. (…)

I always think of technology as a child: You have to work with it, you have to find the right role and keep it away from bad influences. If you tell your child, “I will disown you if you become a lawyer”, that will almost guarantee that they become a lawyer. Every technology can be weaponized. But the way to stop that is not prohibition but an embrace of that technology to steer its future development. (…)

I am not a utopian who believes that technology will solve our problems. I am a protopian, I believe in gradual progress. And I am convinced that much of that progress is happening outside of our control. In nature, new species fill niches that can be occupied and inhabited. And sometimes, these niches are created by previous developments. We are not really in control of those processes. The same is true for innovation: There is an innate bias in the Technium that makes certain processes inevitable. (…)

I use the term the same way you would describe adolescence as the inevitable step between childhood and adulthood. We are destined by the physics and chemistry of matter. If we looked at a hundred planets in the universe that were inhabited by intelligent life, I bet that we would eventually see something like the internet on almost all of them. But can we find exceptions? Probably. (…)

Q: Is innovation a process that can continue indefinitely? Or does the infinite possibility space eventually run against the constraints of a world with finite resources and finite energy?

Kelly: I don’t believe in omega points. One of the remarkable things about life is that evolution does not stop. It always finds new paths forwards and new niches to occupy. As I said before, the essence of life is not energy but ideas. If there are limits to how many ideas can exist within a brain or within a system, we are still very far away from those limits. (…)

Long before we reach a saturation point, we will evolve into something else. We invented our humanity, and we can reinvent ourselves with genetic engineering or other innovations. We might even fork into a species that embraces speedy development and a species that wants no genetic engineering.

Q: You are advocating a very proactive approach to issues like genetic enhancements and human-technological forms of symbiosis, yet you also stress the great potential for abuse, for ethical problems and for unintended consequences.

Kelly: Yes, we are steamrolling ahead. The net gain will slightly outweigh the negative aspects. That is all we need: A slightly greater range of choices and opportunities every year equals progress. (…)

For the past ten thousand years, technological progress has on average enabled our opportunities to expand. The easiest way to demonstrate the positive arc of progress is to look at the number of people today who would want to live in an earlier time. Any of us could sell all material possessions within days and live like a caveman. I have written on the Amish people, and I have lived with native tribes, so I understand the attractions of that lifestyle. It’s a very supportive and grounded reality. But the cost of that experience is the surrender of all the other choices and opportunities we now enjoy. (…)

My point about technology is that every person has a different set of talents and abilities. The purpose of technology is to provide us with tools to maximize our talents and explore our opportunities. The challenge is to make use of the tools that fit us. Your technology can be different from my technology because our talents and interests are different. If you look at the collective, you might think that we are all becoming more alike. But when you go down to the individual level, technology has the potential to really bring out the differences that make us special. Innovation enables individualization. (…)

Q: Is the internet increasing our imaginative or innovative potential?

Kelly: That is a good point. A lot of these impossibilities happen within collective or globalist structures. We can do things that were completely impossible during the industrial age because we can now transcend our individual experience. (…)

Q: The industrial age made large-scale production possible, now we see large-scale collaboration. What is the next step?

Kelly: I love that question. What is the next stage? I think we are decades or centuries away from a global intelligence, but that would be another phase of human development. If you could generate thoughts on a planetary scale, if we moved towards singularity, that would be huge.

Q: The speed of change leaves room for optimism.

Kelly: My optimism is off the chart. I got it from Asia, where I saw how quickly civilizations could move from abject poverty to incredible wealth. If they can do it, almost anything is possible. Let me go back to the original quote about seeing God in a cell phone: The reason we should be optimistic is life itself. It keeps bouncing back even when we do horrible things to it. Life is brimming with possibilities, details, intelligence, marvels, ingenuity. And the Technium is very much an extension of that possibility space.”

Kevin Kelly, writer, photographer, conservationist, the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Catalog, "My Optimism Is Off The Chart", The European Magazine, 20.09.2011 (Illustration: Seashells from Okinawa by Thomas Schmall)

See also:

Kevin Kelly on Technology, or the Evolution of Evolution
Kevin Kelly on Why the Impossible Happens More Often
Kevin Kelly on the Satisfaction Paradox
Technology tag on Lapidarium
Technology tag on Lapidarium notes


Why Does Beauty Exist? Jonah Lehrer: ‘Beauty is a particularly potent and intense form of curiosity’

                                Interwoven Beauty by John Lautermilch


"Here’s my (extremely speculative) theory: Beauty is a particularly potent and intense form of curiosity. It’s a learning signal urging us to keep on paying attention, an emotional reminder that there’s something here worth figuring out. Art hijacks this ancient instinct: If we’re looking at a Rothko, that twinge of beauty in the mOFC is telling us that this painting isn’t just a blob of color; if we’re listening to a Beethoven symphony, the feeling of beauty keeps us fixated on the notes, trying to find the underlying pattern; if we’re reading a poem, a particularly beautiful line slows down our reading, so that we might pause and figure out what the line actually means. Put another way, beauty is a motivational force that helps modulate conscious awareness. The problem beauty solves is the problem of trying to figure out which sensations are worth making sense of and which ones can be easily ignored.

Let’s begin with the neuroscience of curiosity, that weak form of beauty. There’s an interesting recent study from the lab of Colin Camerer at Caltech, led by Min Jeong Kang. (…)

The first thing the scientists discovered is that curiosity obeys an inverted U-shaped curve, so that we’re most curious when we know a little about a subject (our curiosity has been piqued) but not too much (we’re still uncertain about the answer). This supports the information gap theory of curiosity, which was first developed by George Loewenstein of Carnegie Mellon in the early 90s. According to Loewenstein, curiosity is rather simple: It comes when we feel a gap “between what we know and what we want to know”. This gap has emotional consequences: it feels like a mental itch. We seek out new knowledge because that’s how we scratch the itch.
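The inverted-U relationship described above can be sketched in a few lines of code. This is only a toy model for illustration: the quadratic shape, the 0-to-1 “knowledge” scale, and the peak at the midpoint are my assumptions, not parameters reported by the Caltech study.

```python
# Toy sketch of the inverted-U curve relating curiosity to prior knowledge.
# The quadratic form 4k(1-k) is an illustrative assumption, chosen only
# because it is zero at total ignorance (k=0) and complete knowledge (k=1)
# and maximal in between (k=0.5) -- the qualitative shape the study reports.

def curiosity(knowledge: float) -> float:
    """Return a curiosity level in [0, 1] for a knowledge level in [0, 1]."""
    if not 0.0 <= knowledge <= 1.0:
        raise ValueError("knowledge must lie in [0, 1]")
    return 4.0 * knowledge * (1.0 - knowledge)

if __name__ == "__main__":
    # Curiosity rises, peaks at partial knowledge, then falls again.
    for k in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"knowledge={k:.2f}  curiosity={curiosity(k):.2f}")
```

Knowing nothing and knowing everything both yield zero curiosity in this sketch; the “mental itch” is strongest halfway in between, which is the information-gap intuition in miniature.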

The fMRI data nicely extended this information gap model of curiosity. It turns out that, in the moments after the question was first asked, subjects showed a substantial increase in brain activity in three separate areas: the left caudate, the prefrontal cortex and the parahippocampal gyri. The most interesting finding is the activation of the caudate, which seems to sit at the intersection of new knowledge and positive emotions. (For instance, the caudate has been shown to be activated by various kinds of learning that involve feedback, while it’s also been closely linked to various parts of the dopamine reward pathway.) The lesson is that our desire for more information – the cause of curiosity – begins as a dopaminergic craving, rooted in the same primal pathway that responds to sex, drugs and rock and roll.

I see beauty as a form of curiosity that exists in response to sensation, and not just information. It’s what happens when we see something and, even though we can’t explain why, want to see more. But here’s the interesting bit: the hook of beauty, like the hook of curiosity, is a response to an incompleteness. It’s what happens when we sense something missing, when there’s a unresolved gap, when a pattern is almost there, but not quite. I’m thinking here of that wise Leonard Cohen line: “There’s a crack in everything – that’s how the light gets in.” Well, a beautiful thing has been cracked in just the right way.

Beautiful music and the brain

The best way to reveal the link between curiosity and beauty is with music. Why do we perceive certain musical sounds as beautiful? On the one hand, music is a purely abstract art form, devoid of language or explicit ideas. The stories it tells are all subtlety and subtext; there is no content to get curious about. And yet, even though music says little, it still manages to touch us deep, to tittilate some universal dorsal hairs.

We can now begin to understand where these feelings come from, why a mass of vibrating air hurtling through space can trigger such intense perceptions of beauty. Consider this recent paper in Nature Neuroscience by a team of Montreal researchers. (…)

Because the scientists were combining methodologies (PET and fMRI) they were able to obtain a precise portrait of music in the brain. The first thing they discovered (using ligand-based PET) is that beautiful music triggers the release of dopamine in both the dorsal and ventral striatum. This isn’t particularly surprising: these regions have long been associated with the response to pleasurable stimuli. The more interesting finding emerged from a close study of the timing of this response, as the scientists looked to see what was happening in the seconds before the subjects got the chills.
I won’t go into the precise neural correlates – let’s just say that you should thank your right nucleus accumbens the next time you listen to your favorite song – but I want instead to focus on an interesting distinction observed in the experiment:


In essence, the scientists found that our favorite moments in the music – those sublimely beautiful bits that give us the chills – were preceded by a prolonged increase of activity in the caudate, the same brain area involved in curiosity. They call this the “anticipatory phase,” as we await the arrival of our favorite part:

Immediately before the climax of emotional responses there was evidence for relatively greater dopamine activity in the caudate. This subregion of the striatum is interconnected with sensory, motor and associative regions of the brain and has been typically implicated in learning of stimulus-response associations and in mediating the reinforcing qualities of rewarding stimuli such as food.

In other words, the abstract pitches have become a primal reward cue, the cultural equivalent of a bell that makes us drool. Here is their summary:

The anticipatory phase, set off by temporal cues signaling that a potentially pleasurable auditory sequence is coming, can trigger expectations of euphoric emotional states and create a sense of wanting and reward prediction. This reward is entirely abstract and may involve such factors as suspended expectations and a sense of resolution. Indeed, composers and performers frequently take advantage of such phenomena, and manipulate emotional arousal by violating expectations in certain ways or by delaying the predicted outcome (for example, by inserting unexpected notes or slowing tempo) before the resolution to heighten the motivation for completion.


While music can often seem (at least to the outsider) like an intricate pattern of pitches – it’s art at its most mathematical – it turns out that the most important part of every song or symphony is when the patterns break down, when the sound becomes unpredictable. If the music is too obvious, it is annoyingly boring, like an alarm clock. (Numerous studies, after all, have demonstrated that dopamine neurons quickly adapt to predictable rewards. If we know what’s going to happen next, then we don’t get excited.) This is why composers introduce the tonic note in the beginning of the song and then studiously avoid it until the end. They want to make us curious, to create a beautiful gap between what we hear and what we want to hear.

To demonstrate this psychological principle, the musicologist Leonard Meyer, in his classic book Emotion and Meaning in Music (1956), analyzed the 5th movement of Beethoven’s String Quartet in C-sharp minor, Op. 131. Meyer wanted to show how music is defined by its flirtation with – but not submission to – our expectations of order. To prove his point, Meyer dissected fifty measures of Beethoven’s masterpiece, showing how Beethoven begins with the clear statement of a rhythmic and harmonic pattern and then, in an intricate tonal dance, carefully avoids repeating it. What Beethoven does instead is suggest variations of the pattern. He is its evasive shadow. If E major is the tonic, Beethoven will play incomplete versions of the E major chord, always careful to avoid its straight expression. He wants to preserve an element of uncertainty in his music, making our brains exceedingly curious for the one chord he refuses to give us. Beethoven saves that chord for the end.

According to Meyer, it is the suspenseful tension of music (arising out of our unfulfilled expectations) that is the source of the music’s beauty. While earlier theories of music focused on the way a noise can refer to the real world of images and experiences (its “connotative” meaning), Meyer argued that the emotions we find in music come from the unfolding events of the music itself. This “embodied meaning” arises from the patterns the symphony invokes and then ignores, from the ambiguity it creates inside its own form. “For the human mind,” Meyer writes, “such states of doubt and confusion are abhorrent. When confronted with them, the mind attempts to resolve them into clarity and certainty.” And so we wait, expectantly, for the resolution of E major, for Beethoven’s established pattern to be completed. This nervous anticipation, says Meyer, “is the whole raison d’etre of the passage, for its purpose is precisely to delay the cadence in the tonic.” The uncertainty – that crack in the melody – makes the feeling.

Why the feeling of beauty is useful

What I like about this speculation is that it begins to explain why the feeling of beauty is useful. The aesthetic emotion might have begun as a cognitive signal telling us to keep on looking, because there is a pattern here that we can figure out. In other words, it’s a sort of metacognitive hunch, a response to complexity that isn’t incomprehensible. Although we can’t quite decipher this sensation – and it doesn’t matter if the sensation is a painting or a symphony – the beauty keeps us from looking away, tickling those dopaminergic neurons and dorsal hairs. Like curiosity, beauty is a motivational force, an emotional reaction not to the perfect or the complete, but to the imperfect and incomplete. We know just enough to know that we want to know more; there is something here, we just don’t know what. That’s why we call it beautiful.”

Jonah Lehrer, American journalist who writes on the topics of psychology, neuroscience, and the relationship between science and the humanities, Why Does Beauty Exist?, Wired science, July 18, 2011

See also:

Beauty is in the medial orbitofrontal cortex of the beholder, study finds
Denis Dutton: A Darwinian theory of beauty, TED, Lapidarium transcript
The Science of Art. A Neurological Theory of Aesthetic Experience
☞ Katherine Harmon, Brain on Beauty Shows the Same Pattern for Art and Music, Scientific American, July 7, 2011


Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’


"In the mid-1970s, Tim Wilson and Dick Nisbett opened the basement door with their landmark paper entitled “Telling More Than We Can Know,” [pdf] in which they reported a series of experiments showing that people are often unaware of the true causes of their own actions, and that when they are asked to explain those actions, they simply make stuff up. People don’t realize they are making stuff up, of course; they truly believe the stories they are telling about why they did what they did. But as the experiments showed, people are telling more than they can know. The basement door was opened by experimental evidence, and the unconscious took up permanent residence in the living room. Today, psychological science is rife with research showing the extraordinary power of unconscious mental processes. (…)

At the center of all his work lies a single enigmatic insight: we seem to know less about the worlds inside our heads than about the world our heads are inside.

The Torah asks this question: “Is not a flower a mystery no flower can explain?” Some scholars have said yes, some scholars have said no. Wilson has said, “Let’s go find out.” He has always worn two professional hats — the hat of the psychologist and the hat of the methodologist. He has written extensively about the importance of using experimental methods to solve real world problems, and in his work on the science of psychological change — he uses a scientific flashlight to chase away a whole host of shadows by examining the many ways in which human beings try to change themselves — from self-help to psychotherapy — and asking whether these things really work, and if so, why? His answers will surprise many people and piss off the rest. I predict that this new work will be the center of a very interesting storm.”

Daniel Gilbert, Harvard College Professor of Psychology at Harvard University; Director of Harvard’s Hedonic Psychology Laboratory; Author, Stumbling on Happiness.

"It’s not the objective environment that influences people, but their constructs of the world. You have to get inside people’s heads and see the world the way they do. You have to look at the kinds of narratives and stories people tell themselves as to why they’re doing what they’re doing. What can get people into trouble sometimes in their personal lives, or for more societal problems, is that these stories go wrong. People end up with narratives that are dysfunctional in some way.

We know from cognitive behavioral therapy and clinical psychology that one way to change people’s narratives is through fairly intensive psychotherapy. But social psychologists have suggested that, for less severe problems, there are ways to redirect narratives more easily that can have amazingly powerful long-term effects. This is an approach that I’ve come to call story editing. By giving people little prompts, suggestions about the ways they might reframe a situation, or think of it in a slightly different way, we can send them down a narrative path that is much healthier than the one they were on previously. (…)

This little message that maybe it’s not me, it’s the situation I’m in, and that that can change, seemed to alter people’s stories in ways that had dramatic effects down the road. Namely, people who got this message, as compared to a control group that did not, got better grades over the next couple of years and were less likely to drop out of college. Since then, there have been many other demonstrations of this sort that show that little ways of getting people to redirect their narrative from one path down another is a powerful tool to help people live better lives. (…)

Think back to the story editing metaphor: What these writing exercises do is make us address problems that we haven’t been able to make sense of and put us through a sense-making process of reworking it in such a way that we gain a new perspective and find some meaning, so that we basically come up with a better story that allows us to put that problem behind us. This is a great example of a story editing technique that can be quite powerful. (…)

Social psychology is a branch of psychology that was begun in the 1950s, largely by immigrants from Germany who were escaping the Nazi regime — Kurt Lewin being the most influential of them. What they had to offer at that time was largely an alternative to behaviorism. Instead of looking at behavior as solely the product of our objective reinforcement environment, Lewin and others said you have to get inside people’s heads and look at the world as they perceive it. These psychologists were very influenced by Gestalt psychologists who were saying the same thing about perception, and they applied this lesson to the way the mind works in general. (…) But to be honest, the field is a little hard to define. What is social psychology? Well, the social part is about interactions with other people, and topics such as conformity are active areas of research. (…)

Most economists don’t take the social psychological approach of trying to get inside the heads of people and understanding how they interpret the world. (…)

My dream is that policymakers will become more familiar with this approach and be as likely to call upon a social psychologist as an economist to address social issues. (…)

Another interesting question is the role of evolutionary theory in psychology, and social psychology in particular.  (…)

Evolutionary psychology has become a dominant force in the field. There are many who use it as their primary theoretical perspective, as a way to understand why we do what we do. (…)

There are some striking parallels between psychoanalytic theory and evolutionary theory. Both theories, at some general level, are true. Evolutionary theory, of course, shows how the forces of natural selection operated on human beings. Psychoanalytic theory argues that our childhood experiences mold us in certain ways and give us outlooks on the world. Our early relationships with our parents lead to unconscious structures that can be very powerful. (…)

One example where evolutionary psychology led to some interesting testable hypotheses is work by Jon Haidt, my colleague at the University of Virginia. He has developed a theory of moral foundations that says that all human beings endorse the same list of moral values, but that people of different political stripes believe some of these values are more important than others. In other words, liberals may have somewhat different moral foundations than conservatives. Jon has persuasively argued that one reason that political discourse has become so heated and divisive in our country is that there is a lack of understanding in one camp of the moral foundations that the other camp is using to interpret and evaluate the world. If we can increase that understanding, we might lower the heat and improve the dialogue between people on opposite ends of the political spectrum.

Another way in which evolutionary theory has been used is to address questions about the origins of religion. This is not a literature I have followed that closely, to be honest, but there’s obviously a very interesting discourse going on about group selection and the origins and purpose of religion. The only thing I’ll add is, back to what I’ve said before about the importance of having narratives and stories to give people a sense of meaning and purpose, well, religion is obviously one very important source of such narratives. Religion gives us a sense that there is a purpose and a meaning to life, the sense that we are important in the universe, and that our lives aren’t meaningless specks like a piece of sand on a beach. That can be very powerful for our well-being. I don’t think religion is the only way to accomplish that; there are many belief systems that can give us a sense of meaning and purpose other than religion. But religion can fill that void.”

Timothy D. Wilson is the Sherrell J. Aston Professor of Psychology at the University of Virginia and a researcher of self-knowledge and affective forecasting, The Social Psychological Narrative — or — What Is Social Psychology, Anyway?, Edge, 6 July 2011 (video and full transcript) (Illustration: Hope Kroll, Psychological 3-D narrative)

See also:

Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking
Iain McGilchrist on The Divided Brain and the Making of the Western World
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
David Deutsch: A new way to explain explanation
Cognition, perception, relativity tag on Lapidarium notes


Steven Pinker on the mind as a system of ‘organs of computation’


I present the mind as a system of “organs of computation” that allowed our ancestors to understand and outsmart objects, animals, plants, and each other. (…)

Most of the assumptions about the mind that underlie current discussions are many decades out of date. Take the hydraulic model of Freud, in which psychic pressure builds up in the mind and can burst out unless it’s channeled into appropriate pathways. That’s just false. The mind doesn’t work by fluid under pressure or by flows of energy; it works by information.

Or, look at the commentaries on human affairs by pundits and social critics. They say we’re “conditioned” to do this, or “brainwashed” to do that, or “socialized” to believe such and such. Where do these ideas come from? From the behaviorism of the 1920’s, from bad cold war movies from the 1950’s, from folklore about the effects of family upbringing that behavior genetics has shown to be false. The basic understanding that the human mind is a remarkably complex processor of information, an “organ of extreme perfection and complication,” to use Darwin’s phrase, has not made it into the mainstream of intellectual life. (…)

I see the mind as an exquisitely engineered device—not literally engineered, of course, but designed by the mimic of engineering that we see in nature, natural selection. That’s what “engineered” animals’ bodies to accomplish improbable feats, like flying and swimming and running, and it is surely what “engineered” the mind to accomplish its improbable feats. (…)

Research in psychology should be a kind of reverse engineering. When you rummage through an antique store and come across a contraption built of many finely meshing parts, you assume that it was put together for a purpose, and that if you only understood that purpose, you’d have insight as to why it has the parts arranged the way they are. That’s true for the mind as well, though it wasn’t designed by a designer but by natural selection. With that insight you can look at the quirks of the mind and ask how they might have made sense as solutions to some problem our ancestors faced in negotiating the world. That can give you an insight into what the different parts of the mind are doing.

Even the seemingly irrational parts of the mind, like strong passions—jealousy, revenge, infatuation, pride—might very well be good solutions to problems our ancestors faced in dealing with one another. For example, why do people do crazy things like chase down an ex-lover and kill the lover? How could you win someone back by killing them? It seems like a bug in our mental software. But several economists have proposed an alternative. If our mind is put together so that under some circumstances we are compelled to carry out a threat regardless of the costs to us, the threat is made credible. When a person threatens a lover, explicitly or implicitly, by communicating “If you ever leave me I’ll chase you down,” the lover could call his bluff if she didn’t have signs that he was crazy enough to carry it out even though it was pointless. And so the problem of building a credible deterrent into creatures that interact with one another leads to irrational behavior as a rational solution. "Rational," that is, with respect to the "goal" of our genes to maximize the number of copies of themselves. It isn’t "rational," of course, with respect to the goal of whole humans and societies to maximize happiness and fairness. (…)
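Pinker’s credible-deterrence argument can be sketched as a toy two-player game. This is an editorial illustration, not anything from the interview: the payoff numbers and the function name are arbitrary assumptions. The point it captures is that a threat which is costly to carry out is hollow against a cost-sensitive threatener, so the lover’s best response flips only when the threatener is known to retaliate regardless of cost.

```python
# Toy commitment game illustrating the credible-deterrence point.
# All payoff numbers below are illustrative assumptions, not from the text.

STAY, LEAVE = "stay", "leave"

def lover_best_response(punishes_regardless: bool) -> str:
    """The lover's best move given beliefs about the threatener.

    Staying is worth 0. Leaving is worth +2 if the threat is empty,
    but -5 if the threatener really will retaliate at any cost.
    """
    payoff_leave = -5 if punishes_regardless else +2
    payoff_stay = 0
    return LEAVE if payoff_leave > payoff_stay else STAY

# A purely cost-sensitive threatener would never punish after the fact,
# so the threat is hollow and leaving pays:
assert lover_best_response(punishes_regardless=False) == LEAVE

# Against someone "crazy enough" to carry out the threat regardless of
# the cost, the deterrent is credible and staying becomes the best reply:
assert lover_best_response(punishes_regardless=True) == STAY
```

The design choice here mirrors the argument: irrational commitment changes the opponent’s best response, which is why it can be selected for even though carrying out the threat never benefits the threatener.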

The paradoxes of happiness

There’s no absolute standard for well-being. A Paleolithic hunter-gatherer should not have fretted that he had no running shoes or central heating or penicillin. How can a brain know whether there is something worth striving for? Well, it can look around and see how well off other people are. If they can achieve something, maybe so can you. Other people anchor your well-being scale and tell you what you can reasonably hope to achieve. (…)

Another paradox of happiness is that losses are felt more keenly than gains. As Jimmy Connors said, “I hate to lose more than I like to win.” You are just a little happy if your salary goes up, but you’re really miserable if your salary goes down by the same amount. That too might be a feature of the mechanism designed to attain the attainable and no more. When we backslide, we keenly feel it because what we once had is a good estimate of what we can attain. But when we improve we have no grounds for knowing that we are as well off as we can hope to be. The evolutionary psychologist Donald Campbell called it “the happiness treadmill.” No matter how much you gain in fame, wealth, and so on, you end up at the same level of happiness you began with — though to go down a level is awful. Perhaps it’s because natural selection has programmed our reach to exceed our grasp, but by just a little bit. (…)
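The gain/loss asymmetry above can be sketched with a tiny asymmetric value function. This is an editorial aside: the function and the loss-aversion coefficient of 2.25 are borrowed from Kahneman and Tversky’s prospect-theory estimates, an outside assumption rather than a figure from this interview.

```python
# Toy value function: losses are weighted more heavily than equal gains.
# The 2.25 multiplier is an assumed prospect-theory-style coefficient.

def felt_value(change: float, loss_aversion: float = 2.25) -> float:
    """Subjective impact of a gain or loss relative to the status quo."""
    return change if change >= 0 else loss_aversion * change

raise_feels = felt_value(+1000)   # a modest lift: +1000
cut_feels = felt_value(-1000)     # a much sharper sting: -2250

# The same-sized loss looms larger than the gain:
assert abs(cut_feels) > abs(raise_feels)
```

Under this sketch a pay cut hurts more than an equal raise pleases, which is exactly the “backsliding is felt keenly” asymmetry Pinker describes.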

The brain as a kind of computer; information processing system

I place myself among those who think that you can’t understand the mind only by looking directly at the brain. Neurons, neurotransmitters, and other hardware features are widely conserved across the animal kingdom, but species have very different cognitive and emotional lives. The difference comes from the ways in which hundreds of millions of neurons are wired together to process information. I see the brain as a kind of computer—not like any commercial computer made of silicon, obviously, but as a device that achieves intelligence for some of the same reasons that a computer achieves intelligence, namely processing of information. (…)

I also believe that the mind is not made of Spam—it has a complex, heterogeneous structure. It is composed of mental organs that are specialized to do different things, like seeing, controlling hands and feet, reasoning, language, social interaction, and social emotions. Just as the body is divided into physical organs, the mind is divided into mental organs.

That puts me in agreement with Chomsky and against many neural network modelers, who hope that a single kind of neural network, if suitably trained, can accomplish every mental feat that we do. For similar reasons I disagree with the dominant position in modern intellectual life—that our thoughts are socially constructed by how we were socialized as children, by media images, by role models, and by conditioning. (…)

Many people lump together the idea that the mind has a complex innate structure with the idea that differences between people have to be innate. But the ideas are completely different. Every normal person on the planet could be innately equipped with an enormous catalog of mental machinery, and all the differences between people—what makes John different from Bill—could come from differences in experience, of upbringing, or of random things that happened to them when they were growing up.

To believe that there’s a rich innate structure common to every member of the species is different from saying the differences between people, or differences between groups, come from differences in innate structure. Here’s an example. Look at the number of legs—it’s an innate property of the human species that we have two legs as opposed to six like insects, or eight like spiders, or four like cats—so having two legs is innate. But if you now look at why some people have one leg, and some people have no legs, it’s completely due to the environment—they lost a leg in an accident, or from a disease. So the two questions have to be distinguished. And what’s true of legs is also true of the mind. (…)

Computer technology will never change the world as long as it ignores how the mind works. Why did people instantly start to use fax machines, and continue to use them even though electronic mail makes much more sense? There are millions of people who print out text from their computer onto a piece of paper and feed the paper into a fax machine, forcing the guy at the other end to take the paper out, read it, and crumple it up—or worse, scan it into his computer so that it becomes a file of bytes all over again. This is utterly ridiculous from a technological point of view, but people do it. They do it because the mind evolved to deal with physical objects, and it still likes to conceptualize entities that are owned and transferred among people as physical objects that you can lift and store in a box. Until computer systems, email, video cameras, VCR’s and so on are designed to take advantage of the way the mind conceptualizes reality, namely as physical objects existing at a location and impinged upon by forces, people are going to be baffled by their machines, and the promise of the computer revolution will not be fulfilled. (…)

Q: What is the significance of the Internet and today’s communications revolution for the evolution of the mind?

Probably not much. You’ve got to distinguish two senses of the word “evolution.” The sense used by me, Dawkins, Gould, and other evolutionary biologists refers to the changes in our biological makeup that led us to be the kind of organism we are today. The sense used by most other people refers to continuous improvement or progress. A popular idea is that our biological evolution took us to a certain stage, and our cultural evolution is going to take over—where evolution in both cases is defined as “progress.” I would like us to move away from that idea, because the processes that selected the genes that built our brains are very different from the processes that propelled the rise and fall of empires and the march of technology.

In terms of strict biological evolution, it’s impossible to know where, if anywhere, our species is going. Natural selection generally takes hundreds of thousands of years to do anything interesting, and we don’t know what our situation will be like in ten thousand or even one thousand years. Also, selection adapts organisms to a niche, usually a local environment, and the human species moves all over the place and lurches from lifestyle to lifestyle with dizzying speed on the evolutionary timetable. Revolutions in human life like the agricultural, industrial, and information revolutions occur so quickly that no one can predict what change, if any, they will have on our biological makeup.

The Internet does create a kind of supra-human intelligence, in which everyone on the planet can exchange information rapidly, a bit like the way different parts of a single brain can exchange information. This is not a new process; it’s been happening since we evolved language. Even non-industrial hunter-gatherer tribes pool information by the use of language.

That has given them remarkable local technologies—ways of trapping animals, using poisons, chemically treating plant foods to remove the bitter toxins, and so on. That is also a collective intelligence that comes from accumulating discoveries over generations, and pooling them amongst a group of people living at one time. Everything that’s happened since, such as writing, the printing press, and now the Internet, is a way of magnifying something that our species already knew how to do, which is to pool expertise by communication. Language was the real innovation in our biological evolution; everything since has just made our words travel farther or last longer.”

Steven Pinker, Canadian-American experimental psychologist, cognitive scientist and linguist, Organs of Computation, Edge, January 11, 1997 (Illustration source)

See also:

☞ Steven Pinker, Harvard University Cambridge, MA, So How Does the Mind Work? (pdf), Blackwell Publishing Ltd. 2005