Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization



Archive

Jun 18th, Tue

Do Geography and Altitude Shape the Sounds of a Language?

Languages that evolve at high elevations are more likely to include a sound that’s easier to make when the air is thinner, new research shows. (Photo: A. Skomorowska)

"[R]ecently, Caleb Everett, a linguist at the University of Miami, made a surprising discovery that suggests the assortment of sounds in human languages is not so random after all.

When Everett analyzed hundreds of different languages from around the world, as part of a study published today in PLOS ONE, he found that those that originally developed at higher elevations are significantly more likely to include ejective consonants. Moreover, he suggests an explanation that, at least intuitively, makes a lot of sense: The lower air pressure present at higher elevations enables speakers to make these ejective sounds with much less effort. (…)

The origin points of each of the languages studied, with black circles representing those with ejective sounds and empty circles those without. The inset plots by latitude and longitude the high-altitude inhabitable regions, where elevations exceed 1500 meters. (1) North American cordillera, (2) Andes, (3) Southern African plateau, (4) East African rift, (5) Caucasus and Javakheti plateau, (6) Tibetan plateau and adjacent regions. Image via PLOS ONE/Caleb Everett

Everett started out by pulling a geographically diverse sampling of 567 languages from the pool of an estimated 6,909 that are currently spoken worldwide. For each language, he used one location that most accurately represented its point of origin, according to the World Atlas of Language Structures. English, for example, was plotted as originating in England, even though it’s spread widely in the years since. But for most of the languages, making this determination is much less difficult than for English, since they’re typically pretty restricted in terms of geographic scope (the average number of speakers of each language analyzed is just 7,000).

He then compared the traits of the 475 languages that do not contain ejective consonants with the 92 that do. The ejective languages were clustered in eight geographic groups that roughly corresponded with five regions of high elevation—the North American Cordillera (which includes the Cascades and the Sierra Nevadas), the Andes and the Andean altiplano, the southern African plateau, the plateau of the East African rift and the Caucasus range.

When Everett broke things down statistically, he found that 87 percent of the languages with ejectives were located in or near high-altitude regions (defined as places with elevations of 1500 meters or greater), compared to just 43 percent of the languages without the sound. Of all languages located far from regions with high elevation, just 4 percent contained ejectives. And when he sliced the elevation criteria more finely—rather than just high altitude versus low altitude—he found that the odds of a given language containing ejectives kept increasing as the elevation of its origin point also increased:

[Chart: the share of languages containing ejectives rises as the elevation of their point of origin increases]
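The comparison Everett ran is, at bottom, a simple contingency-table calculation. Here is a minimal sketch in Python (not Everett’s actual code or data; the record layout and the toy sample are invented for illustration) of how the two headline shares are computed:

def share_near_high_altitude(languages):
    # Fraction of languages whose origin point lies in or near
    # high-altitude terrain (elevations of 1500 m or greater).
    near = sum(1 for lang in languages if lang["near_high_altitude"])
    return near / len(languages)

# Hypothetical records; the real study classified 567 languages.
sample = [
    {"name": "A", "ejectives": True,  "near_high_altitude": True},
    {"name": "B", "ejectives": True,  "near_high_altitude": False},
    {"name": "C", "ejectives": False, "near_high_altitude": True},
    {"name": "D", "ejectives": False, "near_high_altitude": False},
    {"name": "E", "ejectives": False, "near_high_altitude": False},
]

with_ejectives    = [l for l in sample if l["ejectives"]]
without_ejectives = [l for l in sample if not l["ejectives"]]

# Everett reports roughly 87% vs. 43% for these two shares across
# the 92 ejective and 475 non-ejective languages.
print(share_near_high_altitude(with_ejectives))
print(share_near_high_altitude(without_ejectives))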

Everett’s explanation for this phenomenon is fairly simple: Making ejective sounds requires effort, but slightly less effort when the air is thinner, as is the case at high altitudes. This is because the sound depends upon the speaker compressing a breath of air and releasing it in a sudden burst that accompanies the sound, and compressing air is easier when it’s less dense to begin with. As a result, over the thousands of years and countless random events that shape the evolution of a language, those that developed at high altitudes became gradually more and more likely to incorporate and retain ejectives. Noticeably absent, however, are ejectives in languages that originate close to the Tibetan and Iranian plateaus, a region known colloquially as the roof of the world.

The finding could prompt linguists to look for other geographically-driven trends in the languages spoken around the world. For instance, there might be sounds that are easier to make at lower elevations, or perhaps drier air could make certain sounds trip off the tongue more readily.”

— Do Geography and Altitude Shape the Sounds of a Language?, Smithsonian, June 12, 2013.

See also:

Ejectives, High Altitudes, and Grandiose Linguistic Hypotheses, June 17, 2013
☞ Caleb Everett, Evidence for Direct Geographic Influences on Linguistic Sounds: The Case of Ejectives, PLOS ONE 2013

Abstract:

"We present evidence that the geographic context in which a language is spoken may directly impact its phonological form. We examined the geographic coordinates and elevations of 567 language locations represented in a worldwide phonetic database. Languages with phonemic ejective consonants were found to occur closer to inhabitable regions of high elevation, when contrasted to languages without this class of sounds. In addition, the mean and median elevations of the locations of languages with ejectives were found to be comparatively high.

The patterns uncovered surface on all major world landmasses, and are not the result of the influence of particular language families. They reflect a significant and positive worldwide correlation between elevation and the likelihood that a language employs ejective phonemes. In addition to documenting this correlation in detail, we offer two plausible motivations for its existence.

We suggest that ejective sounds might be facilitated at higher elevations due to the associated decrease in ambient air pressure, which reduces the physiological effort required for the compression of air in the pharyngeal cavity–a unique articulatory component of ejective sounds. In addition, we hypothesize that ejective sounds may help to mitigate rates of water vapor loss through exhaled air. These explications demonstrate how a reduction of ambient air density could promote the usage of ejective phonemes in a given language. Our results reveal the direct influence of a geographic factor on the basic sound inventories of human languages.”

Evolution of Language tested with genetic analysis
Are We “Meant” to Have Language and Music? How Language and Music Mimicked Nature and Transformed Ape to Man
Mark Changizi on How To Put Art And Brain Together

Jan 27th, Sun

Daniel C. Dennett on an attempt to understand the mind; autonomic neurons, culture and computational architecture


"What Darwin and Turing had both discovered, in their different ways, was the existence of competence without comprehension."

— Daniel C. Dennett, What Darwin’s theory of evolution teaches us about Alan Turing and artificial intelligence, Lapidarium

"I’m trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine. This is called homuncular functionalism, because you take the whole person. You break the whole person down into two or three or four or seven sub persons that are basically agents. They’re homunculi, and this looks like a regress, but it’s only a finite regress, because you take each of those in turn and you break it down into a group of stupider, more specialized homunculi, and you keep going until you arrive at parts that you can replace with a machine, and that’s a great way of thinking about cognitive science. It’s what good old-fashioned AI tried to do and still trying to do.

The idea is basically right, but when I first conceived of it, I made a big mistake. I was at that point enamored of the McCulloch-Pitts logical neuron. McCulloch and Pitts had put together the idea of a very simple artificial neuron, a computational neuron, which had multiple inputs and a single branching output and a threshold for firing, and the inputs were either inhibitory or excitatory. They proved that in principle a neural net made of these logical neurons could compute anything you wanted to compute. So this was very exciting. It meant that basically you could treat the brain as a computer and treat the neuron as a sort of basic switching element in the computer, and that was certainly an inspiring over-simplification. Everybody knew it was an over-simplification, but people didn’t realize how much, and more recently it’s become clear to me that it’s a dramatic over-simplification, because each neuron, far from being a simple logical switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.
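Dennett’s description of the McCulloch-Pitts unit is concrete enough to sketch in a few lines. Here is a minimal Python rendering (an illustration assuming the classic 1943 formulation, in which any active inhibitory input vetoes firing outright), showing how threshold choices yield logic gates, which is the intuition behind the proof that nets of such units can compute any Boolean function:

def mp_neuron(excitatory, inhibitory, threshold):
    # McCulloch-Pitts unit: fire (1) only if no inhibitory input is
    # active and the excitatory inputs sum to at least the threshold.
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Logic gates fall out of the choice of threshold:
AND = lambda a, b: mp_neuron([a, b], [], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [], threshold=1)
NOT = lambda a:    mp_neuron([1], [a], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and NOT(0) == 1 and NOT(1) == 0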

The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and that have to be properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it’s fed by a lot of different currents.

Evolutionary biologist David Haig has some lovely papers on intrapersonal conflicts where he’s talking about how even at the level of the genetics, even at the level of the conflict between the genes you get from your mother and the genes you get from your father, the so-called madumnal and padumnal genes, those are in opponent relations and if they get out of whack, serious imbalances can happen that show up as particular psychological anomalies.

We’re beginning to come to grips with the idea that your brain is not this well-organized hierarchical control system where everything is in order, a very dramatic vision of bureaucracy. In fact, it’s much more like anarchy with some elements of democracy. Sometimes you can achieve stability and mutual aid and a sort of calm united front, and then everything is hunky-dory, but then it’s always possible for things to get out of whack and for one alliance or another to gain control, and then you get obsessions and delusions and so forth.

You begin to think about the normal well-tempered mind, in effect, the well-organized mind, as an achievement, not as the base state, something that is only achieved when all is going well, but still, in the general realm of humanity, most of us are pretty well put together most of the time. This gives a very different vision of what the architecture is like, and I’m just trying to get my head around how to think about that. (…)

The vision of the brain as a computer, which I still champion, is changing so fast. The brain’s a computer, but it’s so different from any computer that you’re used to. It’s not like your desktop or your laptop at all, and it’s not like your iPhone except in some ways. It’s a much more interesting phenomenon. What Turing gave us for the first time (and without Turing you just couldn’t do any of this) is a way of thinking in a disciplined way about phenomena that have, as I like to say, trillions of moving parts. Until late 20th century, nobody knew how to take seriously a machine with a trillion moving parts. It’s just mind-boggling.

You couldn’t do it, but computer science gives us the ideas, the concepts of levels, virtual machines implemented in virtual machines implemented in virtual machines and so forth. We have these nice ideas of recursive reorganization of which your iPhone is just one example and a very structured and very rigid one at that.

We’re getting away from the rigidity of that model, which was worth trying for all it was worth. You go for the low-hanging fruit first. First, you try to make minds as simple as possible. You make them as much like digital computers, as much like von Neumann machines, as possible. It doesn’t work. Now, we know why it doesn’t work pretty well. So you’re going to have a parallel architecture because, after all, the brain is obviously massively parallel.

It’s going to be a connectionist network. Although we know many of the talents of connectionist networks, how do you knit them together into one big fabric that can do all the things minds do? Who’s in charge? What kind of control system? Control is the real key, and you begin to realize that control in brains is very different from control in computers. Control in your commercial computer is very much a carefully designed top-down thing.

You really don’t have to worry about one part of your laptop going rogue and trying out something on its own that the rest of the system doesn’t want to do. No, they’re all slaves. If they’re agents, they’re slaves. They are prisoners. They have very clear job descriptions. They get fed every day. They don’t have to worry about where the energy’s coming from, and they’re not ambitious. They just do what they’re asked to do and do it brilliantly with only the slightest tint of comprehension. You get all the power of computers out of these mindless little robotic slave prisoners, but that’s not the way your brain is organized.

Each neuron is imprisoned in your brain. I now think of these as cells within cells, as cells within prison cells. Realize that every neuron in your brain, every human cell in your body (leaving aside all the symbionts), is a direct descendent of eukaryotic cells that lived and fended for themselves for about a billion years as free-swimming, free-living little agents. They fended for themselves, and they survived.

They had to develop an awful lot of know-how, a lot of talent, a lot of self-protective talent to do that. When they joined forces into multi-cellular creatures, they gave up a lot of that. They became, in effect, domesticated. They became part of larger, more monolithic organizations. My hunch is that that’s true in general. We don’t have to worry about our muscle cells rebelling against us, or anything like that. When they do, we call it cancer, but in the brain I think that (and this is my wild idea) maybe only in one species, us, and maybe only in the obviously more volatile parts of the brain, the cortical areas, some little switch has been thrown in the genetics that, in effect, makes our neurons a little bit feral, a little bit like what happens when you let sheep or pigs go feral, and they recover their wild talents very fast.

Maybe a lot of the neurons in our brains are not just capable but, if you like, motivated to be more adventurous, more exploratory or risky in the way they comport themselves, in the way they live their lives. They’re struggling amongst themselves with each other for influence, just for staying alive, and there’s competition going on between individual neurons. As soon as that happens, you have room for cooperation to create alliances, and I suspect that a more free-wheeling, anarchic organization is the secret of our greater capacities of creativity, imagination, thinking outside the box and all that, and the price we pay for it is our susceptibility to obsessions, mental illnesses, delusions and smaller problems.

We got risky brains that are much riskier than the brains of other mammals even, even more risky than the brains of chimpanzees, and that this could be partly a matter of a few simple mutations in control genes that release some of the innate competitive talent that is still there in the genomes of the individual neurons. But I don’t think that genetics is the level to explain this. You need culture to explain it.

'Culture creates a whole new biosphere'

This, I speculate, is a response to our invention of culture; culture creates a whole new biosphere, in effect, a whole new cultural sphere of activity where there’s opportunities that don’t exist for any other brain tissues in any other creatures, and that this exploration of this space of cultural possibility is what we need to do to explain how the mind works.

Everything I just said is very speculative. I’d be thrilled if 20 percent of it was right. It’s an idea, a way of thinking about brains and minds and culture that is, to me, full of promise, but it may not pan out. I don’t worry about that, actually. I’m content to explore this, and if it turns out that I’m just wrong, I’ll say, “Oh, okay. I was wrong. It was fun thinking about it,” but I think I might be right.

I’m not myself equipped to work on a lot of the science; other people could work on it, and they already are in a way. The idea of selfish neurons has already been articulated by Sebastian Seung of MIT in a brilliant keynote lecture he gave at Society for Neuroscience in San Diego a few years ago. I thought, oh, yeah, selfish neurons, selfish synapses. Cool. Let’s push that and see where it leads. But there are many ways of exploring this. One of the still unexplained, so far as I can tell, and amazing features of the brain is its tremendous plasticity.

Mike Merzenich sutured a monkey’s fingers together so that it didn’t need as much cortex to represent two separate individual digits, and pretty soon the cortical regions that were representing those two digits shrank, making that part of the cortex available to use for other things. When the sutures were removed, the cortical regions soon resumed pretty much their earlier dimensions. If you blindfold yourself for eight weeks, as Alvaro Pascual-Leone does in his experiments, you find that your visual cortex starts getting adapted for Braille, for haptic perception, for touch.

The way the brain spontaneously reorganizes itself in response to trauma of this sort, or just novel experience, is itself one of the most amazing features of the brain, and if you don’t have an architecture that can explain how that could happen and why that is, your model has a major defect. I think you really have to think in terms of individual neurons as micro-agents, and ask what’s in it for them?

Why should these neurons be so eager to pitch in and do this other work just because they don’t have a job? Well, they’re out of work. They’re unemployed, and if you’re unemployed, you’re not getting your neuromodulators. If you’re not getting your neuromodulators, your neuromodulator receptors are going to start disappearing, and pretty soon you’re going to be really out of work, and then you’re going to die.

In this regard, I think of John Holland’s work on the emergence of order. His example is New York City. You can always find a place where you can get gefilte fish, or sushi, or saddles or just about anything under the sun you want, and you don’t have to worry about a state bureaucracy that is making sure that supplies get through. No. The market takes care of it. The individual web of entrepreneurship and selfish agency provides a host of goods and services, and is an extremely sensitive instrument that responds to needs very quickly.

Until the lights go out. Well, we’re all at the mercy of the power man. I am quite concerned that we’re becoming hyper-fragile as a civilization, and we’re becoming so dependent on technologies that are not as reliable as they should be, that have so many conditions that have to be met for them to work, that we may specialize ourselves into some very serious jams. But in the meantime, thinking about the self-organizational powers of the brain as very much like the self-organizational powers of a city is not a bad idea. It just reeks of over-enthusiastic metaphor, though, and it’s worth reminding ourselves that this idea has been around since Plato.

Plato analogizes the mind of a human being to the state. You’ve got the rulers and the guardians and the workers. This idea that a person is made of lots of little people is comically simpleminded in some ways, but that doesn’t mean it isn’t, in a sense, true. We shouldn’t shrink from it just because it reminds us of simpleminded versions that have been long discredited. Maybe some not so simpleminded version is the truth.

There are a lot of cultural fleas

My next major project will be trying to take another hard look at cultural evolution and look at the different views of it and see if I can achieve a sort of bird’s eye view and establish what role, if any, there is for memes or something like memes and what the other forces are that are operating. We are going to have to have a proper scientific perspective on cultural change. The old-fashioned, historical narratives are wonderful, and they’re full of gripping detail, and they’re even sometimes right, but they only cover a small proportion of the phenomena. They only cover the tip of the iceberg.

Basically, the model that we have and have used for several thousand years is the model that culture consists of treasures, cultural treasures. Just like money, or like tools and houses, you bequeath them to your children, and you amass them, and you protect them, and because they’re valuable, you maintain them and repair them, and then you hand them on to the next generation and some societies are rich, and some societies are poor, but it’s all goods. I think that vision is true of only the tip of the iceberg.

Most of the regularities in culture are not treasures. It’s not all opera and science and fortifications and buildings and ships. It includes all kinds of bad habits and ugly patterns and stupid things that don’t really matter but that somehow have got a grip on a society and that are part of the ecology of the human species in the same way that mud, dirt and grime and fleas are part of the world that we live in. They’re not our treasures. We may give our fleas to our children, but we’re not trying to. It’s not a blessing. It’s a curse, and I think there are a lot of cultural fleas. There are lots of things that we pass on without even noticing that we’re doing it and, of course, language is a prime case of this, very little deliberate intentional language instruction goes on or has to go on.

Kids that are raised with parents pointing out individual objects and saying, “See, it’s a ball. It’s red. Look, Johnny, it’s a red ball, and this is a cow, and look at the horsy” learn to speak, but so do kids who don’t have that patient instruction. You don’t have to do that. Your kids are going to learn ball and red and horsy and cow just fine without that, even if they’re quite severely neglected. That’s not a nice observation to make, but it’s true. It’s almost impossible not to learn language if you don’t have some sort of serious pathology in your brain.

Compare that with chimpanzees. There are hundreds of chimpanzees who have spent their whole lives in human captivity. They’ve been institutionalized. They’ve been like prisoners, and in the course of the day they hear probably about as many words as a child does. They never show any interest. They never apparently get curious about what those sounds are for. They can hear all the speech, but it’s like the rustling of the leaves. It just doesn’t register on them as worth attention.

But kids are tuned for that, and it might be a very subtle tuning. I can imagine a few small genetic switches, which, if they were just in a slightly different position, would make chimpanzees just as pantingly eager to listen to language as human babies are, but they’re not, and what a difference it makes in their world! They never get to share discoveries the way we do and to share our learning. That, I think, is the single feature about human beings that distinguishes us most clearly from all others: we don’t have to reinvent the wheel. Our kids get the benefit of not just what grandpa and grandma and great grandpa and great grandma knew. They get the benefit of basically what everybody in the world knew in the years when they go to school. They don’t have to invent calculus or long division or maps or the wheel or fire. They get all that for free. It just comes as part of the environment. They get incredible treasures, cognitive treasures, just by growing up. (…)

A lot of naïve thinking by scientists about free will

"Moving Naturalism Forward" was a nice workshop that Sean Carroll put together out in Stockbridge a couple of weeks ago, and it was really interesting. I learned a lot. I learned more about how hard it is to do some of these things and that’s always useful knowledge, especially for a philosopher.

If we take seriously, as I think we should, the role that Socrates proposed for us as midwives of thinking, then we want to know what the blockades are, what the imagination blockades are, what people have a hard time thinking about, and among the things that struck me about the Stockbridge conference were the signs of people really having a struggle to take seriously some ideas which I think they should take seriously. (…)

I realized I really have my work cut out for me in a way that I had hoped not to discover. There’s still a lot of naïve thinking by scientists about free will. I’ve been talking about it quite a lot, and I do my best to undo some bad thinking by various scientists. I’ve had some modest success, but there’s a lot more that has to be done on that front. I think it’s very attractive to scientists to think that here’s this several-millennia-old philosophical idea, free will, and they can just hit it out of the ballpark, which I’m sure would be nice if it was true.

It’s just not true. I think they’re well intentioned. They’re trying to clarify, but they’re really missing a lot of important points. I want a naturalistic theory of human beings and free will and moral responsibility as much as anybody there, but I think you’ve got to think through the issues a lot better than they’ve done, and this, happily, shows that there’s some real work for philosophers.

Philosophers have done some real work that the scientists jolly well should know. Here’s an area where it was one of the few times in my career when I wanted to say to a bunch of scientists, “Look. You have some reading to do in philosophy before you hold forth on this. There really is some good reading to do on these topics, and you need to educate yourselves.”

A combination of arrogance and cravenness

The figures about American resistance to evolution are still depressing, and you finally have to realize that there’s something structural. It’s not that people are stupid, and I think it’s clear that people, everybody, me, you, we all have our authorities, our go-to people whose word we trust. If you have a question about the economic situation in Greece, for instance, you need to check it out with somebody whose opinion on that we think is worth taking seriously. We don’t try to work it out for ourselves. We find some expert that we trust, and right around the horn, whatever the issues are, we have our experts, and so a lot of people have as their experts on matters of science, they have their pastors. This is their local expert.

I don’t blame them. I wish they were more careful about vetting their experts and making sure that they found good experts. They wouldn’t choose an investment advisor, I think, as thoughtlessly as they go along with their pastor. I blame the pastors, but where do they get their ideas? Well, they get them from the hierarchies of their churches. Where do they get their ideas? Up at the top, I figure there’s some people that really should be ashamed of themselves. They know better.

They’re lying, and when I get a chance, I try to ask them that. I say, “Doesn’t it bother you that your grandchildren are going to want to know why you thought you had to lie to everybody about evolution?” I mean, really. They’re lies. They’ve got to know that these are lies. They’re not that stupid, and I just would love them to worry about what their grandchildren and great grandchildren would say about how their ancestors were so craven and so arrogant. It’s a combination of arrogance and cravenness.

We now have to start working on that structure of experts and thinking, why does that persist? How can it be that so many influential, powerful, wealthy, in-the-public people can be so confidently wrong about evolutionary biology? How did that happen? Why does it happen? Why does it persist? It really is a bit of a puzzle if you think about how they’d be embarrassed not to know that the world is round. I think that would be deeply embarrassing to be that benighted, and they’d realize it. They’d be embarrassed not to know that HIV is the vector of AIDS. They’d be embarrassed to not understand the way the tides are produced by the gravitational forces of the moon and the sun. They may not know the details, but they know that the details are out there. They could learn them in 20 minutes if they wanted to. How did they get themselves in the position where they could so blithely trust people who they’d never buy stocks and bonds from? They’d never trust a child’s operation to a doctor that was as ignorant and as ideological as these people. It is really strange. I haven’t got to the bottom of that. (…)

This pernicious sort of lazy relativism

[T]here’s a sort of enforced hypocrisy where the pastors speak from the pulpit quite literally, and if you weren’t listening very carefully, you’d think: oh my gosh, this person really believes all this stuff. But they’re putting in just enough hints for the sophisticates in the congregation so that the sophisticates are supposed to understand: Oh, no. This is all just symbolic. This is all just metaphorical. And that’s the way they want it, but of course, they could never admit it. You couldn’t put a little neon sign up over the pulpit that says, “Just metaphor, folks, just metaphor.” It would destroy the whole thing.

You can’t admit that it’s just metaphor even when you insist when anybody asks that it’s just metaphor, and so this professional doubletalk persists, and if you study it for a while the way Linda [pdf] and I have been doing, you come to realize that’s what it is, and that means they’ve lost track of what it means to tell the truth. Oh, there are so many different kinds of truth. Here’s where postmodernism comes back to haunt us. What a pernicious bit of intellectual vandalism that movement was! It gives license to this pernicious sort of lazy relativism.

One of the most chilling passages in that great book by William James, The Varieties of Religious Experience, is where he talks about soldiers in the military: "Far better is it for an army to be too savage, too cruel, too barbarous, than to possess too much sentimentality and human reasonableness.” This is, to me, a very sobering reflection. Let’s talk about when we went into Iraq. There was Rumsfeld saying, “Oh, we don’t need a big force. We don’t need a big force. We can do this on the cheap,” and there were other people, retrospectively we can say they were wiser, who said, “Look, if you’re going to do this at all, you want to go in there with such overpowering, such overwhelming numbers and force that you can really intimidate the population, and you can really maintain the peace and just get the population to sort of roll over, and that way actually less people get killed, less people get hurt. You want to come in with an overwhelming show of force.”

The principle is actually one that’s pretty well understood. If you don’t want to have a riot, have four times more police there than you think you need. That’s the way not to have a riot and nobody gets hurt because people are not foolish enough to face those kinds of odds. But they don’t think about that with regard to religion, and it’s very sobering. I put it this way.

Suppose that we face some horrific, terrible enemy, another Hitler or something really, really bad, and here’s two different armies that we could use to defend ourselves. I’ll call them the Gold Army and the Silver Army; same numbers, same training, same weaponry. They’re all armored and armed as well as we can do. The difference is that the Gold Army has been convinced that God is on their side and this is the cause of righteousness, and it’s as simple as that. The Silver Army is entirely composed of economists. They’re all making side insurance bets and calculating the odds of everything.

Which army do you want on the front lines? It’s very hard to say you want the economists, but think of what that means. What you’re saying is we’ll just have to hoodwink all these young people into some false beliefs for their own protection and for ours. It’s extremely hypocritical. It is a message that I recoil from, the idea that we should indoctrinate our soldiers. In the same way that we inoculate them against diseases, we should inoculate them against the economists’—or philosophers’—sort of thinking, since it might lead them to think: am I so sure this cause is just? Am I really prepared to risk my life to protect it? Do I have enough faith in my commanders that they’re doing the right thing? What if I’m clever enough and thoughtful enough to figure out a better battle plan, and I realize that this is futile? Am I still going to throw myself into the trenches? It’s a dilemma that I don’t know what to do about, although I think we should confront it at least.”

Daniel C. Dennett is University Professor, Professor of Philosophy, and Co-Director of the Center for Cognitive Studies at Tufts University, The normal well-tempered mind, Edge, Jan 8, 2013.

'The Intentional Stance'

"Dennett favours the theory (first suggested by Richard Dawkins) that our social learning has given us a second information highway (in addition to the genetic highway) where the transmission of variant cultural information (memes) takes place via differential replication. Software viruses, for example, can be understood as memes, and as memes evolve in complexity, so does human cognition: “The mind is the effect, not the cause.” (…)

Daniel Dennett: "Natural selection is not gene centrist and nor is biology all about genes, our comprehending minds are a result of our fast evolving culture. Words are memes that can be spoken and words are the best example of memes. Words have a genealogy and it’s easier to trace the evolution of a single word than the evolution of a language." (…)

I don’t like theory of mind. I coined the phrase The Intentional Stance. [Dennett’s Intentional Stance encompasses attributing feelings, memories and beliefs to others as well as mindreading and predicting what someone will do next.] Do you need a theory to ride a bike? (…)

Riding a bike is a craft – you don’t need a theory. Autistic people might need a theory with which to understand other minds, but the rest of us don’t. If a human is raised without social interaction and without language they would be hugely disabled and probably lacking in empathy.”

Daniel C. Dennett, Daniel Dennett: ‘I don’t like theory of mind’ – interview, The Guardian, 22 March 2013.

See also:

Steven Pinker on the mind as a system of ‘organs of computation’, Lapidarium notes
Quantum minds: Why we think like quarks - ‘To be human is to be quantum’, Lapidarium notes
Human Connectome Project: understanding how different parts of the brain communicate to each other
How Free Is Your Will?, Lapidarium notes
Susan Blackmore on memes and “temes”
Mind & Brain tag on Lapidarium notes

May 22nd, Tue

The reinvention of the night. A history of the night in early modern Europe

                        
(Image: Bridgeman Art Library)

"During the previous generation or so, elites across Europe had moved their clocks forward by several hours. No longer a time reserved for sleep, the night time was now the right time for all manner of recreational and representational purposes. This is what Craig Koslofsky calls “nocturnalisation”, defined as “the ongoing expansion of the legitimate social and symbolic uses of the night”, a development to which he awards the status of “a revolution in early modern Europe”. (…)

The shift from street to court and from day to night represented “the sharpest break in the history of celebrations in the West”. (…) By the time of Louis XIV, all the major events – ballets de cour, operas, balls, masquerades, firework displays – took place at night. (…) The kings, courtiers – and those who sought to emulate them – adjusted their daily timetable accordingly. Unlike Steele’s friend, they rose and went to bed later and later. Henry III of France, who was assassinated in 1589, usually had his last meal at 6 pm and was tucked up in bed by 8. Louis XIV’s day began with a lever at 9 and ended (officially) at around midnight. (…)

As with so much else at Versailles, this was a development that served to distance the topmost elite from the rest of the population. Koslofsky speculates that it was driven by the need to find new sources of authority in a confessionally fragmented age.

More directly – and convincingly – authoritarian was the campaign to “colonize” the night by reclaiming it from the previously dominant marginal groups. The most effective instrument was street-lighting, introduced to Paris in 1667. (…)
In 1673, Madame de Sévigné [wrote]: “We found it pleasant to be able to go, after midnight, to the far end of the Faubourg Saint-Germain”. (…)

Street lighting had made life more difficult for criminals, but also for those who believed in ghosts, devils and things that go bump. Addressing an imaginary atheist in a sermon in 1629, John Donne invited him to look ahead just a few hours until midnight: “wake then; and then dark and alone, Hear God and ask thee then, remember that I asked thee now, Is there a God? and if thou darest, say No”. A hundred years later, there were plenty of Europeans prepared to say “No”. In 1729, the Paris police expressed grave anxiety about the spread of irreligion through late-night café discussions of the existence or non-existence of God.”

— Tim Blanning, review of Craig Koslofsky’s "Evening’s Empire. A history of the night in early modern Europe", The reinvention of the night, TLS, Sep 21, 2011.

See also:

☞ Craig Koslofsky, Evening’s Empire — extensive excerpts at Google Books
☞ Benjamin Schwarz, Night Owls, The Atlantic, Apr, 2012

May 5th, Sat

The Paradox of Contemporary Cultural History. We are clinging as never before to the familiar in matters of style and culture

"For most of the last century, America’s cultural landscape—its fashion, art, music, design, entertainment—changed dramatically every 20 years or so. But these days, even as technological and scientific leaps have continued to revolutionize life, popular style has been stuck on repeat, consuming the past instead of creating the new. (…)

The past is a foreign country. Only 20 years ago the World Wide Web was an obscure academic thingamajig. All personal computers were fancy stand-alone typewriters and calculators that showed only text (but no newspapers or magazines), played no video or music, offered no products to buy. E-mail (a new coinage) and cell phones were still novelties. Personal music players required cassettes or CDs. Nobody had seen a computer-animated feature film or computer-generated scenes with live actors, and DVDs didn’t exist. The human genome hadn’t been decoded, genetically modified food didn’t exist, and functional M.R.I. was a brand-new experimental research technique. Al-Qaeda and Osama bin Laden had never been mentioned in The New York Times. China’s economy was less than one-eighth of its current size. CNN was the only general-interest cable news channel. Moderate Republicans occupied the White House and ran the Senate’s G.O.P. caucus. Since 1992, as the technological miracles and wonders have propagated and the political economy has transformed, the world has become radically and profoundly new. (…)

During these same 20 years, the appearance of the world (computers, TVs, telephones, and music players aside) has changed hardly at all, less than it did during any 20-year period for at least a century. The past is a foreign country, but the recent past—the 00s, the 90s, even a lot of the 80s—looks almost identical to the present. This is the First Great Paradox of Contemporary Cultural History. (…)

Madonna to Gaga

20 years after Hemingway published his war novel For Whom the Bell Tolls, a new war novel, Catch-22, made it seem preposterously antique.

Now try to spot the big, obvious, defining differences between 2012 and 1992. Movies and literature and music have never changed less over a 20-year period. Lady Gaga has replaced Madonna, Adele has replaced Mariah Carey—both distinctions without a real difference—and Jay-Z and Wilco are still Jay-Z and Wilco. Except for certain details (no Google searches, no e-mail, no cell phones), ambitious fiction from 20 years ago (Doug Coupland’s Generation X, Neal Stephenson’s Snow Crash, Martin Amis’s Time’s Arrow) is in no way dated, and the sensibility and style of Joan Didion’s books from even 20 years before that seem plausibly circa-2012. (…)

Nostalgic Gaze

Ironically, new technology has reinforced the nostalgic cultural gaze: now that we have instant universal access to every old image and recorded sound, the future has arrived and it’s all about dreaming of the past. Our culture’s primary M.O. now consists of promiscuously and sometimes compulsively reviving and rejiggering old forms. It’s the rare “new” cultural artifact that doesn’t seem a lot like a cover version of something we’ve seen or heard before. Which means the very idea of datedness has lost the power it possessed during most of our lifetimes. (…)

Loss of Appetite

Look through a current fashion or architecture magazine or listen to 10 random new pop songs; if you didn’t already know they were all things from the 2010s, I guarantee you couldn’t tell me with certainty they weren’t from the 2000s or 1990s or 1980s or even earlier. (The first time I heard a Josh Ritter song a few years ago, I actually thought it was Bob Dylan.) In our Been There Done That Mashup Age, nothing is obsolete, and nothing is really new; it’s all good. I feel as if the whole culture is stoned, listening to an LP that’s been skipping for decades, playing the same groove over and over. Nobody has the wit or gumption to stand up and lift the stylus.

Why is this happening? In some large measure, I think, it’s an unconscious collective reaction to all the profound nonstop newness we’re experiencing on the tech and geopolitical and economic fronts. People have a limited capacity to embrace flux and strangeness and dissatisfaction, and right now we’re maxed out. So as the Web and artificially intelligent smartphones and the rise of China and 9/11 and the winners-take-all American economy and the Great Recession disrupt and transform our lives and hopes and dreams, we are clinging as never before to the familiar in matters of style and culture.

If this stylistic freeze is just a respite, a backward-looking counter-reaction to upheaval, then once we finally get accustomed to all the radical newness, things should return to normal—and what we’re wearing and driving and designing and producing right now will look totally démodé come 2032. Or not. Because rather than a temporary cultural glitch, these stagnant last couple of decades may be a secular rather than cyclical trend, the beginning of American civilization’s new chronic condition, a permanent loss of appetite for innovation and the shockingly new. After all, such a sensibility shift has happened again and again over the last several thousand years, that moment when all great cultures—Egyptian, Roman, Mayan, Islamic, French, Ottoman, British—slide irrevocably into an enervated late middle age. (…)

Plus ça change, plus c’est la même chose has always meant that the constant novelty and flux of modern life is all superficial show, that the underlying essences endure unchanged. But now, suddenly, that saying has acquired an alternative and nearly opposite definition: the more certain things change for real (technology, the global political economy), the more other things (style, culture) stay the same.

But wait! It gets still stranger, because even as we’ve fallen into this period of stylistic paralysis and can’t get up, more people than ever before are devoting more of their time and energy to considering and managing matters of personal style.

And why did this happen? In 1984, a few years after “yuppie” was coined, I wrote an article in Time positing that “yuppies are, in a sense, heterosexual gays. Among middle-class people, after all, gays formed the original two-income households and were the original gentrifiers, the original body cultists and dapper health-club devotees, the trendy homemakers, the refined, childless world travelers.” Gays were the lifestyle avant-garde, and the rest of us followed. (…)

Amateur Stylists

People flock by the millions to Apple Stores (1 in 2001, 245 today) not just to buy high-quality devices but to bask and breathe and linger, pilgrims to a grand, hermetic, impeccable temple to style—an uncluttered, glassy, super-sleek style that feels “contemporary” in the sense that Apple stores are like back-on-earth sets for 2001: A Space Odyssey, the early 21st century as it was envisioned in the mid-20th. And many of those young and young-at-heart Apple cultists-cum-customers, having popped in for their regular glimpse and whiff of the high-production-value future, return to their make-believe-old-fashioned lives—brick and brownstone town houses, beer gardens, greenmarkets, local agriculture, flea markets, steampunk, lace-up boots, suspenders, beards, mustaches, artisanal everything, all the neo-19th-century signifiers of state-of-the-art Brooklyn-esque and Portlandish American hipsterism.

Moreover, tens of millions of Americans, the uncool as well as the supercool, have become amateur stylists—scrupulously attending, as never before, to the details and meanings of the design and décor of their homes, their clothes, their appliances, their meals, their hobbies, and more. The things we own are more than ever like props, the clothes we wear like costumes, the places where we live, dine, shop, and vacation like stage sets. And angry right-wingers even dress in 18th-century drag to perform their protests. Meanwhile, why are Republicans unexcited by Mitt Romney? Because he seems so artificial, because right now we all crave authenticity.

The Second Paradox

So, these two prime cultural phenomena, the quarter-century-long freezing of stylistic innovation and the pandemic obsession with style, have happened concurrently—which appears to be a contradiction, the Second Great Paradox of Contemporary Cultural History. Because you’d think that style and other cultural expressions would be most exciting and riveting when they are unmistakably innovating and evolving.

Part of the explanation, as I’ve said, is that, in this thrilling but disconcerting time of technological and other disruptions, people are comforted by a world that at least still looks the way it did in the past. But the other part of the explanation is economic: like any lucrative capitalist sector, our massively scaled-up new style industry naturally seeks stability and predictability. Rapid and radical shifts in taste make it more expensive to do business and can even threaten the existence of an enterprise. One reason automobile styling has changed so little these last two decades is because the industry has been struggling to survive, which made the perpetual big annual styling changes of the Golden Age a reducible business expense. Today, Starbucks doesn’t want to have to renovate its thousands of stores every few years. If blue jeans became unfashionable tomorrow, Old Navy would be in trouble. And so on. Capitalism may depend on perpetual creative destruction, but the last thing anybody wants is their business to be the one creatively destroyed. Now that multi-billion-dollar enterprises have become style businesses and style businesses have become multi-billion-dollar enterprises, a massive damper has been placed on the general impetus for innovation and change.

It’s the economy, stupid. The only thing that has changed fundamentally and dramatically about stylish objects (computerized gadgets aside) during the last 20 years is the same thing that’s changed fundamentally and dramatically about movies and books and music—how they’re produced and distributed, not how they look and feel and sound, not what they are. This democratization of culture and style has two very different but highly complementary results. On the one hand, in a country where an adorably huge majority have always considered themselves “middle class,” practically everyone who can afford it now shops stylishly—at Gap, Target, Ikea, Urban Outfitters, Anthropologie, Barnes & Noble, and Starbucks. Americans: all the same, all kind of cool! And yet, on the other hand, for the first time, anyone anywhere with any arcane cultural taste can now indulge it easily and fully online, clicking themselves deep into whatever curious little niche (punk bossa nova, Nigerian noir cinema, pre-war Hummel figurines) they wish. Americans: quirky, independent individualists!

We seem to have trapped ourselves in a vicious cycle—economic progress and innovation stagnated, except in information technology; which leads us to embrace the past and turn the present into a pleasantly eclectic for-profit museum; which deprives the cultures of innovation of the fuel they need to conjure genuinely new ideas and forms; which deters radical change, reinforcing the economic (and political) stagnation. I’ve been a big believer in historical pendulum swings—American sociopolitical cycles that tend to last, according to historians, about 30 years. So maybe we are coming to the end of this cultural era of the Same Old Same Old. As the baby-boomers who brought about this ice age finally shuffle off, maybe America and the rich world are on the verge of a cascade of the wildly new and insanely great. Or maybe, I worry some days, this is the way that Western civilization declines, not with a bang but with a long, nostalgic whimper.”

— Kurt Andersen, American novelist and journalist, You Say You Want a Devolution?, Vanity Fair, Jan 2012 (Illustration by James Taylor)

See also:

Neal Gabler on The Elusive Big Idea - ‘We are living in a post ideas world where bold ideas are almost passé’
Infinite Stupidity. Social evolution may have sculpted us not to be innovators and creators as much as to be copiers

Mar 21st, Wed

Richard Doyle on Creativity, evolution of mind and the rhetorical membrane between humans and an informational universe


Q [Jason Silva]: The Jesuit priest and scientist Pierre Teilhard de Chardin spoke of the Noosphere very early on. A profile in WIRED Magazine said:

"Teilhard imagined a stage of evolution characterized by a complex membrane of information enveloping the globe and fueled by human consciousness. (…) Teilhard saw the Net coming more than half a century before it arrived. He believed this vast thinking membrane would ultimately coalesce into “the living unity of a single tissue” containing our collective thoughts and experiences." Teilhard wrote, "The living world is constituted by consciousness clothed in flesh and bone."

He argued that the primary vehicle for increasing complexity consciousness among living organisms was the nervous system. The informational wiring of a being, he argued - whether of neurons or electronics - gives birth to consciousness. As the diversification of nervous connections increases, evolution is led toward greater consciousness… thoughts?

Richard Doyle: Yes, he also called this process of the evolution of consciousness the “Omega Point”. The noosphere imagined here relied on a change in our relationship to consciousness as much as on any technological change and was part of evolution’s epic quest for self awareness. Here Teilhard is in accord with Julian Huxley (Aldous’ brother, a biologist) and Carl Sagan when they observed that “we are a way for the cosmos to know itself.” Sri Aurobindo’s The Life Divine traces out this evolution of consciousness through the Greek and Sanskrit traditions as well as Darwinism and (relatively) modern philosophy. All are describing evolution’s slow and dynamic quest towards understanding itself.


I honestly think we are still grappling with the fact that our minds are distributed across a network by technology, and have been in a feedback loop between our brains and technologies at least since the invention of writing. As each new “mutation” occurs in the history of the evolution of information technology, the very character of our minds shifts. McLuhan's Understanding Media is instructive here as well (he parsed it as the Global Village), and of course McLuhan was the bard who advised Leary on "Turn on, Tune in, Drop Out" and was very influential on Terence McKenna.

One difference between now and Plato’s time is the infoquake through which we are all living. This radical increase in quantity no doubt has qualitative effects - it changes what it feels like to think and remember. Plato was working through the effect of one new information technology – writing – whereas today we “upgrade” every six months or so… Teilhard observes the correlative of this evolutionary increase in information - and the sudden thresholds it crosses - in the evolution of complexity and nervous systems. The noosphere is a way of helping us deal with this “phase transition” of consciousness that may well be akin to the phase transition between liquid water and water vapor - a change in degree that effects a change in kind.

Darwin’s Pharmacy suggests that ecodelics were precisely such a mutation in information technology that increased sexually selective fitness through the capacity to process greater amounts of information, and that they are “extraordinarily sensitive to initial rhetorical traditions.” What this means is that because ecodelic experiences are so sensitive to the context in which we experience them, they can help make us aware of the effect of language and music etc on our consciousness, and thereby offer an awareness of our ability to effect our own consciousness through our linguistic and creative choices. This can be helpful when trying to browse the infoquake. Many other practices do so as well - meditation is the most well established practice for noticing the effects we can have on our own consciousness, and Sufi dervishes demonstrate this same outcome for dancing. I do the same on my bicycle, riding up a hill and chanting.

One problem I have with much of the discourse of “memes” is that it is often highly reductionistic - it often forgets that ideas have an ecology too, they must be “cultured.” Here I would argue, drawing on Lawrence Lessig's work on the commons, that the “brain” is a necessary but insufficient “spawning” ground for ideas that become actual. The commons is the spawning ground of ideas; brains are pretty obviously social as well as individual. Harvard biologist Richard Lewontin notes that there is no such thing as “self replicating” molecules, since they always require a context to be replicated. This problem goes back at least to computer scientist John von Neumann's 1947 paper on self-reproducing automata.

I think Terence McKenna described the condition as "language is loose on planet three", and its modern version probably occurs first in the work of writer William S. Burroughs, whose notion of the "word virus" predates the "meme" by at least a decade. Then again this notion of "ideas are real" goes back to cosmologies that begin with the priority of consciousness over matter, as in "In the beginning was the word, and the word was god, and the word was with god." So even Burroughs could get a late pass for his idea. (…)

Q: Richard Dawkins’s definition of a meme is quite powerful:

“I think that a new kind of replicator has recently emerged on this very planet, […] already achieving evolutionary change at a rate that leaves the old gene panting far behind.” [The replicator is] human culture; the vector of transmission is language, and the spawning ground is the brain.

This notion that “the vector of transmission is language” is very compelling. It seems to suggest that just as in biological evolution the vector of transmission has been the DNA molecule, in the noosphere – the next stage up – it is LANGUAGE that has become a major player in the transfer of information towards achieving evolutionary change. It kind of affects how you think about the phrase “words have power”. This insight reminds me of a quote that describes, in words, the subjective ecstasy a mind feels upon having a transcendent realization that feels as if it advances evolution:

"A universe of possibilities,

Grey infused by color,

The invisible revealed,

The mundane blown away

by awe” 

Is this what you mean by ‘the ecstasy of language’?

Richard Doyle: Above, I noted that ecodelics can make us aware of the feedback loops between our creative choices – should I eat mushrooms in a box? should I eat them with a fox? – and our consciousness. In other words, they can make us aware of the tremendous freedom we have in creating our own experience. Leary called this “internal freedom.” Becoming aware of the practically infinite choices we have to compose our lives, including the words we use to map them, can be overwhelming – we feel in these instances the “vertigo of freedom.” What to do? In ecodelic experience we can perceive the power of our maps. That moment in which we can learn to abide the tremendous creative choice we have, and take responsibility for it, is what I mean by the “ecstasy of language.”

I would point out, though, that for those words you quote to do their work, they have to be read. The language does not do it "on its own" but as a result of the highly focused attention of readers. This may seem trivial but it is often left out, with some serious consequences. And “reading” can mean “follow up with interpretation”. I cracked up when I googled those lines above and found them in a corporate blog about TED, for example. Who knew that neo-romantic poetry was the emerging interface of the global corporate noosphere? (…)

Q: Buckminster Fuller described humans as “pattern integrities”, Ray Kurzweil says we are “patterns of information”, and James Gleick’s new book, The Information, says that “information may be more primary than matter”… What do you make of this? And if we indeed are complex patterns, how can we hack the limitations of biology and entropy to preserve our pattern integrity indefinitely?

Richard Doyle: First: It is important to remember that the history of the concept and tools of “information” is full of blindspots – we seem to be constantly tempted to underestimate the complexity of any given system needed to make any bit of information meaningful or useful. Chaitin, Kolmogorov, Stephen Wolfram and John von Neumann each came independently to the conclusion that information is only meaningful when it is “run” – you can’t predict the outcome of even many trivial programs without running the program. So if we say that “information may be more primary than matter,” we have to remember that “information” does not mean “free from constraints.” Thermodynamics – including entropy – remains.
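
This “you have to run it” point has a standard illustration in Wolfram’s own work: the elementary cellular automaton Rule 30, whose center column has, as far as anyone knows, no shortcut formula. A minimal sketch (my example for illustration, not from the interview):

```python
# Rule 30: each cell's next state is left XOR (center OR right).
# Despite the trivial rule, its long-run behavior is, as far as
# anyone knows, unpredictable except by actually running it.

def rule30_step(cells):
    """One synchronous update of a ring of 0/1 cells."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

row = [0] * 31
row[15] = 1  # single live cell in the center
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Running it prints the familiar chaotic triangle; nothing in the rule’s one-line description tells you in advance what, say, step 1,000 will look like.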

Molecular and informatic reductionism – the view that you can best understand the nature of a biological system by cutting it up into the most significant bits, e.g. DNA – is a powerful model that enables us to do things with biological systems that we never could before. Artist Eduardo Kac collaborated with a French scientist to make a bioluminescent bunny. That’s new! But sometimes it is so powerful that we forget its limitations. The history of the human genome project illustrates this well. AND the human genome is incredibly interesting. It’s just not the immortality hack many thought it would be.

In this sense biology is not a limitation to be “transcended” (Kurzweil), but a medium of exploration whose constraints are interesting and sublime. On this scale of ecosystems, “death” is not a “limitation” but an attribute of a highly dynamic interactive system. Death is an attribute of life. Viewing biology as a “limitation” may not be the best way to become healthy and thriving beings.

Now, that said, looking at our characteristics as “patterns of information” can be immensely powerful, and I work with it at the level of consciousness as well as life. Thinking of ourselves as “dynamic patterns of multiply layered and interconnected self-transforming information” is just as accurate a description of human beings as “meaningless noisy monkeys who think they see god”, and is likely to have much better effects. A nice emphasis on this “pattern” rather than the bits that make it up can be found in Carl Sagan’s observation: “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.”

Q: Richard Dawkins declared in 1986 that “What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions. […] If you want to understand life,” Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.” How would you explain the relationship between information technology and the reality of the physical world?

Richard Doyle: Again, information is indeed physical. We can treat a sequence of information as an abstraction and take it out of its context – like a quotation, or a jellyfish gene spliced into a rabbit to enable it to glow. We can compress information, shrinking the resources it takes to store or process it. But “information, words, instructions” all require physical instantiation to even be “information, words, instructions.” Researcher Rolf Landauer showed back in the 1960s that even erasure is physical. So I actually think throbbing gels and oozes and slime mold and bacteria eating away at the garbage gyre are very important when we wish to “understand” life. I actually think Dawkins gets it wrong here – he is talking about “modeling” life, not “understanding” it. Erwin Schrödinger, the originator of the idea of the genetic code and therefore of the “informatic” tradition of biology that Dawkins speaks in here, knew this very well and insisted on the importance of first-person experience for understanding.

So while I find these metaphors useful, that is exactly what they are: metaphors. There is a very long history to the attempt to model words and action together. Again, John 1:1 is closer to Dawkins’s position here than he may be comfortable with: “In the beginning was the word, and the word was god, and the word was with god” is a way of working with this capacity of language to bring phenomena into being. It is really only because we habitually think of language as “mere words” that we continually forget that they are a manifestation of a physical system, and that they have very actual effects not limited to the physics of their utterance – the words “I love you” can have an effect much greater than the amount of energy necessary to utter them. Our experiences are highly tuneable by the language we use to describe them.

Q: Talk about the mycelial archetype. Author Paul Stamets compares the pattern of the mushroom mycelium with the overlapping information-sharing systems that comprise the Internet, with the networked neurons in the brain, and with a computer model of dark matter in the universe. All share this densely intertwingled filamental structure… What is the connection? What is the pattern that connects here?

Richard Doyle: First things first: Paul Stamets is a genius and we should listen to his world view carefully and learn from it. Along with Lynn Margulis and Dorion Sagan, whose work I borrow from extensively in Darwin’s Pharmacy (as well as many others), Stamets is asking us to contemplate and act on the massive interconnection between all forms of life. This is a shift in worldview that is comparable to the Copernican shift from a geocentric cosmos – it is a shift toward interconnection and consciousness of interconnection. And I like how you weave in Gregory Bateson's phrase “the pattern that connects” here, because Bateson (whose father, William Bateson, was one of the founders of modern genetics) continuously pointed toward the need to develop ways of perceiving the whole. The “mycelial archetype”, as you call it, is a reliable and rather exciting way to recall the whole: What we call “mushrooms” are really the fruiting bodies of an extensive network of cross connection.

That fuzz growing in an open can of tomato paste in your fridge – mycelium. So even opening our refrigerator – should we be lucky enough to have one, with food in it – can remind us that what we take to be reality is in actuality only appearance – a sliver, albeit a significant one for our world, of the whole. That fuzz can remind us that (1) appearance and reality are not the same thing at all and (2) beyond appearance there is a massive interconnection in unity. This can help remind us who and what we really are.

With the word “archetype”, you of course invoke the psychologist Carl Jung, who saw archetypes as templates for understanding, ways of organizing our story of the world. There are many archetypes – the Hero, the Mother, the Trickster, the Sage. They are very powerful because they help stitch together what can seem to be a chaotic world – that is both their strength and their weakness. It is a weakness because most of the time we are operating within an archetype and we don’t even know it, and we don’t know, therefore, that we can change our archetype.

Experimenting with a different archetype – imagining, for example, the world through the lens of a 2400-year-old organism that is mostly invisible to a very short-lived and recent species becoming aware of its creative responsibility in altering the planet – is incredibly powerful, and in Darwin’s Pharmacy I am trying to offer a way to experiment with the idea of the plant planet as well as the “mycelium” archetype. One powerful aspect of treating the mycelium as humanity’s archetype is that it is “distributed” – it does not operate via a center of control but through cross-connection “distributed” over a space.

Anything we can do to remember both our individuation and our interconnection is timely – we experience the world as individuals, and our task is to discover our nature within the larger-scale reality of our dense ecological interconnection. In the book I point to the Upanishads’ “Tat Tvam Asi” as a way of comprehending how we can be both totally individual and an aspect of the whole.

Q: You’ve talked about the ecstasy of language and the role of rhetoric in shaping reality… These notions echo some of Terence McKenna’s ideas about language… He calls language an “ecstatic activity of signification”… and says that for the “inspired one, it is almost as if existence is uttering itself through him”… Can you expand on this? How does language create reality?

Richard Doyle: It’s incredibly fun and insightful to echo Terence McKenna. He’s really in this shamanic bard tradition that goes all the way back to Empedocles at least, and is distributed widely across the planet. He’s got a bit of Whitman in him with his affirmation of the erotic aspects of enlightenment. He was Emerson speaking to a Lyceum crowd remixed through rave culture. Leary and McKenna were resonating with the Irish bard archetype. And Terence was echoing Henry Munn, who was echoing Maria Sabina, whose chants and poetics can make her seem like Echo herself – a mythological storyteller and poet (literally “sound”) who so transfixes Hera (Zeus’s wife) that Zeus can consort with nymphs. Everywhere we look there are allegories of sexual selection’s role in the evolution of poetic & shamanic language!

And Terence embodies the spirit of eloquence, helping translate our new technological realities (e.g. virtual reality, a fractal view of nature, radical ecology) and the states of mind that were likely to accompany them. Merlin Donald writes of the effects of “external symbolic storage” on human culture – as a onetime student of McLuhan’s, Donald was following up on Plato’s insight, mentioned above, that writing changes how we think and, therefore, who we are.

Human culture is going through a fantastic “reality crisis” wherein we discover the creative role we play in nature. Our role in global climate change – not to mention our role in dwindling biodiversity – is the “shadow” side of our increasing awareness that humans have a radical creative responsibility for their individual and collective lives. And our lives are inseparable from the ecosystems with which we are enmeshed. THAT is reality. To the extent that we can gather and focus our attention on retuning our relation towards ecosystems in crisis, language can indeed shape reality. We’ll get the future we imagine, not necessarily the one we deserve.

Q: Robert Anton Wilson spoke about “reality tunnels”…. These ‘constructs’ can limit our perspectives and perception of reality, they can trap us, belittle us, enslave us, make us miserable or set us free… How can we hack our reality tunnel?  Is it possible to use rhetoric and/or psychedelics to “reprogram” our reality tunnel? 

Richard Doyle: We do nothing but program and reprogram our reality tunnels. Seriously, the Japanese reactor crisis follows on the BP oil spill as a reminder that we are deeply interconnected on the level of infrastructure – technology is now planetary in scale, so what happens here affects somebody, sometimes everybody, there. These infrastructures – our food sheds, our energy grid, our global media – run on networks, protocols, global standards, agreements: language, software, images, databases and their mycelial networks.

The historian Michel Foucault called these “discourses”, but we need to connect these discourses to the nonhuman networks with which they are enmeshed, and globalization has been in part about connecting discourses to each other across the planet. Ebola ends up in Virginia, Starbucks in Hong Kong. This has been true for a long time, of course – Mutual Assured Destruction was planetary in scale and required a communication and control structure linking, for example, a Trident submarine under the Arctic ice sheet – remember that? – to a putatively civilian political structure Eisenhower rightly warned us about: the military-industrial complex. The moon missions illustrate this principle as well – we remember what was said as much as what else was done, and what was said, for a while, seemed to induce a sense of truly radical and planetary possibility.

So if we think of words as a description of reality rather than part of the infrastructure of reality, we miss out on the way different linguistic patterns act as catalysts for different realities. In my first two books, before I really knew about Wilson’s work or had worked through Korzybski with any intensity, I called these “rhetorical softwares.”

Now the first layer of our reality tunnel is our implicit sense of self – this is the only empirical reality any of us experiences – what we subjectively experience. RAW was a brilliant analyst of the ways experience is shaped by the language we use to describe it. One of my favorite examples from his work is his observation that in English, “reality” is a noun, so we start to treat it as a “thing”, when in fact reality, this cosmos, is also quite well mapped as an action – a dynamic unfolding for 13.7 billion years. That is a pretty big mismatch between language and reality, and can give us a sense that reality is inert, dead, lifeless, “concrete”, and thus not subject to change. By experimenting with what Wilson, following scientist John Lilly, called “metaprograms”, we can change the maps that shape the reality we inhabit. (…)

Q: The film Inception explored the notion that our inner world can be a vivid, experiential dimension, and that we can hack it, and change our reality… what do you make of this? 

Richard Doyle: The whole contemplative tradition insists on this dynamic nature of consciousness. “Inner” and “outer” are models for aspects of reality – words that map the world only imperfectly. Our “inner world” – subjective experience – is all we ever experience, so if we change it, we will obviously see a change in what we label “external” reality, which it is of course part of and not separable from. One of the maps we should experiment with, in my view, is this “inner” and “outer” one – this is why one of my aliases is “mobius.” A mobius strip helps make clear that “inside” and “outside” are… labels. As you run your finger along a mobius strip, the “inside” becomes the “outside” and the “outside” becomes the “inside.”

Q: Can we put inceptions out into the world?

Richard Doyle: We do nothing but! And, it is crucial to add, so too does the rest of our ecosystem. Bacteria engage in quorum sensing, begin to glow, and induce other bacteria to glow – this puts their inceptions into the world. Thanks to the work of scientists like Anthony Trewavas, we know that plants engage in signaling behavior between and across species and even kingdoms: orchids “throw” images of female wasps into the world, attracting male wasps, root cells map the best path through the soil. The whole blooming confusion of life is signaling, mapping and informing itself into the world. The etymology of “inception” is “to begin, take in hand” - our models and maps are like imagined handholds on a dynamic reality.

Q: What is the relationship between psychedelics and information technology? How are ipods, computers and the internet related to LSD? 

Richard Doyle: This book is part of a trilogy on the history of information in the life sciences. So, first: psychedelics and biology. It turns out that molecular biology and psychedelics were important contexts for each other. I first started noticing this when I found that many people who had taken LSD were talking about their experiences in the language of molecular biology – accessing their DNA and so forth. When I learned that psychedelic experience was very sensitive to “set and setting” – the mindset and context of use – I wanted to find out how this language of molecular biology was affecting people’s experiences of the compounds. In other words, how did the language affect something supposedly caused by chemistry?

Tracking the language through thousands of pages, I found that both the discourse of psychedelics and that of molecular biology were part of the “informatic vision” that was restructuring the life sciences as well as the world, and found common patterns of language in the work of Timothy Leary (the Harvard psychologist) and Francis Crick (who won the Nobel prize with James Watson and Maurice Wilkins for determining the structure of DNA in 1953), so in 2002 I published an article describing the common “language of information” spoken by Leary and Crick. I had no idea that Crick had apparently been using LSD when he was figuring out the structure of DNA. Yes, that blew my mind when it came out in 2004. I feel like I read that between the lines of Crick’s papers, which gave me confidence to write the rest of the book about the feedback between psychedelics and the world we inhabit.

The paper did home in on the role that LSD played in the invention of PCR (polymerase chain reaction) – Kary Mullis, who won the Nobel prize for the invention of this method of making copies of a sequence of DNA, talked openly of the role that LSD played in the process of invention. Chapter 4 of the book looks at the use of LSD in the “creative problem solving” studies of the 1960s. These studies – hard to imagine now, 39 years into the War on Drugs, but we can Change the Archetype – suggest that, used with care, psychedelics can be part of effective training in remembering how to discern the difference between words and things, maps and territories.
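
For readers unfamiliar with the method: PCR works by repeated thermal cycling, and in the idealized case each cycle doubles the number of copies of the target sequence. A toy calculation (standard textbook arithmetic, not from the interview) shows why about thirty cycles turn one molecule into a detectable billion:

```python
# Idealized PCR amplification: copies after n cycles = 2**n.
# Real reactions plateau as primers and nucleotides deplete.
initial_copies = 1
for cycle in (10, 20, 30):
    print(f"after {cycle} cycles: ~{initial_copies * 2 ** cycle:.2e} copies")
# after 10 cycles: ~1.02e+03 copies
# after 20 cycles: ~1.05e+06 copies
# after 30 cycles: ~1.07e+09 copies
```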

In short, this research suggested that psychedelics were useful for seeing the limitations of words as well as their power, perhaps occasioned by the experience of the feedback loops between language and psychedelic experiences that themselves could never be satisfactorily described in language. I argue that Mullis had a different conception of information than mainstream molecular biology – a pragmatic concept steeped in what you can do with words rather than in what they mean. Mullis seems to have thought of information as “algorithms” – recipes of code – while the mainstream view thought of it implicitly as semantic, as “words with meaning.”

iPods, Internet, etc.: Well, in some cases there are direct connections. Perhaps Bill Joy said it best when he said that there was a reason that LSD and Unix were both from Berkeley. What the Dormouse Said by John Markoff came out after I wrote my first paper on Mullis, while I was working on the book, and it was really a confirmation of a lot of what I saw indicated by my conceptual model of what is going on, which is as follows: sexual selection is a good way to model the evolution of information technology. It yields bioluminescence – the most common communication strategy on the planet – chirping insects, singing birds, peacocks fanning their feathers, singing whales, speaking humans, and humans with internet access. These are all techniques of information production, transformation or evaluation. I am persuaded by Geoffrey Miller’s update of Charles Darwin’s argument that language and mind are sexually selected traits, selected not simply for survival or even the representation of fitness, but for their sexiness. Leary: “Intelligence is the greatest aphrodisiac.”

I offer the hypothesis that psychedelics enter the human toolkit as “eloquence adjuncts” – tools and techniques for increasing the efficacy of language to seemingly create reality – different patterns of language (and other attributes of set and setting) literally cause different experiences. The informatic revolution is about applying this ability to create reality with different “codes” to the machine interface. Perhaps this is one of the reasons people like Mitch Kapor (a pioneer of computer spreadsheets), Stewart Brand (founder of a pre-internet computer commons known as the Well), Bob Wallace (one of the original Microsoft seven and an early proponent of shareware) and Mark Pesce were or are all psychonauts.

Q: Cyborg anthropologist Amber Case has written about techno-social wormholes… the instant compression of time and space created every time we make a telephone call… What do you make of this compression of time and space made possible by the engineering “magic” of technology?

Richard Doyle: It’s funny, the role that the telephone call plays as an example in the history of our attempts to model the effects of information technologies. William Gibson famously defined cyberspace as the place where a telephone call takes place (Gibson’s coinage of the term “cyberspace” is a good example of an “inception”). Avital Ronell wrote about Nietzsche’s telephone call to the beyond and interprets the history of philosophy according to a “telephonic logic”. When I was a child my father once threw our telephone into the Atlantic Ocean – that was what he made of the magic of that technology, at least in one moment of anger. This was back in the day when Bell owned your phone and there was some explaining to do. This magic of compression has other effects – my dad got phone calls all day at work, so when he was at home he wanted to turn the phone off. The only way he knew to turn it off was to rip it out of the wall – there was no modular plug, just a wire into the wall – and throw it into the ocean.

So there is more than compression going on here: Deleuze and Guattari, along with the computer scientist Pierre Levy after them, call it “deterritorialization”. The differences between “here” and “there” are being constantly renegotiated as our technologies of interaction develop. Globalization is the collective effect of these deterritorializations and reterritorializations at any given moment.

And the wormhole example is instructive: the forces that would enable such a collapse of space and time as time travel would likely tear us to smithereens. The tensions and torsions of this deterritorialization are part of what is at play in the Wikileaks revolutions; this compression of time and space offers promise for distributed governance as well as turbulence. Time travel through wormholes, by the way, is another example of an inception – Carl Sagan was looking for a reasonable way to transport his fictional aliens in Contact, called Caltech physicist Kip Thorne for help, and Thorne came up with the idea.

Q: The film Vanilla Sky explored the notion of a scientifically induced lucid dream where we can live forever and our world is built out of our memories, “sculpted moment to moment and lived with the romantic abandon of a summer day or the feeling of a great movie or a pop song you always loved”. Can we sculpt ‘real’ reality as if it were a “lucid dream”?

Richard Doyle: Some traditions model reality as a lucid dream. The Diamond Sutra tells us that to be enlightened we must view reality as “a phantom, a dew drop, a bubble.” This does not mean, of course, that reality does not exist, only that appearance has no more persistence than a dream, and that what we call “reality” is our map of reality. When we wake up, the dream that had been so compelling is seen to be what it was: a dream, nothing more or less. Dreams do not lack reality – they are real patterns of information. They just aren’t what we usually think they are. Ditto for “ordinary” reality. Lucid dreaming has been practiced by multiple traditions for a long time – we can no doubt learn new ways of doing so. In the meantime, by recognizing and acting according to the practice of looking beyond appearances, we can find perhaps a smidgen more creative freedom to manifest our intentions in reality.

Q: Paola Antonelli, design curator of MoMA, has written about Existenz Maximum, the ability of portable music devices like the iPod to create “customized realities”, imposing a soundtrack on the movie of our own life. This sounds empowering and godlike – can you expand on this notion? How is technology helping us design every aspect of both our external reality and our internal, psychological reality?

Richard Doyle: Well, the Upanishads and the Book of Luke both suggest that we “get our inner Creator on”, the former by suggesting “Tat Tvam Asi” – there is an aspect of you that is connected to Everything – and the latter by recommending that we look not here or there for the Kingdom of God, but “within.” So if this sounds “godlike”, it is part of a long and persistent tradition. I personally find the phrase “customized realities” redundant, given the role of our always unique programs and metaprograms. So what we need to focus on is: to which aspect of ourselves do we wish to give this creative power? These customized realities could be empowering and godlike for the corporations that own the material, or they could empower the planetary aspect that unites all of us, and everything in between. It is, as always, the challenge of the magus and the artist to decide how we want to customize reality once we know that we can.

Q: The Imaginary Foundation says that “to understand is to perceive patterns”… Some advocates of psychedelic therapy have said that certain chemicals heighten our perception of patterns… They help us “see more.” What exactly are they helping us understand?

Richard Doyle: Understanding! One of the interesting bits of knowledge that I found in my research was some evidence that psychonauts scored better on the Witkin Embedded Figure test, a putative measure of a human subject’s ability to “distinguish a simple geometrical figure embedded in a complex colored figure.” When we perceive the part within the whole, we can suddenly get context, understanding.

Q: An article pointing to the use of psychedelics as catalysts for breakthrough innovation in Silicon Valley says that users…

"employ these cognitive catalysts, de-condition their thinking periodically and come up with the really big connectivity ideas arrived at wholly outside the linear steps of argument. These are the gestalt-perceiving, asterism-forming “aha’s!” that connect the dots and light up the sky with a new archetypal pattern."

This seems to echo what other intellectuals have been saying for ages. You referred to cannabis as “an assassin of referentiality, inducing a butterfly effect in thought. Cannabis induces a parataxis wherein sentences resonate together and summon coherence in the bardos between one statement and another.”

Baudelaire also wrote about cannabis as inducing an artificial paradise of thought:  

“…It sometimes happens that people completely unsuited for word-play will improvise an endless string of puns and wholly improbable idea relationships fit to outdo the ablest masters of this preposterous craft. […and eventually]… Every philosophical problem is resolved. Every contradiction is reconciled. Man has surpassed the gods.”

Anthropologist Henry Munn wrote that:

"Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth… At times… the words leap to mind, one after another, of themselves without having to be searched for: a phenomenon similar to the automatic dictation of the surrealists except that here the flow of consciousness, rather than being disconnected, tends to be coherent: a rational enunciation of meanings.  The spontaneity they liberate is not only perceptual, but linguistic, the spontaneity of speech, of fervent, lucid discourse, astonishing. […] For the inspired one, it is as if existence were uttering itself through him […]

Can you expand a bit on how certain ecodelics (as well as marijuana) can help us de-condition our thinking, have creative breakthroughs as well as intellectual catharsis? How is it that “intoxication” could, under certain conditions, actually improve our cognition and creativity and contribute to the collective intelligence of the species?

Richard Doyle: I would point, again, to Pahnke’s description of ego death. This is by definition an experience in which our maps of the world are humbled. In the breakdown of our ordinary worldview – such as when a (now formerly) secular being such as myself finds himself feeling unmistakably sacred – we get a glimpse of reality without our usual filters. It is just not possible to use the old maps, so we get a glimpse of reality, even an involuntary one. This is very close to the Buddhist practice of exhausting linguistic reference through chanting or koans – suddenly we see the world through something besides our verbal mind.

Ramana Maharshi says that in the silence of the ego we perceive reality – reality IS the breakdown of the ego. Aldous Huxley, who was an extraordinarily adroit and eloquent writer with knowledge of increasingly rare breadth and depth, pointed to a phrase of William Blake’s when trying to sum up his experience: the doors of perception were cleansed. This is a humble act, if you think about it: Huxley, faced with the beauty and grandeur of his mescaline experience, offers the equivalent of “What he said!” Huxley also said that psychedelics offered a respite from “the throttling embrace of the self”, suggesting that we see the world without the usual filters of our egoic self. (…)

And if you look carefully at the studies by pioneers such as Myron Stolaroff and Willis Harman that you reference, as I do in the book, you will see that great care was taken to compose the best contexts for their studies. Subjects, for example, were told not to think about personal problems but to focus on their work at hand, and, astonishingly enough, it seems to have worked. These are very sensitive technologies and we really need much more research to explore their best use. This means more than studying their chemical function - it means studying the complex experiences human beings have with them. Step one has to be accepting that ecodelics are and always have been an integral part of human culture for some subset of the population. (…)

Q: Kevin Kelly refers to technological evolution as following the momentum begun at the Big Bang – he has stated:

"…there is a continuum, a connection back all the way to the Big Bang with these self-organizing systems that make the galaxies, stars, and life, and now is producing technology in the same way. The energies flowing through these things are, interestingly, becoming more and more dense. If you take the amount of energy that flows through one gram per second in a galaxy, it is increased when it goes through a star, and it is actually increased in life…We don’t realize this. We think of the sun as being a hugely immense amount of energy. Yet the amount of energy running through a sunflower per gram per second of the livelihood, is actually greater than in the sun. Actually, it’s so dense that when it’s multiplied out, the sunflower actually has a higher amount of energy flowing through it. "..

Animals have even higher energy usage than the plant, and a jet engine has even higher than an animal. The most energy-dense thing that we know about in the entire universe is the computer chip in your computer. It is sending more energy per gram per second through that than anything we know. In fact, if it was to send it through any faster, it would melt or explode. It is so energy-dense that it is actually at the edge of explosion.”…  

Can you comment on the implications of what he’s saying here?

Richard Doyle: I think maps of “continuity” are crucial and urgently needed. We can model the world as either “discrete” - made up of parts - or “continuous” - composing a whole - to powerful effect. Both are in this sense true. This is not “relativism” but a corollary of that creative freedom to choose our models that seems to be an attribute of consciousness. The mechanistic worldview extracts, separates and reconnects raw materials, labor and energy in ways that produce astonishing order as well as disorder (entropy).

By mapping the world as discrete – such as the difference between one second and another – and uniform – to a clock, there is no difference between one second and another – we have transformed the planet. Consciousness informed by discrete maps of reality has been an actual geological force in a tiny sliver of time. In so doing, we have transformed the biosphere. So you can see just how actual this relation between consciousness, its maps, and earthly reality is. This is why Vernadsky, a geophysicist, thought we needed a new term for the way consciousness functions as a geological force: noosphere.

These discrete maps of reality are so powerful that we forget that they are maps. Now if the world can be cut up into parts, it is only because it forms a unity. A Sufi author commented that the unity of the world was both the most obvious and the most obscure fact. It is obvious because our own lives and the world we inhabit can be seen to continue without any experienced interruption – neither the world nor our lives truly stop and start. This unity can be obscure because in a literal sense we can’t perceive it with our senses – it can only be “perceived” by our minds. We are so effective as separate beings that we forget the whole for the part.

The world is more than a collection of parts, and we can quote Carl Sagan: “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.” Equally beautiful is what Sagan follows up with: “The cosmos is also within us. We are made of star stuff.” Perhaps this is why models such as Kelly’s feel so powerful: reminding ourselves that there is a continuity between the Big Bang and ourselves means we are an aspect of something unfathomably grand, beautiful, complex and unbroken. This is perhaps the “grandeur” Darwin was discussing. And when we experience that grandeur, it can help us think and act in ways appropriate to a geological force.

I am not sure about the claims for energy that Kelly is making – I would have to see the context and the source of his data – but I do know that when it comes to thermodynamics, what he is saying rings true. We are dissipative structures far from equilibrium, meaning that we fulfill the laws of thermodynamics. Even though biological systems such as ourselves are incredibly orderly – and we export that order through our maps onto and into the world – we also yield more entropy than our absence would. Living systems, according to an emerging paradigm associated with Stanley Salthe, Rod Swenson, the aforementioned Margulis and Sagan, Eric Schneider, James J. Kay and others, maximize entropy production, and the universe is seeking to dissipate ever greater amounts of entropy.
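
Kelly’s energy-per-gram-per-second figure is closely related to what astrophysicist Eric Chaisson calls “energy rate density,” and a back-of-envelope check is easy to run. The sketch below uses round textbook values and my own arithmetic, not Kelly’s data:

```python
# Energy rate density: watts of throughput per kilogram of system.
# Round reference values; the point is the ordering, not precision.
systems = {
    "Sun (luminosity / mass)":         (3.8e26, 2.0e30),
    "human body (resting metabolism)": (1.0e2, 7.0e1),
    "CPU (package power / die mass)":  (1.0e2, 1.0e-2),
}
for name, (watts, kilograms) in systems.items():
    print(f"{name:34s} ~{watts / kilograms:9.1e} W/kg")
# Sun ~1.9e-04, human ~1.4e+00, chip ~1.0e+04 W/kg:
# roughly eight orders of magnitude, in the direction Kelly describes.
```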

Order is a way to dissipate yet more energy. We’re thermodynamic beings, so we are always on the prowl for new ways to dissipate energy as heat and create uncertainty (entropy), and consciousness helps us find ever new ways to do so. (In case you are wondering, consciousness is the organized effort to model reality that yields ever-increasing spirals of uncertainty in Deep Time. But you knew that.) It is perhaps in this sense that, again following Carl Sagan, “we are a way for the cosmos to know itself.” That is a pretty great map of continuity.

What I don’t understand in Kelly’s work, and what I need to look at with more attention, is the discontinuity he posits between biology and technology. In my view our maps have made us think of technology as different in kind from biology, but the global mycelial web of fungi suggests otherwise, and our current view of technology seems to intensify this sense of separation even as we get interconnected through technology. I prefer Noosphere to what Kelly calls the Technium because it reminds us of the ways we are biologically interconnected with our technosocial realities. Noosphere sprouts from biosphere.

Q: There is this notion of increasing complexity… Yet in a universe where entropy destroys almost everything, here we are, the cutting edge of evolution, taking the reins and accelerating this emergent complexity… Kurzweil says that this makes us “very important”:

“…It turns out that we are central, after all. Our ability to create models—virtual realities—in our brains, combined with our modest-looking thumbs, has been sufficient to usher in another form of evolution: technology. That development enabled the persistence of the accelerating pace that started with biological evolution. It will continue until the entire universe is at our fingertips.”

What do you think?

Richard Doyle: Well, I think from my remarks already you can see that I agree with Kurzweil here, and can only suggest that it is for this very reason that we must be very creative, careful and cunning with our models. Do we model the technologies that we are developing according to the effects they will have on the planetary whole? Only rarely, though this is what we are trying to do at the Penn State Center for Nanofutures, as are lots of people involved in Science, Technology and Society as well as engineering education. When we develop technologies – and that is the way psychedelics arrived in modern culture, as technologies – we must model their effects not only on the individuals who use them, but on the whole of our ecosystem and planetary society.

If our technological models are based on the premise that this is a dead planet – and most of them very much are; one is called all kinds of names if one suggests otherwise – animist, vitalist, Gaian intelligence agent, names I wear with glee – then we will end up with an asymptotically dead planet. Consciousness will, of course, like the Terminator, “be back” should we perish, but let us hope that it learns to experiment better with its maps and learns to notice reality just a little bit more. I am actually an optimist on this front, and think that a widespread “aha” moment is occurring in which there is a collective recognition of the feedback loops that make up our technological & biological evolution.

Again, I don’t know why Kurzweil seems to think that technological evolution is discontinuous with biological evolution – technology is nested within the network of “wetwares” that make it work, and our wetwares are increasingly interconnected with our technological infrastructure, as the meltdowns in Japan demonstrate, along with the dependence of many of us – we who are more bacterial than human by dry weight – upon a network of pharmaceuticals and electricity for continued life. The E. coli outbreak in Europe is another case in point – our biological reality is linked with the technological reality of supply-chain management. Technological evolution is biological evolution enabled by the maps of reality forged by consciousness. (…)

Whereas technology for many promised the “disenchantment” of the world – the rationalization of this world of the contemplative spirit as everything became a Machine – here was mystical contemplative experience manifesting itself directly within what sociologist Max Weber called the “iron cage of modernity”, Gaia bubbling up through technological “Babylon.”

Now many contemplatives have sought to share their experiences through writing – pages and pages of it. As we interconnect through information technology, we perhaps have the opportunity to repeat this enchanted contemplative experience of radical interconnection on another scale, and through other means. Just say Yes to the Noosphere!”

Richard Doyle, Professor of English and Affiliate Faculty in Information Science and Technology at Pennsylvania State University, in conversation with Jason Silva, Creativity, evolution of mind and the “vertigo of freedom”, Big Think, June 21, 2011. (Illustrations: 1) Randy Mora, Artífices del sonido, 2) Noosphere)

See also:

☞ Rose, Google and the Myceliation of Consciousness
Kevin Kelly on Why the Impossible Happens More Often
Luciano Floridi on the future development of the information society
Luciano Floridi on The Digital Revolution as a Fourth Revolution: “P2P doesn’t mean Pirate to Pirate but Platonist to Platonist”
The Rise of the Conversation Society: information, communication and collaboration
Keen On… James Gleick: Why Cyberspace, As a Mode of Being, Will Never Go Away (TCTV), (video) TechCrunch, Jun 23, 2011
Timothy Leary on cybernetics and a new global culture
Mark Changizi on Humans, Version 3.0.
Cyberspace tag on Lapidarium

Mar
18th
Sun
permalink

Are We “Meant” to Have Language and Music? How Language and Music Mimicked Nature and Transformed Ape to Man


"We’re fish out of water, living in radically unnatural environments and behaving ridiculously for a great ape. So, if one were interested in figuring out which things are fundamentally part of what it is to be human, then those million crazy things we do these days would not be on the list. (…)

At the top of the list of things we do that we’re supposed to be doing, and that are at the core of what it is to be human rather than some other sort of animal, are language and music. Language is the pinnacle of usefulness, and was key to our domination of the Earth (and the Moon). And music is arguably the pinnacle of the arts. Language and music are fantastically complex, and we’re brilliantly capable at absorbing them, and from a young age. That’s how we know we’re meant to be doing them, i.e., how we know we evolved brains for engaging in language and music.

But what if this gets language and music all wrong? What if we’re not, in fact, meant to have language and music? What if our endless yapping and music-filled hours each day are deeply unnatural behaviors for our species? (…)

I believe that language and music are, indeed, not part of our core—that we never evolved by natural selection to engage in them. The reason we have such a head for language and music is not that we evolved for them, but, rather, that language and music evolved—culturally evolved over millennia—for us. Our brains aren’t shaped for these pinnacles of humankind. Rather, these pinnacles of humankind are shaped to be good for our brains.

But how on Earth can one argue for such a view? If language and music have shaped themselves to be good for non-linguistic and amusical brains, then what would their shapes have to be?

They’d have to possess the auditory structure of…nature. That is, we have auditory systems which have evolved to be brilliantly capable at processing the sounds from nature, and language and music would need to mimic those sorts of sounds in order to harness—to “nature-harness,” as I call it—our brain.

And language and music do nature-harness. (…) The two most important classes of auditory stimuli for humans are (i) events among objects (most commonly solid objects), and (ii) events among humans (i.e., human behavior). And, in my research I have shown that the signature sounds in these two auditory domains drive the sounds we humans use in (i) speech and (ii) music, respectively.

For example, the principal source of pitch modulation in the natural world is the Doppler shift: objects moving toward you have a high pitch and objects moving away have a low pitch, and from these pitch modulations a listener can hear an object’s direction of movement relative to his or her position. In the book I provide a battery of converging evidence that melody in music has culturally evolved to sound like the (often exaggerated) Doppler shifts of a person moving in one’s midst. Consider first that a mover’s pitch will modulate within a fixed range, the top and bottom pitches occurring when the mover is headed, respectively, toward and away from you. Do melodies confine themselves to fixed ranges? They tend to, and tessitura is the musical term for this range. In the book I run through a variety of specific predictions.

Here’s one. If melody is “trying” to sound like the Doppler shifts of a mover – and thereby convey to the auditory system the trajectory of a fictional mover – then a faster mover will have a greater difference between its top and bottom pitch. Does faster music tend to have a wider tessitura? That is, does music with a faster tempo – more beats, or footsteps, per second – tend to have a wider tessitura? Notice that the performer of faster-tempo music would ideally like the tessitura to narrow, not widen! But what we found is that, indeed, music having a greater tempo tends to have a wider tessitura, just what one would expect if the meaning of melody is the direction of a mover in your midst.
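
The physics behind this prediction is just the classical Doppler formula for a moving sound source. The sketch below is my illustration of the logic, not Changizi’s code; the source pitch and walking speeds are arbitrary choices:

```python
import math

def doppler_pitch(f_source, speed_mps, angle_deg, c=343.0):
    """Observed frequency (Hz) of a tone emitted by a mover.
    angle_deg: 0 = heading straight toward the listener, 180 = straight away."""
    radial = speed_mps * math.cos(math.radians(angle_deg))  # speed toward listener
    return f_source * c / (c - radial)

f0 = 440.0  # the mover's intrinsic pitch (arbitrary)
for speed in (2.0, 6.0):  # stroll vs. run, in m/s
    top = doppler_pitch(f0, speed, 0)
    bottom = doppler_pitch(f0, speed, 180)
    print(f"{speed} m/s: {bottom:.1f}-{top:.1f} Hz (width {top - bottom:.1f} Hz)")
# 2 m/s sweeps ~437-443 Hz; 6 m/s sweeps ~432-448 Hz. A faster mover
# sweeps a wider pitch range -- the acoustic analogue of the
# tempo-tessitura correlation described above.
```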

The preliminary conclusion of the research is that human speech sounds like solid-object events, and music sounds like human behavior!

That’s just what we expect if we were never meant to do language and music. Language and music have the fingerprints of being unnatural (i.e., of not having their origins via natural selection)…and the giveaway is, ironically, that their shapes are natural (i.e., have the structure of natural auditory events).

We also find this for another core capability that we know we’re not “meant” to do: reading. Writing was invented much too recently for us to have specialized reading mechanisms in the brain (although there are new hints of early writing as old as 30,000 years), and yet reading has the hallmarks of instinct. As I have argued in my research and in my second book, The Vision Revolution, writing slides so well into our brain because it got shaped by cultural evolution to look “like nature,” and, specifically, to have the signature contour-combinations found in natural scenes (which consists mostly of opaque objects strewn about).

My research suggests that language and music aren’t any more part of our biological identity than reading is. Counterintuitively, then, we aren’t “supposed” to be speaking and listening to music. They aren’t part of our “core” after all.

Or, at least, they aren’t part of the core of Homo sapiens as the species originally appeared. But, it seems reasonable to insist that, whether or not language and music are part of our natural biological history, they are indeed at the core of what we take to be centrally human now. Being human today is quite a different thing than being the original Homo sapiens.

So, what is it to be human? Unlike Homo sapiens, we’re grown in a radically different petri dish. Our habitat is filled with cultural artifacts—the two heavyweights being language and music—designed to harness our brains’ ancient capabilities and transform them into new ones.

Humans are more than Homo sapiens. Humans are Homo sapiens who have been nature-harnessed into an altogether novel creature, one designed in part via natural selection, but also in part via cultural evolution.

Mark Changizi, an evolutionary neurobiologist, Are We “Meant” to Have Language and Music?, Discover Magazine, March 15th, 2012. (Illustration: Harnessed)

See also:

Mark Changizi, Music Sounds Like Moving People, Science 2.0, Jan 10, 2010.
☞ Mark Changizi, How To Put Art And Brain Together
☞ Mark Changizi, How we read
Mark Changizi on brain’s perception of the world
A brief history of writing, Lapidarium notes
Mark Changizi on Humans, Version 3.0.

Jan
21st
Sat
permalink

'Human beings are learning machines,' says philosopher (nature vs. nurture)


"The point is that in scientific writing (…) suggest a very inflexible view of human nature, that we are determined by our biology. From my perspective the most interesting thing about the human species is our plasticity, our flexibility. (…)

It is striking in general that human beings mistake the cultural for the natural; you see it in many domains. Take moral values. We assume we have moral instincts: we just know that certain things are right and certain things are wrong. When we encounter people whose values differ from ours we think they must be corrupted or in some sense morally deformed. But this is clearly an instance where we mistake our deeply inculcated preferences for natural law. (…)

Q: At what point with morality does biology stop and culture begin?

One important innate contribution to morality is emotions. An aggressive response to an attack is not learned; it is biological. The question is how emotions that are designed to protect each of us as individuals get extended into generalised rules that spread within a group. One factor may be imitation. Human beings are great imitative learners. Rules that spread in a family can be calibrated across a whole village, leading to conformity in the group and a genuine system of morality.

Nativists will say that morality can emerge without instruction. But with innate domains, there isn’t much need for instruction, whereas in the moral domain, instruction is extensive. Kids learn through incessant correction. Between the ages of 2 and 10, parents correct their children’s behaviour every 8 minutes or so of waking life. In due course, our little monsters become little angels, more or less. This gives us reason to think morality is learned.

Q: One of the strongest arguments for innateness comes from linguists such as Noam Chomsky, who argue that humans are born with the basic rules of grammar already in place. But you disagree with them?

Chomsky singularly deserves credit for giving rise to the new cognitive sciences of the mind. He was instrumental in helping us think about the mind as a kind of machine. He has made some very compelling arguments to explain why everybody with an intact brain speaks grammatically even though children are not explicitly taught the rules of grammar.

But over the past 10 years we have started to see powerful evidence that children might learn language statistically, by unconsciously tabulating patterns in the sentences they hear and using these to generalise to new cases. Children might learn language effortlessly not because they possess innate grammatical rules, but because statistical learning is something we all do incessantly and automatically. The brain is designed to pick up on patterns of all kinds.
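
One concrete version of this “tabulating patterns” is the transitional-probability mechanism studied in infant statistical-learning experiments (Saffran and colleagues): syllable pairs that reliably follow one another are treated as word-internal, while dips in predictability mark word boundaries. A toy sketch of the idea, my illustration rather than anything from the interview:

```python
import random
from collections import Counter

# A pauseless syllable stream built from three nonsense "words",
# as in classic statistical-learning experiments.
random.seed(0)
words = ["bidaku", "padoti", "golabu"]
stream = [random.choice(words) for _ in range(300)]
syllables = [w[i:i + 2] for w in stream for i in range(0, 6, 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def transitional_probability(a, b):
    """Estimated P(b immediately follows a) in the stream."""
    return pair_counts[(a, b)] / first_counts[a]

print(transitional_probability("bi", "da"))  # 1.0: always word-internal
print(transitional_probability("ku", "pa"))  # ~0.33: crosses a word boundary
```

A learner tracking nothing but these statistics can segment the stream into words without being taught any rules, which is the kind of mechanism being pointed to here.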

Q: How hard has it been to put this alternative view on the table, given how Chomskyan thought has dominated the debate in recent years?

Chomsky’s views about language are so deeply ingrained among academics that those who take statistical learning seriously are subject to a kind of ridicule. There is very little tolerance for dissent. This has been somewhat limiting, but there is a new generation of linguists who are taking the alternative very seriously, and it will probably become a very dominant position in the next generation.

Q: You describe yourself as an “unabashed empiricist” who favours nurture over nature. How did you come to this position, given that on many issues the evidence is still not definitive either way?

Actually I think the debate has been settled. You only have to stroll down the street to see that human beings are learning machines. Sure, for any given capacity the debate over biology versus culture will take time to resolve. But if you compare us with other species, our degree of variation is just so extraordinary and so obvious that we know prior to doing any science that human beings are special in this regard, and that a tremendous amount of what we do is as a result of learning. So empiricism should be the default position. The rest is just working out the details of how all this learning takes place.

Q: What are the implications of an empirical understanding of human nature for the way we go about our lives? How should it affect the way we behave?

In general, we need to cultivate a respect for difference. We need to appreciate that people with different values to us are not simply evil or ignorant, and that just like us they are products of socialisation. This should lead to an increase in international understanding and respect. We also need to understand that group differences in performance are not necessarily biologically fixed. For example, when we see women performing less well than men in mathematics, we should not assume that this is because of a difference in biology.

Q: How much has cognitive science contributed to our understanding of what it is to be human, traditionally a philosophical question?

Cognitive science is in the business of settling long-running philosophical debates on human nature, innate knowledge and other issues. The fact that these theories have been churning about for a couple of millennia without any consensus is evidence that philosophical methods are better at posing questions than answering them. Philosophy tells us what is possible, and science tells us what is true.

Cognitive science has transformed philosophy. At the beginning of the 20th century, philosophers changed their methodology quite dramatically by adopting logic. There has been an equally important revolution in 21st-century philosophy in that philosophers are turning to the empirical sciences and to some extent conducting experimental work themselves to settle old questions. As a philosopher, I hardly go a week without conducting an experiment.

My whole working day has changed because of the infusion of science.”

Jesse Prinz is a distinguished professor of philosophy at the City University of New York, specialising in the philosophy of psychology. He is a pioneer in experimental philosophy, using findings from the cognitive sciences, anthropology and other fields to develop empiricist theories of how the mind works. He is the author of The Emotional Construction of Morals (Oxford University Press, 2007), Gut Reactions (OUP, 2004), Furnishing the Mind (MIT Press, 2002) and Beyond Human Nature: How culture and experience make us who we are. From 'Human beings are learning machines,' says philosopher, New Scientist, Jan 20, 2012. (Illustration: Fritz Kahn, British Library)

See also:

Jesse Prinz: Morality is a Culturally Conditioned Response
Human Nature. Sapolsky, Maté, Wilkinson, Gilligan, discuss on human behavior and the nature vs. nurture debate

Jan
11th
Wed
permalink

Nicholas Carr on Books That Are Never Done Being Written


"I recently got a glimpse into the future of books. A few months ago, I dug out a handful of old essays I’d written about innovation, combined them into a single document, and uploaded the file to Amazon’s Kindle Direct Publishing service. Two days later, my little e-book was on sale at Amazon’s site. The whole process couldn’t have been simpler.

Then I got the urge to tweak a couple of sentences in one of the essays. I made the edits on my computer and sent the revised file back to Amazon. The company quickly swapped out the old version for the new one. I felt a little guilty about changing a book after it had been published, knowing that different readers would see different versions of what appeared to be the same edition. But I also knew that the readers would be oblivious to the alterations. (…)

When Johannes Gutenberg invented movable type a half-millennium ago, he also gave us immovable text. Before Gutenberg, books were handwritten by scribes, and no two copies were exactly the same. (…) With the arrival of the letterpress, thousands of identical copies could enter the marketplace simultaneously. The publication of a book, once a nebulous process, became an event.

A new set of literary workers coalesced in publishing houses, collaborating with writers to perfect texts before they went on press. The verb “to finalize” became common in literary circles, expressing the permanence of printed words. (…)

Beyond giving writers a spur to eloquence, what the historian Elizabeth Eisenstein calls “typographical fixity” served as a cultural preservative. It helped to protect original documents from corruption, providing a more solid foundation for the writing of history. It established a reliable record of knowledge, aiding the spread of science. It accelerated the standardization of everything from language to law. The preservative qualities of printed books, Ms. Eisenstein argues, may be the most important legacy of Gutenberg’s invention.

Once digitized, a page of words loses its fixity. It can change every time it’s refreshed on a screen. A book page turns into something like a Web page, able to be revised endlessly after its initial uploading. There’s no technological constraint on perpetual editing, and the cost of altering digital text is basically zero. As electronic books push paper ones aside, movable type seems fated to be replaced by movable text.

That’s an attractive development in many ways. It makes it easy for writers to correct errors and update facts. (…)

Even literary authors will be tempted to keep their works fresh. Historians and biographers will be able to revise their narratives to account for recent events or newly discovered documents. Polemicists will be able to bolster their arguments with new evidence. Novelists will be able to scrub away the little anachronisms that can make even a recently published story feel dated.

But as is often the case with digitization, the boon carries a bane. The ability to alter the contents of a book will be easy to abuse. School boards may come to exert even greater influence over what students read. They’ll be able to edit textbooks that don’t fit with local biases. Authoritarian governments will be able to tweak books to suit their political interests. And the edits can ripple backward. Because e-readers connect to the Internet, the works they contain can be revised remotely, just as software programs are updated today. Movable text makes a lousy preservative.

Such abuses can be prevented through laws and software protocols. What may be more insidious is the pressure to fiddle with books for commercial reasons. Because e-readers gather enormously detailed information on the way people read, publishers may soon be awash in market research. They’ll know how quickly readers progress through different chapters, when they skip pages, and when they abandon a book.

The promise of stronger sales and profits will make it hard to resist tinkering with a book in response to such signals, adding a few choice words here, trimming a chapter there, maybe giving a key character a quick makeover. What will be lost, or at least diminished, is the sense of a book as a finished and complete object, a self-contained work of art.

Not long before he died, John Updike spoke eloquently of a book’s “edges,” the boundaries that give shape and integrity to a literary work and that for centuries have found their outward expression in the indelibility of printed pages. It’s those edges that give a book its solidity, allowing it to stand up to the vagaries of fashion and the erosions of time. And it’s those edges that seem fated to blur as the words of books go from being stamped permanently on sheets of paper to being rendered temporarily on flickering screens.

Nicholas Carr, American writer, Books That Are Never Done Being Written, WSJ.com, Dec 31, 2011. (Illustration: Daniel Baxter)

See also:

What would happen if the printed book had just been invented in a high-tech world in which people had never done their reading from anything but computer screens?

Jan
6th
Fri
permalink

Why Do Languages Die? Urbanization, the state and the rise of nationalism


"The history of the world’s languages is largely a story of loss and decline. At around 8000 BC, linguists estimate that upwards of 20,000 languages may have been in existence. Today the number stands at 6,909 and is declining rapidly. By 2100, it is quite realistic to expect that half of these languages will be gone, their last speakers dead, their words perhaps recorded in a dusty archive somewhere, but more likely undocumented entirely. (…)

The problem with globalization in the latter sense is that it is the result, not a cause, of language decline. (…) It is only when the state adopts a trade language as official and, in a fit of linguistic nationalism, foists it upon its citizens, that trade languages become “killer languages.” (…)

Most importantly, what both of the above answers overlook is that speaking a global language or a language of trade does not necessitate the abandonment of one’s mother tongue. The average person on this planet speaks three or four languages. (…)

The truth is, most people don’t “give up” the languages they learn in their youth. (…) To wipe out a language, one has to enter the home and prevent the parents from speaking their native language to their children.

Given such a preposterous scenario, we return to our question — how could this possibly happen?

One good answer is urbanization. If a Gikuyu and a Giryama meet in Nairobi, they won’t likely speak each other’s mother tongue, but they very likely will speak one or both of the trade languages in Kenya — Swahili and English. Their kids may learn a smattering of words in the heritage languages from their parents, but by the third generation any vestiges of those languages in the family will likely be gone. In other cases, extremely rural communities are drawn to the relatively easier lifestyle in cities, until sometimes entire villages are abandoned. Nor is this a recent phenomenon.

The first case of massive language die-off was probably during the Agrarian (Neolithic) Revolution, when humanity first adopted farming, abandoned the nomadic lifestyle, and created permanent settlements. As the size of these communities grew, so did the language they spoke. But throughout most of history, and still in many areas of the world today, 500 or fewer speakers per language has been the norm. Like the people who spoke them, these languages were constantly in flux. No language could grow very large, because the community that spoke it could only grow so large itself before it fragmented. The language followed suit, soon becoming two languages. Permanent settlements changed all this, and soon larger and larger populations could stably speak the same language. (…)

"In primitive times every migration causes not only geographical but also intellectual separation of clans and tribes. Economic exchanges do not yet exist; there is no contact that could work against differentiation and the rise of new customs. The dialect of each tribe becomes more and more different from the one that its ancestors spoke when they were still living together. The splintering of dialects goes on without interruption. The descendants no longer understand one other.… A need for unification in language then arises from two sides. The beginnings of trade make understanding necessary between members of different tribes. But this need is satisfied when individual middlemen in trade achieve the necessary command of language.”

Ludwig von Mises, Nation, State, and Economy (Online edition, 1919; 1983), Ludwig von Mises Institute, pp. 46–47.

Thus urbanization is an important factor in language death. To be sure, the wondrous features of cities that draw immigrants — greater economies of scale, decreased search costs, increased division of labor — are all made possible with capitalism, and so in this sense languages may die for economic reasons. But this is precisely the type of language death that shouldn’t concern us (unless you’re a linguist like me), because urbanization is really nothing more than the demonstrated preferences of millions of people who wish to take advantage of all the fantastic benefits that cities have to offer.

In short, these people make the conscious choice to leave an environment where network effects and sociological benefits exist for speaking their native language, and to exchange it for one that offers a greater range of economic possibilities but no such social benefits for speaking the language. If this were the only cause of language death — or even just the biggest one — then there would be little more to say about it. (…)

Far too many well-intentioned individuals are too quick to substitute their valuations for those of the last speakers of indigenous languages this way. Were it up to them, these speakers would be consigned to misery and poverty and deprived of participation in the world’s advanced economies in order that their language might be passed on. To be sure, these speakers themselves often fall victim to the mistaken ideology that one language necessarily displaces or interferes with another.

Although the South African Department of Education is trying to develop teaching materials in the local African languages, for example, many parents are pushing back; they want their children taught only in English. In Dominica, parents go further still, refusing even to speak the local language, Patwa, to their children.[1] Were they made aware of the falsity of this notion of language displacement, perhaps they would be less quick to stop speaking their language to their children. But the decision is ultimately theirs to make, and theirs alone.

Urbanization, however, is not the only cause of language death. There is another that, I’m sad to say, almost none of the linguists who work on endangered languages give much thought to, and that is the state. The state is the only entity capable of reaching into the home and forcibly altering the process of language socialization in an institutionalized way.

How? The traditional method was simply to kill or remove indigenous and minority populations, as was done as recently as 1923 in the United States in the last conflict of the Indian Wars. More recently this happens through indirect means — whether intentional or otherwise — the primary method of which has been compulsory state schooling.

There is no more pernicious assault on the cultural practices of minority populations than a standardized, Anglicized compulsory education. It is not just that children are forcibly removed from the socialization process in the home, required to speak an official language and punished (often corporally) for doing otherwise. It is not just that schools redefine success, away from those things valued by the community, and towards those things that make someone a better citizen of the state. No, the most significant impact of compulsory state education is that it ingrains in children the idea that their language and their culture are worthless, of no use in the modern classroom or society, and that they merely serve to set them apart negatively from their peers, as an object of vicious torment.

But these languages clearly do have value, if for no other reason than simply because people value them. Local and minority languages are valued by their speakers for all sorts of reasons, whether it be for use in the local community, communicating with one’s elders, a sense of heritage, the oral and literary traditions of that language, or something else entirely. Again, the praxeologist is not in a position to evaluate these beliefs. The praxeologist merely notes that free choice in language use and free choice in association, one not dictated by the edicts of the state, will best satisfy the demand of individuals, whether for minority languages or lingua francas. What people find useful, they will use.

By contrast, the state values none of these things. For the state, the goal is to bind individuals to itself, to an imagined homogeneous community of good citizens, rather than their local community. National ties trump local ones in the eyes of the state. Free choice in association is disregarded entirely. And so the state forces many indigenous people to become members of a foreign community, where they are a minority and their language is scorned, as in the case of boarding schools. Whereas at home, mastering the native language is an important part of functioning in the community and earning prestige, and thus something of value, at school it becomes a black mark and a detriment. Given the prisonlike way schools are run, and how they exhibit similar intense (and sometimes dangerous) pressures from one’s peers, minority-language-speaking children would be smart to disassociate themselves as quickly as possible from their cultural heritage.

Mises himself, though sometimes falling prey to common fallacies regarding language like linguistic determinism and ethnolinguistic isomorphism, was aware of this distinction between natural language decline and language death brought on by the state. (…)

This is precisely what the Bureau of Indian Affairs accomplished by coercing indigenous children into attending boarding schools. Those children were cut off from their culture and language — their nation — until they had effectively assimilated American ideologies regarding minority languages, namely, that English is good and all else is bad.

Nor is this the only way the state affects language. The very existence of a modern nation-state, and the ideology it encompasses, is antithetical to linguistic diversity. It is predicated on the idea of one state, one nation, one people. In Nation, State, and Economy, Mises points out that, prior to the rise of nationalism in the 17th and 18th centuries, the concept of a nation did not refer to a political unit like state or country as we think of it today.

A “nation” instead referred to a collection of individuals who share a common history, religion, cultural customs and — most importantly — language. Mises even went so far as to claim that “the essence of nationality lies in language.”[2] The “state” was a thing apart, referring to the nobility or princely state, not a community of people (hence Louis XIV’s famous quip, “L’état c’est moi.”).[3] In that era, a state might consist of many nations, and a nation might subsume many states.

The rise of nationalism changed all this. As Robert Lane Greene points out in his excellent book, You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity,

The old blurry linguistic borders became inconvenient for nationalists. To build nations strong enough to win themselves a state, the people of a would-be nation needed to be welded together with a clear sense of community. Speaking a minority dialect or refusing to assimilate to a standard wouldn’t do.[4]

Mises himself elaborated on this point. Despite his belief in the value of a liberal democracy, which would remain with him for the rest of his life, Mises realized early on that the imposition of democracy over multiple nations could only lead to hegemony and assimilation:

In polyglot territories, therefore, the introduction of a democratic constitution does not mean the same thing at all as introduction of democratic autonomy. Majority rule signifies something quite different here than in nationally uniform territories; here, for a part of the people, it is not popular rule but foreign rule. If national minorities oppose democratic arrangements, if, according to circumstances, they prefer princely absolutism, an authoritarian regime, or an oligarchic constitution, they do so because they well know that democracy means the same thing for them as subjugation under the rule of others.[5]

From the ideology of nationalism was also born the principle of irredentism, the policy of incorporating historically or ethnically related peoples into the larger umbrella of a single state, regardless of their linguistic differences. As Greene points out, for example,

By one estimate, just 2 or 3 percent of newly minted “Italians” spoke Italian at home when Italy was unified in the 1860s. Some Italian dialects were as different from one another as modern Italian is from modern Spanish.[6]

This in turn prompted the Italian statesman Massimo d’Azeglio (1798–1866) to say, “We have created Italy. Now we need to create Italians.” And so these Italian languages soon became yet another casualty of the nation-state.

Mises once presciently predicted that,

If [minority nations] do not want to remain politically without influence, then they must adapt their political thinking to that of their environment; they must give up their special national characteristics and their language.[7]

This is largely the story of the world’s languages. It is, as we have seen, the history of the state, a story of nationalistic furor, and of assimilation by force. Only when we abandon this socialist and utopian fantasy of one state, one nation, one people will this story begin to change.”

Danny Hieber is a linguist working to document and revitalize the world’s endangered languages, Why Do Languages Die?, Ludwig von Mises Institute, Jan 04, 2012. (Illustration: The Evolution of the Armenian Alphabet)

[1] Amy L. Paugh, Playing With Languages: Children and Change in a Caribbean Village (2012), Berghahn Books.
[2] Ludwig von Mises, Human Action: A Treatise on Economics (Scholar’s Edition, 2010), Auburn, AL: Ludwig von Mises Institute, p. 37.
[3] “I am the state.”
[4] Robert Lane Greene, You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity (Kindle Edition, 2011), Delacorte Press, p. 132.
[5] Mises, Nation, State, and Economy, p. 77.
[6] Greene, You Are What You Speak, p. 141.
[7] Mises, Nation, State, and Economy, p. 77.

“Isn’t language loss a good thing, because fewer languages mean easier communication among the world’s people? Perhaps, but it’s a bad thing in other respects. Languages differ in structure and vocabulary, in how they express causation and feelings and personal responsibility, hence in how they shape our thoughts. There’s no single “best” language for all purposes; instead, different languages are better suited for different purposes.

For instance, it may not have been an accident that Plato and Aristotle wrote in Greek, while Kant wrote in German. The grammatical particles of those two languages, plus their ease in forming compound words, may have helped make them the preeminent languages of western philosophy.

Another example, familiar to all of us who studied Latin, is that highly inflected languages (ones in which word endings suffice to indicate sentence structure) can use variations of word order to convey nuances impossible with English. Our English word order is severely constrained by having to serve as the main clue to sentence structure. If English becomes a world language, that won’t be because English was necessarily the best language for diplomacy.”

— Jared Diamond, American scientist and author, currently Professor of Geography and Physiology at UCLA, The Third Chimpanzee: The Evolution & Future of the Human Animal, Hutchinson Radius, 1991.

See also:

Lists of endangered languages, Wiki
☞ Salikoko S. Mufwene, How Languages Die (pdf), University of Chicago, 2006
☞ K. David Harrison, When Languages Die. The Extinction of the World’s Languages and the Erosion of Human Knowledge (pdf), Oxford University Press, 2007

"It is commonly agreed by linguists and anthropologists that the majority of languages spoken now around the globe will likely disappear within our lifetime. The phenomenon known as language death has started to accelerate as the world has grown smaller. "This extinction of languages, and the knowledge therein, has no parallel in human history. K. David Harrison’s book is the first to focus on the essential question, what is lost when a language dies? What forms of knowledge are embedded in a language’s structure and vocabulary? And how harmful is it to humanity that such knowledge is lost forever?"

Nicholas Ostler on The Last Lingua Franca: English Until the Return of Babel, Lapidarium notes
☞ Henry Hitchings, What’s the language of the future?, Salon, Nov 6, 2011.

Dec
17th
Sat
permalink

Infinite Stupidity. Social evolution may have sculpted us not to be innovators and creators as much as to be copiers


A review of some big events

"Obviously one of the big events in our history was the origin of our planet, about 4.5 billion years ago. And what’s fascinating is that about 3.8 billion years ago, only about seven or eight hundred million years after the origin of our planet, life arose. That life was simple replicators, things that could make copies of themselves. And we think that life was a little bit like the bacteria we see on earth today. It would be the ancestors of the bacteria we see on earth today.

That life ruled the world for 2 billion years, and then about 1.5 billion years ago, a new kind of life emerged. These were the eukaryotic cells. They were a little bit different kind of cell from bacteria. And actually the kind of cells we are made of. And again, these organisms that were eukaryotes were single-celled, so even 1.5 billion years ago, we still just had single-celled organisms on earth. But it was a new kind of life.

It was another 500 million years before we had anything like a multicellular organism, and it was another 500 million years after that before we had anything really very interesting. So, about 500 million years ago, the plants and the animals started to evolve. And I think everybody would agree that this was a major event in the history of the world, because, for the first time, we had complex organisms.

After about 500 million years ago, things like the plants evolved, the fish evolved, lizards and snakes, dinosaurs, birds, and eventually mammals. And then it was really just six or seven million years ago, within the mammals, that the lineage that we now call the hominins arose. And they would be direct ancestors of us. And then, within that lineage that arose about six or seven million years ago, it was only about 200,000 years ago that humans finally evolved.

Idea of idea evolution

And so, it was really just 99.99 percent of the way through the history of this planet that humans finally arose. But in that 0.01 percent of life on earth, we’ve utterly changed the planet. And the reason is that, with the arrival of humans 200,000 years ago, a new kind of evolution was created. The old genetical evolution that had ruled for 3.8 billion years now had a competitor, and that new kind of evolution was ideas.

It was a true form of evolution, because now ideas could arise, and they could jump from mind to mind, without genes having to change. So, populations of humans could adapt at the level of ideas. Ideas could accumulate. We call this cumulative cultural adaptation. And so, cultural complexity could emerge and arise orders and orders of magnitude faster than genetic evolution.

Now, I think most of us take that utterly for granted, but it has completely rewritten the way life evolves on this planet because, with the arrival of our species, everything changed. Now a single species, using an idea evolution that could proceed apace independently of genes, was able to adapt to nearly every environment on earth, and to spread around the world as no other species had done. All other species are limited to places on earth that their genes adapt them to. But we were able to adapt at the level of our cultures to every place on earth. (…)

If we go back in our lineage 2 million years or so, there was a species known as Homo erectus. Homo erectus was an upright ape that lived on the African savannah. It could make tools, but they were very limited tools, and those tools, the archaeological record tells us, didn’t change for about 1.5 million years. That is, until about the time they went extinct. That is, they made the same tools over and over and over again, without any real changes to them.

If we move forward in time a little bit, it’s not even clear that our very close cousins the Neanderthals, who we know are related to us at 99.5 or 99.6 percent in the sequences of their genes, had what we call idea evolution. Sure enough, the tools that they made were more complex than those of Homo erectus. But in the 300,000 or so years that they spent in Europe, their toolkit barely changed. So there’s very little evolution going on.

So there’s something really very special about this new species, humans, that arose and invented this new kind of evolution, based on ideas. And so it’s useful for us to ask, what is it about humans that distinguishes them? It must have been a tiny genetic difference between us and the Neanderthals because, as I said, we’re so closely related to them genetically, a tiny genetic difference that had a vast cultural potential.

That difference is something that anthropologists and archaeologists call social learning. It’s a very difficult concept to define, but when we talk about it, all of us humans know what it means. And it seems to be the case that only humans have the capacity to learn complex new or novel behaviors, simply by watching and imitating others. And there seems to be a second component to it, which is that we seem to be able to get inside the minds of other people who are doing things in front of us, and understand why it is they’re doing those things. These two things together, we call social learning.

Many people respond that, oh, of course the other animals can do social learning, because we know that the chimpanzees can imitate each other, and we see all sorts of learning in animals like dolphins and the other monkeys, and so on. But the key point about social learning is that this minor difference between us and the other species forms an unbridgeable gap between us and them. Because, whereas all of the other animals can pick up the odd behavior by having their attention called to something, only humans seem to be able to select, among a range of alternatives, the best one, and then to build on that alternative, and to adapt it, and to improve upon it. And so, our cultures cumulatively adapt, whereas all other animals seem to do the same thing over and over and over again.

Even though other animals can learn, and they can even learn in social situations, only humans seem to be able to put these things together and do real social learning. And that has led to this idea evolution. What’s a tiny difference between us genetically has opened up an unbridgeable gap, because only humans have been able to achieve this cumulative cultural adaptation. (…)

I’m interested in this because I think this capacity for social learning, which we associate with our intelligence, has actually sculpted us in ways that we would have never anticipated. And I want to talk about two of those ways that I think it has sculpted us. One of the ways has to do with our creativity, and the other has to do with the nature of our intelligence as social animals.

One of the first things to be aware of when talking about social learning is that it plays the same role within our societies, acting on ideas, as natural selection plays within populations of genes. Natural selection is a way of sorting among a range of genetic alternatives, and finding the best one. Social learning is a way of sifting among a range of alternative options or ideas, and choosing the best one of those. And so, we see a direct comparison between social learning driving idea evolution, by selecting the best ideas — we copy people that we think are successful, we copy good ideas, and we try to improve upon them — and natural selection, driving genetic evolution within societies, or within populations.

I think this analogy needs to be taken very seriously, because just as natural selection has acted on genetic populations, and sculpted them, we’ll see how social learning has acted on human populations and sculpted them.

What do I mean by “sculpted them”? Well, I mean that it’s changed the way we are. And here’s one reason why. If we think that humans have evolved as social learners, we might be surprised to find out that being social learners has made us less intelligent than we might like to think we are. And here’s the reason why.

If I’m living in a population of people, and I can observe those people, and see what they’re doing, seeing what innovations they’re coming up with, I can choose among the best of those ideas, without having to go through the process of innovation myself. So, for example, if I’m trying to make a better spear, I really have no idea how to make that better spear. But if I notice that somebody else in my society has made a very good spear, I can simply copy him without having to understand why.

What this means is that social learning may have set up a situation in humans where, over the last 200,000 years or so, we have been selected to be very, very good at copying other people, rather than innovating on our own. We like to think we’re a highly inventive, innovative species. But social learning means that most of us can make use of what other people do, and not have to invest the time and energy in innovation ourselves.

Now, why wouldn’t we want to do that? Why wouldn’t we want to innovate on our own? Well, innovation is difficult. It takes time. It takes energy. Most of the things we try to do, we get wrong. And so, if we can survey, if we can sift among a range of alternatives of people in our population, and choose the best one that’s going at any particular moment, we don’t have to pay the costs of innovation, the time and energy ourselves. And so, we may have had strong selection in our past to be followers, to be copiers, rather than innovators.

This gives us a whole new slant on what it means to be human, and I think, in many ways, it might fit with some things that we realize are true about ourselves when we really look inside ourselves. We can all think of things that have made a difference in the history of life. The first hand axe, the first spear, the first bow and arrow, and so on. And we can ask ourselves, how many of us have had an idea that would have changed humanity? And I think most of us would say, well, that sets the bar rather high. I haven’t had an idea that would change humanity. So let’s lower the bar a little bit and say, how many of us have had an idea that maybe just influenced others around us, something that others would want to copy? And I think even then, very few of us can say there have been very many things we’ve invented that others would want to copy.

This says to us that social evolution may have sculpted us not to be innovators and creators as much as to be copiers, because this extremely efficient process that social learning allows us to do, of sifting among a range of alternatives, means that most of us can get by drawing on the inventions of others.

The formation of social groups

Now, why do I talk about this? It sounds like it could be a somewhat dry subject, that maybe most of us are copiers or followers rather than innovators. And what we want to do is imagine that our history over the last 200,000 years has been a history of slowly and slowly and slowly living in larger and larger and larger groups.

Early on in our history, it’s thought that most of us lived in bands of maybe five to 25 people, and that bands formed bands of bands that we might call tribes. And maybe tribes were 150 people or so. And then tribes gave way to chiefdoms that might have been thousands of people. And chiefdoms eventually gave way to nation-states that might have been tens of thousands or even hundreds of thousands, or millions, of people. And so, our evolutionary history has been one of living in larger and larger and larger social groups.

What I want to suggest is that that evolutionary history will have selected for less and less and less innovation in individuals, because a little bit of innovation goes a long way. If we imagine that there’s some small probability that someone is a creator or an innovator, and the rest of us are followers, we can see that one or two people in a band is enough for the rest of us to copy, and so we can get on fine. And, because social learning is so efficient and so rapid, we don’t need all to be innovators. We can copy the best innovations, and all of us benefit from those.

But now let’s move to a slightly larger social group. Do we need more innovators in a larger social group? Well, no. The answer is, we probably don’t. We probably don’t need as many as we need in a band. Because in a small band, we need a few innovators to get by. We have to have enough new ideas coming along. But in a larger group, a small number of people will do. We don’t have to scale it up. We don’t have to have 50 innovators where we had five in the band, if we move up to a tribe. We can still get by with those three or four or five innovators, because all of us in that larger social group can take advantage of their innovations.

Language is the way we exchange ideas

And here we can see a very prominent role for language. Language is the way we exchange ideas. And our eyes allow us to see innovations and language allows us to exchange ideas. And language can operate in a larger society, just as efficiently as it can operate in a small society. It can jump across that society in an instant.

You can see where I’m going. As our societies get larger and larger, there’s no need, in fact, there’s even less of a need for any one of us to be an innovator, whereas there is a great advantage for most of us to be copiers, or followers. And so, a real worry is that our capacity for social learning, which is responsible for all of our cumulative cultural adaptation, all of the things we see around us in our everyday lives, has actually promoted a species that isn’t so good at innovation. It allows us to reflect on ourselves a little bit and say, maybe we’re not as creative and as imaginative and as innovative as we thought we were, but extraordinarily good at copying and following.

If we apply this to our everyday lives and we ask ourselves, do we know the answers to the most important questions in our lives? Should you buy a particular house? What mortgage product should you have? Should you buy a particular car? Who should you marry? What sort of job should you take? What kind of activities should you do? What kind of holidays should you take? We don’t know the answers to most of those things. And if we really were the deeply intelligent and imaginative and innovative species that we thought we were, we might know the answers to those things.

And if we ask ourselves how it is we come across the answers, or acquire the answers to many of those questions, most of us realize that we do what everybody else is doing. This herd instinct, I think, might be an extremely fundamental part of our psychology that was perhaps an unexpected and unintended, you might say, byproduct of our capacity for social learning, that we’re very, very good at being followers rather than leaders. A small number of leaders or innovators or creative people is enough for our societies to get by.

Now, the reason this might be interesting is that, as the world becomes more and more connected, as the Internet connects us and wires us all up, we can see that the long-term consequence of this is that humanity is moving in a direction where we need fewer and fewer and fewer innovative people, because now an innovation that you have somewhere on one corner of the earth can instantly travel to another corner of the earth, in a way that would never have been possible 10 years ago, 50 years ago, 500 years ago, and so on. And so, we might see that there has been this tendency for our psychology and our humanity to be less and less innovative, at a time when, in fact, we may need to be more and more innovative, if we’re going to be able to survive the vast numbers of people on this earth.

That’s one consequence of social learning, that it has sculpted us to be very shrewd and intelligent at copying, but perhaps less shrewd at innovation and creativity than we’d like to think. Few of us are as creative as we’d like to think we are. I think that’s been one perhaps unexpected consequence of social learning.

Another side of social learning I’ve been thinking about (it’s a bit abstract, but I think it’s a fascinating one) goes back again to this analogy between natural selection, acting on genetic variation, and social learning, acting on variation in ideas. And any evolutionary process like that has to have both a sorting mechanism, natural selection, and what you might call a generative mechanism, a mechanism that can create variety.

We all know what that mechanism is in genes. We call it mutation, and we know that from parents to offspring, genes can change, genes can mutate. And that creates the variety that natural selection acts on. And one of the most remarkable stories of nature is that natural selection, acting on this mindlessly-generated genetic variation, is able to find the best solution among many, and successively add those solutions, one on top of the other. And through this extraordinarily simple and mindless process, create things of unimaginable complexity. Things like our cells, eyes and brains and hearts, and livers, and so on. Things of unimaginable complexity, that we don’t even understand and none of us could design. But they were designed by natural selection.

Where do ideas come from?

Now let’s take this analogy of a mindless process (there’s a parallel between social learning driving evolution at the idea level and natural selection driving evolution at the genetic level) and ask what it means for the generative mechanism in our brains.

Well, where do ideas come from? For social learning to be a sorting process that has varieties to act on, we have to have a variety of ideas. And where do those new ideas come from?

The idea that I’ve been thinking about, and that I think is worth contemplating about our own minds, is: what is the generative mechanism? If we do have any creativity at all and we are innovative in some ways, what’s the nature of that generative mechanism for creating new ideas?

This is a question that’s been asked for decades. What is the nature of the creative process? Where do ideas come from? And let’s go back to genetic evolution and remember that, there, the generative mechanism is random mutation.

Now, what do we think the generative mechanism is for idea evolution? Do we think it’s random mutation of some sort, of ideas? Well, all of us think that it’s better than that. All of us think that somehow we can come up with good ideas in our minds. And whereas natural selection has to act on random variation, social learning must be acting on directed variation. We know what direction we’re going.

But, we can go back to our earlier discussion of social learning, and ask the question, well, if you were designing a new hand axe, or a new spear, or a new bow and a new arrow, would you really know how to make a spear fly better? Would you really know how to make a bow a better bow? Would you really know how to shape an arrowhead so that it penetrated its prey better? And I think most of us realize that we probably don’t know the answers to those questions. And that suggests to us that maybe our own creative process rests on a generative mechanism that isn’t very much better than random itself.

And I want to go further, and suggest that our mechanism for generating ideas maybe couldn’t even be much better than random itself. And this really gives us a different view of ourselves as intelligent organisms. Rather than thinking that we know the answers to everything, could it be the case that the mechanism our brain uses for coming up with new ideas is a little bit like the mechanism our genes use for coming up with new genetic variants, which is to randomly mutate the ideas that we have, just as genes are randomly mutated?

Now, it sounds incredible. It sounds insane. It sounds mad. Because we think of ourselves as so intelligent. But when we really ask ourselves about the nature of any evolutionary process, we have to ask ourselves whether it could be any better than random, because in fact, random might be the best strategy.

Genes could never possibly know how to mutate themselves, because they could never anticipate the direction the world was going. No gene knows that we’re having global warming at the moment. No gene knew 200,000 years ago that humans were going to evolve culture. Well, the best strategy for any exploratory mechanism, when we don’t know the nature of the processes we’re exploring, is to throw out random attempts at understanding that field or that space we’re trying to explore.

And I want to suggest that the creative process inside our brains, which relies on social learning, that creative process itself never could have possibly anticipated where we were going as human beings. It couldn’t have anticipated 200,000 years ago that, you know, a mere 200,000 years later, we’d have space shuttles and iPods and microwave ovens.

What I want to suggest is that any process of evolution that relies on exploring an unknown space, such as genes or such as our neurons exploring the unknown space in our brains, and trying to create connections in our brains, and such as our brain’s trying to come up with new ideas that explore the space of alternatives that will lead us to what we call creativity in our social world, might be very close to random.

We know they’re random in the genetic case. We think they’re random in the case of neurons exploring connections in our brain. And I want to suggest that our own creative process might be pretty close to random itself. And that our brains might be whirring around at a subconscious level, creating ideas over and over and over again, and part of our subconscious mind is testing those ideas. And the ones that leak into our consciousness might feel like they’re well-formed, but they might have sorted through literally a random array of ideas before they got to our consciousness.

Karl Popper famously said the way we differ from other animals is that our hypotheses die in our stead; rather than going out and actually having to try out things, and maybe dying as a result, we can test out ideas in our minds. But what I want to suggest is that the generative process itself might be pretty close to random.

Putting these two things together has lots of implications for where we’re going as societies. As I say, as our societies get bigger, and rely more and more on the Internet, fewer and fewer of us have to be very good at these creative and imaginative processes. And so, humanity might be moving towards becoming more docile, more oriented towards following, copying others, prone to fads, prone to going down blind alleys, because part of our evolutionary history that we could have never anticipated was leading us towards making use of the small number of other innovations that people come up with, rather than having to produce them ourselves.

The interesting thing with Facebook is that, with 500 to 800 million of us connected around the world, it sort of devalues information and devalues knowledge. And this isn’t the comment of some reactionary who doesn’t like Facebook, but it’s rather the comment of someone who realizes that knowledge and new ideas are extraordinarily hard to come by. And as we’re more and more connected to each other, there’s more and more to copy. We realize the value in copying, and so that’s what we do.

And we seek out that information in cheaper and cheaper ways. We go up on Google, we go up on Facebook, see who’s doing what to whom. We go up on Google and find out the answers to things. And what that’s telling us is that knowledge and new ideas are cheap. And it’s playing into a set of predispositions that we have been selected to have anyway, to be copiers and to be followers. But at no time in history has it been easier to do that than now. And Facebook is encouraging that.

And then, as corporations grow (and we can see corporations as sort of microcosms of societies) and acquire the ability to acquire other corporations, a similar thing is happening: rather than wanting to spend the time and the energy to create new ideas, corporations want simply to acquire other companies, so that they can have their new ideas. And that just tells us again how precious these ideas are, and the lengths to which people will go to acquire those ideas.

A tiny number of ideas can go a long way, as we’ve seen. And the Internet makes that more and more likely. What’s happening is that we might, in fact, be at a time in our history where we’re being domesticated by these great big societal things, such as Facebook and the Internet. We’re being domesticated by them, because fewer and fewer and fewer of us have to be innovators to get by. And so, in the cold calculus of evolution by natural selection, copiers are probably doing better now than at any time in history. Because innovation is extraordinarily hard. My worry is that we could be moving in that direction, towards becoming more and more sort of docile copiers.

But, these ideas, I think, are received with incredulity, because humans like to think of themselves as highly shrewd and intelligent and innovative people. But I think what we have to realize is that it’s even possible that, as I say, the generative mechanisms we have for coming up with new ideas are no better than random.

And a really fascinating idea itself is to consider that even the great people in history whom we associate with great ideas might be no more than we expect by chance. I’ll explain that. Einstein was once asked about his intelligence and he said, “I’m no more intelligent than the next guy. I’m just more curious.” Now, we can grant Einstein that little indulgence, because we think he was a pretty clever guy.

What does curiosity mean?

But let’s take him at his word and say, what does curiosity mean? Well, maybe curiosity means trying out all sorts of ideas in your mind. Maybe curiosity is a passion for trying out ideas. Maybe Einstein’s ideas were just as random as everybody else’s, but he kept persisting at them.

And if we say that everybody has some tiny probability of being the next Einstein, and we look at a billion people, there will be somebody who just by chance is the next Einstein. And so, we might even wonder if the people in our history and in our lives that we say are the great innovators really are more innovative, or are just lucky.

Now, the evolutionary argument is that our populations have always supported a small number of truly innovative people, and they’re somehow different from the rest of us. But it might even be the case that that small number of innovators just got lucky. And this is something that I think very few people will accept. They’ll receive it with incredulity. But I like to think of it as what I call social learning and, maybe, the possibility that we are infinitely stupid.”

Mark Pagel, Professor of Evolutionary Biology, Reading University, England and The Santa Fe Institute, Infinite Stupidity, Edge, Dec 16, 2011 (Illustration by John S. Dykes)

See also:

☞ Mark Pagel: How language transformed humanity



Biologist Mark Pagel shares an intriguing theory about why humans evolved our complex system of language. He suggests that language is a piece of “social technology” that allowed early human tribes to access a powerful new tool: cooperation. Mark Pagel: How language transformed humanity, TED.com, July 2011

The Kaleidoscopic Discovery Engine. ‘All scientific discoveries are in principle ‘multiples’’
Neal Gabler on The Elusive Big Idea - ‘We are living in a post-ideas world where bold ideas are almost passé’

Nov
17th
Thu
permalink

Why Man Creates by Saul Bass (1968)

"Whaddaya doin?” ‘I’m painting the ceiling! Whadda you doin?” “I’m painting the floor!” — the exchange between Michaelangelo and da Vinci

Why Man Creates is a 1968 animated short documentary film which discusses the nature of creativity. It was written by Saul Bass and Mayo Simon, and directed by Saul and Elaine Bass.

The movie won the Academy Award for Documentary Short Subject. An abbreviated version of it ran on the first-ever broadcast of CBS’ 60 Minutes, on September 24, 1968.

Why Man Creates focuses on the creative process and the different approaches taken to that process. It is divided into eight sections: The Edifice, Fooling Around, The Process, Judgment, A Parable, Digression, The Search, and The Mark.

In 2002, this film was selected for preservation in the United States National Film Registry by the Library of Congress as being “culturally, historically, or aesthetically significant”.

Summary

The Edifice begins with early humans hunting. They attempt to conquer their prey with stones, but fail, so they begin to use spears and bait. They kill their prey, and it turns into a cave painting, upon which a building begins to be built. Throughout the rest of the section, the camera tracks upward as the edifice grows ever taller.

Early cavemen begin to discover various things such as the lever, the wheel, ladders, agriculture and fire. It then cuts to clips of early societies and civilizations. It depicts the appearance of the first religions and the advent of organized labor. It then cuts to the Great Pyramids at Giza and depicts the creation of writing.

Soon an army begins to move across the screen chanting “BRONZE,” but they are overrun by an army chanting “IRON”. The screen then depicts early cities and civilizations.

This is followed by a black screen with one man in traditional Greek clothing who states, “All was in chaos ‘til Euclid arose and made order.” Next, various Greek achievements in mathematics are depicted as they build Greek columns around which Greeks discuss items, including, “What is the good life and how do you lead it?” “Who shall rule the state?” “The Philosopher King.” “The Aristocrat.” “The People.” “You mean ALL the people?” “What is the nature of the Good? What is the nature of Justice?” “What is Happiness?”

The culture of ancient Greece fades into the armies of Rome. The organized armies surround the great Roman architecture as they chant “Hail Caesar!” A man at a podium states, “Roman Law is now in session!”, and when he bangs his gavel, the architecture collapses. Dark soldiers begin to pop up from the rubble and eventually cover the whole screen with darkness symbolizing the Dark Ages.

The Dark Ages consist of inaudible whisperings and mumblings. At one point, a light clicks on and an Arab mathematician says, “Allah be praised! I’ve discovered the zero.” at which point his colleague responds, “What?” and he says “Nothing, nothing.” Next come cloistered monks who sing, “What is the shape of the Earth? Flat. What happens when you get to the edge? You fall off. Does the earth move? Never.”

Finally the scene brightens and shows a stained glass window. Various scientists open stained glass doors and say things such as, “The Earth moves!” “The Earth is round!” “The blood circulates!” “There are worlds smaller than ours!” “There are worlds larger than ours!” Each time one opens a door, a large, hairy arm slams the door shut. Finally, the stained glass breaks in the wake of the new Enlightenment.

Next, Michelangelo and da Vinci are depicted. The steam engine is invented, and gears and belts begin to cover everything. The light bulb and steam locomotive are created. Darwin is alluded to as two men hit each other with their canes, arguing over whether man is an animal. The telegraph is invented and psychology created. Next, a small creature hops across the screen saying, “I’m a bug, I’m a germ, I’m a bug, I’m a germ… [indrawn breath] Louis Pasteur! I’m not a bug, I’m not a germ…” Great musicians such as Beethoven are depicted. Alfred Nobel invents dynamite.

Next, the cartooning shows the great speeches and documents on government and society from the American Revolution onward with quotes such as “All men are created equal…”, “Life, liberty and the pursuit of happiness”, “And the Government, by the people,…”, etc. and ends with “One World.”

Finally, the building stops and the Wright Brothers' plane lands on top of it. It is quickly covered in more advanced planes, in cars, in televisions, and finally in early computers. At the top is a radioactive atom which envelops a man in smoke. The Edifice ends with that man yelling, “HELP!”

Fooling Around displays a random series of perspectives and the creative ideas which come from them.

The Process displays a man who is making artwork from a series of geometrical figures. Each time he attempts to keep them in place, they move and rearrange themselves. He tries many different approaches to the problem. Finally he accepts a working configuration and calls his wife to look at it. She says, “All it needs is an American flag.”

Judgment is a series of reactions, presumably to the creation from The Process. It displays their criticisms of it, such as “It represents the decline of Western culture…”, and only a very few support it.

A Parable begins at a ping-pong ball factory. Each ball is made in exactly the same way, and machines test them to get rid of anomalies. As the balls are being tested for their bounce levels, one bounces much higher than the rest. It is placed in a chute which leads to a garbage can outside the factory. It proceeds to bounce across town to a park. Quickly, a cluster of ping-pong balls gathers around it. It keeps bouncing higher and higher, until it doesn’t come back. It concludes with the comment:
“There are some who say he’s coming back and we have only to wait …
There are some who say he burst up there because ball was not meant to fly …
And there are some who maintain he landed safely in a place where balls bounce high …”

Digression is a very short section in which one snail says to another, “Have you ever thought that radical ideas threaten institutions, then become institutions, and in turn reject radical ideas which threaten institutions?” to which the other snail replies “No.” and the first says dejectedly, “Gee, for a minute I thought I had something.”

The Search shows scientists who have been working for years on projects such as solving world hunger, developing a cure for cancer, or questioning the origin of the universe. It then shows a scientist who had worked on a project for 20 years, only to have it simply not work out. Asked what he will do with himself, he replies that he doesn’t know. (Note: each of the scientists shown was working on something which still has not been solved to date, even though each one expected solid results in only a few years. This advances the concept of the section far better than the creators could have known in 1968.)

The Mark asks the question, Why does man create? and determines that man creates to simply state, “I Am.” The film ends by displaying “I Am” written in paint on the side of a building.” — (Wiki)

Nov
12th
Sat
permalink

Non-Western Philosophy. The Ladder, the Museum, and the Web


"In philosophy today, (…) though everyone officially abjures the ladder model of human cultures, it continues to determine much of our reasoning about what counts as philosophy and what does not.

It is worth pointing out that all societies that have produced anything that we are able to easily recognize as philosophy are ladder societies. We might in fact argue, if not here, that philosophy as a discrete domain of activity in a society is itself a side-effect of inequality. The overwhelming authority of the church in medieval Europe, the caste system in ancient India, the control of intellectual life by the mandarin class in ancient China (meritocratically produced by the Confucian examination system, but still elite) present themselves as three compelling examples of the sort of social nexus that has left us with significant philosophical works. (…)

Imagine, for comparison, an archaeologist who has spent a career working on Bronze Age Scandinavia, and then switches to the Mayan or the Indus Valley civilization. Would anyone think to suggest that this scientist is moving from a myopic Eurocentrism to an appreciation for minority cultures and their achievements? Of course not! The archaeologist studies human material culture on the presumption that, within certain parameters, human beings may be found to do more or less the same sorts of thing wherever they reside and whatever phenotype they may have, and moreover that wherever they are found, human cultures have always been linked in complicated, constitutive ways to other cultures, so that in fact the process of ‘globalization’ is coeval with the earliest out-of-Africa migrations. (…)

It seems to me that the progress of the study of the history of material culture might serve as a model for the study of the history of intellectual culture, which in certain times and places has been written down and distilled into what we are able to recognize as ‘philosophy’.

And here we come to the third possible model for thinking about non-Western philosophy: beyond the ladder and the museum, there is the web. This is the same web that has always linked the material cultures of at least Eurasia to one another, whatever distinctive regional flavors might also be discerned. The possibility of approaching the history of intellectual culture in the same way seems particularly auspicious right now, given the recent, very promising results of the so-called cognitive turn in the study of material culture, that is, the turn to the study of cultural artifacts as traces of distributed or exosomatic cognition, as material and intentional at once. So material-cultural history already is intellectual history of a sort, even if it is not the kind that interests philosophers: there is a great gap between stone tools and, say, medieval logic treatises, and different skills are required for studying the one than for the other. But both are material traces of human intention, and both emerge out of particular kinds of societies only. To know them fully is to know what kind of societies are able to produce them.

When we accept this final point – surely the most heterodox, from the point of view of most philosophers – we are for the first time in a position to study and to teach Indian, Chinese, European, and Arabic philosophy alongside one another in a serious and adequate way. When we accept, for example, that all of the great Axial Age civilizations, to use Karl Jaspers’s helpful label, are the product of a single suite of broad historical changes that swept the Eurasian continent, and thus that Chinese, Indian, and Greek thought-worlds are not aboriginal in any meaningful sense (neither are Cree or Huron or Inuit, for that matter, but this can be dealt with another time), then all of a sudden it becomes possible to study, say, the Buddha and his followers not as an expression of some absolutely other Eastern ‘wisdom’, but instead as a local expression of global developments, or as a node in a web. (…)

What makes it so hard to see that this might be the proper approach to the study of the history of philosophy as a global phenomenon is that philosophy is not supposed to work in the same way as folk beliefs. It is supposed to be a pursuit of culture-independent truth. Yet this article of faith has had the awkward and unintended consequence of making the available defenses of the de-Eurocentrization of philosophy – something most in the field hold to be desirable for political reasons – quaint at best and incoherent at worst. If philosophy is independent of culture, then we cannot go, so to speak, underneath the philosophy and examine the broader social dynamics that sustain it. But we need to look at these dynamics in order to see the connections between one tradition and another.

There are, so to speak, tunnels in the basement between India and Greece, but we’re afraid to go down there. And so the result is that we are not so much liberating philosophy from culture, as we are making each culture’s philosophy irreducibly and incomparably its own, just as if it were a matter of displaying folk costumes in some Soviet ethnographic museum, or in the opening ceremonies of the Olympics. This is unscientific, unrigorous, and unacceptable in any other academic discipline.”

Justin E. H. Smith, Ph.D. in Philosophy from Columbia University, teaches philosophy at Concordia University in Montreal and is currently a member of the School of Historical Studies at the Institute for Advanced Study in Princeton.

To read the full essay, click What Is ‘Non-Western’ Philosophy?, Berfrois, Nov 10, 2011.

Oct
25th
Tue
permalink

Iain McGilchrist on The Divided Brain and the Making of the Western World


"Just as the human body represents a whole museum of organs, with a long evolutionary history behind them, so we should expect the mind to be organized in a similar way. (…) We receive along with our body a highly differentiated brain which brings with it its entire history, and when it becomes creative it creates out of this history – out of the history of mankind (…) that age-old natural history which has been transmitted in living form since the remotest times, namely the history of the brain structure."

Carl Jung cited in The Master and His Emissary, Yale University Press, 2009, p.8.

Renowned psychiatrist and writer Iain McGilchrist explains how the ‘divided brain’ has profoundly altered human behaviour, culture and society. He draws on a vast body of recent experimental brain research to reveal that the differences between the brain’s two hemispheres are profound.

The left hemisphere is detail-oriented, prefers mechanisms to living things, and is inclined to self-interest. It misunderstands whatever is not explicit, lacks empathy and is unreasonably certain of itself, whereas the right hemisphere has greater breadth, flexibility and generosity, but lacks certainty.

It is vital that the two hemispheres work together, but McGilchrist argues that the left hemisphere is increasingly taking precedence in the modern world, resulting in a society where a rigid and bureaucratic obsession with structure and self-interest holds sway.

RSA, 17th Nov 2010

Iain McGilchrist points out that the idea that “reason [is] in the left hemisphere and something like creativity and emotion [are] in the right hemisphere” is an unhelpful misconception. He states that “every single brain function is carried out by both hemispheres. Reason and emotion and imagination depend on the coming together of what both hemispheres contribute.” Nevertheless he does see an obvious dichotomy, and asks himself: “if the brain is all about making connections, why is it that it’s evolved with this whopping divide down the middle?”

Natasha Mitchell, "The Master and his Emissary: the divided brain and the reshaping of Western civilisation", ABC Radio National, All in the Mind, 19 June 2010


"The author holds instead that each of the hemispheres of the brain has a different “take” on the world or produces a different “version” of the world, though under normal circumstances these work together. This, he says, is basically to do with attention. He illustrates this with the case of chicks which use the eye connected to the left hemisphere to attend to the fine detail of picking seeds from amongst grit, whilst the other eye attends to the broader threat from predators. According to the author, “The left hemisphere has its own agenda, to manipulate and use the world”; its world view is essentially that of a mechanism. The right has a broader outlook, “has no preconceptions, and simply looks out to the world for whatever might be. In other words it does not have any allegiance to any particular set of values.”

Staff, "Two worlds of the left and right brain (audio podcast)", BBC Radio 4, 14 November 2009

McGilchrist explains this more fully in a later interview for ABC Radio National’s All in the Mind programme, stating: “The right hemisphere sees a great deal, but in order to refine it, and to make sense of it in certain ways — in order to be able to use what it understands of the world and to be able to manipulate the world — it needs to delegate the job of simplifying it and turning it into a usable form to another part of the brain” [the left hemisphere]. Though he sees this as an essential “double act”, McGilchrist points to the problem that the left hemisphere has a “narrow, decontextualised and theoretically based model of the world which is self consistent and is therefore quite powerful” and to the problem of the left hemisphere’s lack of awareness of its own shortcomings; whilst in contrast, the right hemisphere is aware that it is in a symbiotic relationship.

How the brain has shaped our world

"The author describes the evolution of Western culture, as influenced by hemispheric brain functioning, from the ancient world, through the Renaissance and Reformation; the Enlightenment; Romanticism and Industrial Revolution; to the modern and postmodern worlds which, to our detriment, are becoming increasingly dominated by the left brain. According to McGilchrist, interviewed for ABC Radio National’s All in the Mind programme, rather than seeking to explain the social and cultural changes and structure of civilisation in terms of the brain — which would be reductionist — he is pointing to a wider, more inclusive perspective and greater reality in which there are two competing ways of thinking and being, and that in modern Western society we appear increasingly to be able to only entertain one viewpoint: that of the left hemisphere.

The author argues that the brain and the mind do not simply experience the world, but that the world we experience is a product or meeting of that which is outside us with our mind. The outcome, the nature of this world, is thus dependent upon “which mode of attention we bring to bear on the world”.

McGilchrist sees an occasional flowering of "the best of the right hemisphere and the best of the left hemisphere working together" in our history: as witnessed in Athens in the 6th century BC, by activity in the humanities and in science, and in ancient Rome during the Augustan era. However, he also sees that as time passes, the left hemisphere once again comes to dominate affairs and things slide back into “a more theoretical and conceptualised abstracted bureaucratic sort of view of the world”. According to McGilchrist, the cooperative use of both left and right hemispheres diminished and became imbalanced in favour of the left in the time of the classical Greek philosophers Parmenides and Plato and in the late classical Roman era. This cooperation and openness were regained during the Renaissance 1,000 years later, which brought a “sudden efflorescence of creative life in the sciences and the arts”. However, with the Reformation, the early Enlightenment, and what has followed as rationalism has arisen, our world has once again become increasingly rigid, simplified and rule-bound.

Looking at more recent Western history, McGilchrist sees in the Industrial Revolution that for the first time artefacts were being made “very much to the way the left hemisphere sees the world — simple solids that are regular, repeated, not individual in the way that things that are made by hand are” and that a transformation of the environment in a similar vein followed on from that; that what was perceived inwardly was projected outwardly on a mass scale. The author argues that the scientific materialism which developed in the 19th century is still with us, at least in the biological sciences, though he sees physics as having moved on.

McGilchrist does not see modernism and postmodernism as being in opposition to this, but also “symptomatic of a shift towards the left hemisphere’s conception of the world”, taking the idea that there is no absolute truth and turning that into “there is no truth at all”, and he finds some of the movements’ works of art “symptomatic of people whose right hemisphere is not working very well.” McGilchrist cites the American psychologist Louis Sass, author of Madness and Modernism, pointing out that Sass “draws extensive parallels between the phenomena of modernism and postmodernism and of schizophrenia”, with things taken out of context and fragmented.”

The Master and His Emissary, Wiki

The Master and His Emissary

“Whatever the relationship between consciousness and the brain – unless the brain plays no role in bringing the world as we experience it into being, a position that must have few adherents – its structure has to be significant. It might even give us clues to understanding the structure of the world it mediates, the world we know. So, to ask a very simple question, why is the brain so clearly and profoundly divided? Why, for that matter, are the two cerebral hemispheres asymmetrical? Do they really differ in any important sense? If so, in what way? (…)

Enthusiasm for finding the key to hemisphere differences has waned, and it is no longer respectable for a neuroscientist to hypothesise on the subject. (…)

These beliefs could, without much violence to the facts, be characterised as versions of the idea that the left hemisphere is somehow gritty, rational, realistic but dull, and the right hemisphere airy-fairy and impressionistic, but creative and exciting; a formulation reminiscent of Sellar and Yeatman’s immortal distinction (in their parody of English history teaching, 1066 and All That) between the Roundheads – ‘Right and Repulsive’ – and the Cavaliers – ‘Wrong but Wromantic’. In reality, both hemispheres are crucially involved in reason, just as they are in language; both hemispheres play their part in creativity. Perhaps the most absurd of these popular misconceptions is that the left hemisphere, hard-nosed and logical, is somehow male, and the right hemisphere, dreamy and sensitive, is somehow female. (…)

V. S. Ramachandran, another well-known and highly regarded neuroscientist, accepts that the issue of hemisphere difference has been traduced, but concludes: ‘The existence of such a pop culture shouldn’t cloud the main issue – the notion that the two hemispheres may indeed be specialised for different functions. (…)

I believe there is, literally, a world of difference between the hemispheres. Understanding quite what that is has involved a journey through many apparently unrelated areas: not just neurology and psychology, but philosophy, literature and the arts, and even, to some extent, archaeology and anthropology. (…)

I have come to believe that the cerebral hemispheres differ in ways that have meaning. There is a plethora of well-substantiated findings that indicate that there are consistent differences – neuropsychological, anatomical, physiological and chemical, amongst others – between the hemispheres. But when I talk of ‘meaning’, it is not just that I believe there to be a coherent pattern to these differences. That is a necessary first step. I would go further, however, and suggest that such a coherent pattern of differences helps to explain aspects of human experience, and therefore means something in terms of our lives, and even helps explain the trajectory of our common lives in the Western world.

My thesis is that for us as human beings there are two fundamentally opposed realities, two different modes of experience; that each is of ultimate importance in bringing about the recognisably human world; and that their difference is rooted in the bihemispheric structure of the brain. It follows that the hemispheres need to co-operate, but I believe they are in fact involved in a sort of power struggle, and that this explains many aspects of contemporary Western culture. (…)

The brain has evolved, like the body in which it sits, and is in the process of evolving. But the evolution of the brain is different from the evolution of the body. In the brain, unlike in most other human organs, later developments do not so much replace earlier ones as add to, and build on top of, them. Thus the cortex, the outer shell that mediates most so-called higher functions of the brain, and certainly those of which we are conscious, arose out of the underlying subcortical structures which are concerned with biological regulation at an unconscious level; and the frontal lobes, the most recently evolved part of the neocortex, which occupy a much bigger part of the brain in humans than in our animal relatives, and which grow forwards from and ‘on top of’ the rest of the cortex, mediate most of the sophisticated activities that mark us out as human – planning, decision making, perspective taking, self-control, and so on. In other words, the structure of the brain reflects its history: as an evolving dynamic system, in which one part evolves out of, and in response to, another. (…)

If there is after all coherence to the way in which the correlates of our experience are grouped and organised in the brain, and we can see these ‘functions’ forming intelligible wholes, corresponding to areas of experience, and see how they relate to one another at the brain level, then this casts some light on the structure and experience of our mental world. In this sense the brain is – in fact it has to be – a metaphor of the world. (…)

I believe that there are two fundamentally opposed realities rooted in the bihemispheric structure of the brain. But the relationship between them is no more symmetrical than that of the chambers of the heart – in fact, less so; more like that of the artist to the critic, or a king to his counsellor.

There is a story in Nietzsche that goes something like this. There was once a wise spiritual master, who was the ruler of a small but prosperous domain, and who was known for his selfless devotion to his people. As his people flourished and grew in number, the bounds of this small domain spread; and with it the need to trust implicitly the emissaries he sent to ensure the safety of its ever more distant parts. It was not just that it was impossible for him personally to order all that needed to be dealt with: as he wisely saw, he needed to keep his distance from, and remain ignorant of, such concerns. And so he nurtured and trained carefully his emissaries, in order that they could be trusted. Eventually, however, his cleverest and most ambitious vizier, the one he most trusted to do his work, began to see himself as the master, and used his position to advance his own wealth and influence. He saw his master’s temperance and forbearance as weakness, not wisdom, and on his missions on the master’s behalf, adopted his mantle as his own – the emissary became contemptuous of his master. And so it came about that the master was usurped, the people were duped, the domain became a tyranny; and eventually it collapsed in ruins.

The meaning of this story is as old as humanity, and resonates far from the sphere of political history. I believe, in fact, that it helps us understand something taking place inside ourselves, inside our very brains, and played out in the cultural history of the West, particularly over the last 500 years or so. (…)

I hold that, like the Master and his emissary in the story, though the cerebral hemispheres should co-operate, they have for some time been in a state of conflict. The subsequent battles between them are recorded in the history of philosophy, and played out in the seismic shifts that characterise the history of Western culture. At present the domain – our civilisation – finds itself in the hands of the vizier, who, however gifted, is effectively an ambitious regional bureaucrat with his own interests at heart. Meanwhile the Master, the one whose wisdom gave the people peace and security, is led away in chains. The Master is betrayed by his emissary.”

Iain McGilchrist, psychiatrist and writer, The Master and His Emissary, Yale University Press, 2009. Illustrations: 1), 2) Shalmor Avnon Amichay/Y&R Interactive

Iain McGilchrist: The Divided Brain | RSA animated

RSA, 17th Nov 2010

See also:

☞ Iain McGilchrist, The Battle Between the Brain’s Left and Right Hemispheres, WSJ.com, Jan 2, 2010
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
Mind and Brain tag on Lapidarium notes

Sep
20th
Tue
permalink

Human Nature. Sapolsky, Maté, Wilkinson, Gilligan discuss human behavior and the nature vs. nurture debate

In this part of Peter Joseph's documentary Zeitgeist: Moving Forward, “The discussion turns to human behavior and the nature vs. nurture debate. This portion begins with a small clip of Robert Sapolsky summing up the nature vs. nurture debate, which he essentially refers to as a “false dichotomy.” He then states that “it is virtually impossible to understand how biology works outside the context of environment.”

The film then goes on to describe that it is neither nature nor nurture alone that shapes human behavior: both influence it. The interviewed pundits state that even with genetic predispositions to diseases, the expression and manifestation of disease is largely determined by environmental stressors. Disease, criminal activity and addictions are also placed in the same light. One study discussed showed that newborn babies are more likely to die if they are not touched. Another study mentioned claimed to show that stressed women were more likely to have children with addiction disorders. A reference is made to the unborn children who were in utero during the Dutch famine of 1944. The “Dutch Famine Birth Cohort Study" is said to have shown that obesity and other health complications became common problems later in life due to prolonged starvation of the mother during pregnancy.

Comparisons are made by sociologists of criminals in different parts of the world, and of how different cultures with different values can often have more peaceful inhabitants. An Anabaptist sect called the Hutterites is mentioned as never having reported a homicide in any of their societies. The overall conclusion is that social environment and cultural conditioning play a large part in shaping human behavior.”

Zeitgeist: Moving Forward | Human Nature

Dr. Gabor Maté: “Nothing is genetically programmed. There are very rare diseases, a small handful, extremely sparsely represented in the population, that are truly genetically determined. Most complex conditions might have a predisposition that has a genetic component. But a predisposition is not the same as a predetermination. The whole search for the source of diseases in the genome was doomed to failure before anybody even thought of it, because most diseases are not genetically predetermined. Heart disease, cancer, strokes, rheumatoid conditions, autoimmune conditions in general, mental health conditions, addictions, none of them are genetically determined. (…)

That’s an epigenetic effect. “Epi” means on top of, so that the epigenetic influence is what happens environmentally to either activate or deactivate certain genes. (…)

So, the genetic argument is simply a cop-out which allows us to ignore the social and economic and political factors that, in fact, underlie many troublesome behaviors. (…)

If we wish to understand what then makes some people susceptible, we actually have to look at the life experience. The old idea – it’s old but it’s still broadly held – that addictions are due to some genetic cause is simply scientifically untenable. What is actually the case is that certain life experiences make people susceptible. Life experiences that not only shape the person’s personality and psychological needs but also their very brains in certain ways. And that process begins in utero.

It has been shown, for example, that if you stress mothers during pregnancy, their children are more likely to have traits that predispose them to addictions, and that’s because development is shaped by the psychological and social environment. So the biology of human beings is very much affected by and programmed by the life experiences beginning in utero.”

Dr. Robert Sapolsky: “Environment does not begin at birth. Environment begins as soon as you have an environment. As soon as you are a fetus, you are subject to whatever information is coming through mom’s circulation: hormones, levels of nutrients. (…) Be a Dutch Hunger Winter fetus and half a century later, everything else being equal, you are more likely to have high blood pressure, obesity or metabolic syndrome. That is environment coming in a very unexpected place. (…)”

GM: “The point about human development and specifically human brain development is that it occurs mostly under the impact of the environment and mostly after birth. (…)

The concept of Neural Darwinism simply means that the circuits that get the appropriate input from the environment will develop optimally and the ones that don’t will either not develop optimally or perhaps not at all. (…)

There is a significant way in which early experiences shape adult behavior and even and especially early experiences for which there is no recall memory. It turns out that there are two kinds of memory: there is explicit memory which is recall; this is when you can call back facts, details, episodes, circumstances. But the structure in the brain which is called the hippocampus which encodes recall memory doesn’t even begin to develop fully until a year and a half and it is not fully developed until much later, which is why hardly anybody has any recall memory prior to 18 months.

But there is another kind of memory which is called implicit memory which is, in fact, an emotional memory where the emotional impact and the interpretation the child makes of those emotional experiences are ingrained in the brain in the form of nerve circuits ready to fire without specific recall.  So to give you a clear example, people who are adopted have a lifelong sense of rejection very often. They can’t recall the adoption. They can’t recall the separation of the birth mother because there’s nothing there to recall with. But the emotional memory of separation and rejection is deeply embedded in their brains. Hence, they are much more likely to experience a sense of rejection and a great emotional upset when they perceive themselves as being rejected by other people. That’s not unique to people who are adopted but it is particularly strong in them because of this function of implicit memory. (…)

The great British child psychiatrist D. W. Winnicott said that, fundamentally, two things can go wrong in childhood: one is when things happen that shouldn’t happen, and the other is when things that should happen don’t. (…)

The Buddha argued that everything depends on everything else. He says ‘The one contains the many and the many contains the one.’ That you can’t understand anything in isolation from its environment, the leaf contains the sun, the sky and the earth, obviously. This has now been shown to be true, of course all around and specifically when it comes to human development. The modern scientific term for it is the ‘bio-psycho-social’ nature of human development which says that the biology of human beings depends very much on their interaction with the social and psychological environment.

And specifically, the psychiatrist and researcher Daniel Siegel at the University of California, Los Angeles (UCLA) has coined the phrase “Interpersonal Neurobiology”, which means to say that the way our nervous system functions depends very much on our personal relationships: in the first place with the parenting caregivers, in the second place with other important attachment figures in our lives, and in the third place with our entire culture. So you can’t separate the neurological functioning of a human being from the environment in which he or she grew up and continues to exist. And this is true throughout the life cycle. It’s particularly true when you are dependent and helpless, when your brain is developing, but it’s true even in adults and even at the end of life. (…)”

Dr. James Gilligan: “Violence is not universal. It is not symmetrically distributed throughout the human race. There is a huge variation in the amount of violence in different societies. There are some societies that have virtually no violence. There are others that destroy themselves. Some of the Anabaptist religious groups that are complete strict pacifists like the Amish, the Mennonites, the Hutterites, among some of these groups, the Hutterites - there are no recorded cases of homicide.

During our major wars, like World War II, when people were being drafted, they would refuse to serve in the military. They would go to prison rather than serve in the military. In the Kibbutzim in Israel the level of violence is so low that the criminal courts there will often send violent offenders - people who have committed crimes - to live on the Kibbutzim in order to learn how to live a non-violent life. Because that’s the way people live there.

RS: So, we are amply shaped by society. Our societies, in the broader sense, including our theological, our metaphysical, our linguistic influences, etc, our societies help shape us as to whether or not we think life is basically about sin or about beauty; whether the afterlife will carry a price for how we live our lives or if it’s irrelevant. (…)

So, this brings us to a totally impossible juncture, which is to try to make sense, from the perspective of science, of what the nature of human nature is. You know, on a certain level the nature of our nature is not to be particularly constrained by our nature. We come up with more social variability than any species out there. More systems of belief, of styles, of family structures, of ways of raising children. The capacity for variety that we have is extraordinary. (…)

GM: In a society which is predicated on competition and really, very often, the ruthless exploitation of one human being by another, the profiteering off of other people’s problems and very often the creation of problems for the purpose of profiteering, the ruling ideology will very often justify that behavior by appeals to some fundamental and unalterable human nature. So the myth in our society is that people are competitive by nature and that they are individualistic and that they’re selfish. The real reality is quite the opposite. We have certain human needs. The only way that you can talk about human nature concretely is by recognizing that there are certain human needs. We have a human need for companionship and for close contact, to be loved, to be attached to, to be accepted, to be seen, to be received for who we are. If those needs are met, we develop into people who are compassionate and cooperative and who have empathy for other people.

So the opposite, that we often see in our society, is in fact, a distortion of human nature precisely because so few people have their needs met. So, yes, you can talk about human nature but only in the sense of basic human needs that are instinctively evoked or I should say certain human needs that lead to certain traits if they are met and a different set of traits if they are denied.”

— Zeitgeist: Moving Forward - full transcript

Robert Sapolsky - American scientist and author. He is currently Professor of Biological Sciences and Professor of Neurology and Neurological Sciences and, by courtesy, Neurosurgery, at Stanford University.

Gabor Maté - Hungarian-born Canadian physician who specializes in the study and treatment of addiction and is also widely recognized for his unique perspective on Attention Deficit Disorder.

Richard Wilkinson - British researcher in social inequalities in health and the social determinants of health. He is Professor Emeritus of social epidemiology at the University of Nottingham.

James Gilligan - American psychiatrist and author, best known for his series of books entitled Violence, where he draws on 25 years of work in the American prison system to describe the motivation and causes behind violent behaviour. He now lectures at the Department of Psychiatry, New York University.

See also:

Zeitgeist: Moving Forward by Peter Joseph, 2011 (full documentary) (transcript)

Sep
9th
Fri
permalink

Zygmunt Bauman: Europe’s task consists of passing on to all the art of everyone learning from everyone


George Steiner persuades us that the main task facing Europe today is not of a military or economic nature, but rather a “spiritual and intellectual one”:

“The sacredness of the smallest details” is what William Blake would have called the spirit of Europe. It often turns out that in the matter of diversity of language, culture and society, a very small distance, on the order of twenty kilometers, divides two completely different worlds… Europe will perish if it does not fight for its languages, local traditions and social autonomy. If it forgets that “God dwells in details”.

We find similar thoughts in the literary oeuvre of Hans-Georg Gadamer. Of Europe’s exceptional virtues, it is diversity, the wealth of variety, that he places above all others. This abundance of diversity is deemed by him the most precious treasure which Europe managed to save from the conflagrations of the past, to offer to the world today.

To live with Another, live as Another for Another, is the fundamental task of man - both on the highest and the lowest level… therein perhaps dwells that specific advantage of Europe, which could and had to learn the art of living with others.

Friends and neighbours

In Europe, as nowhere else, “Another” always lived very close, within sight or within touch; metaphorically for certain, since always in closeness of spirit - but often also literally, in a corporeal sense. Here “Another” is the closest neighbour, and so Europeans must negotiate the conditions of this neighbourhood despite the differences which divide them. The European landscape, says Gadamer, characterized by polyglotism and the close proximity of “Another” in a severely restricted space, can be seen as a research laboratory, or a school, from which the rest of the world can carry away knowledge or skills which determine our survival or doom.

It is impossible to overestimate the weight of this task, or the determination with which Europe should undertake it, if (to echo Gadamer once more) the condition sine qua non for the solution of the life problems of the contemporary world is friendship and “cheerful solidarity”.

Upon undertaking this task, we can, and should, look for inspiration to shared European heritage: for the ancient Greeks, the word “friend”, according to Gadamer, described the ”totality of social life”. Friends are people capable and desirous of an amiable mutual relationship unconcerned by the differences between them, and keen to help one another on account of those differences; capable and willing to act with kindliness and generosity without letting go of their distinctness - at the same time taking care that that distinctness should not create a distance between them, or turn them against one another.

Fusion of Horizons

It would follow from all of the above that all of us Europeans, precisely because of the many differences between us and the differences with which we endow our shared European home in terms of the variety of experiences and forms of life shaped by them, are perfectly suited to become friends in the sense given to friendship by the Ancient Greeks, the forefathers of Europe: not by sacrificing that which is dear to our hearts, but by offering it to neighbours near and far, just as they offer us, as generously, that which is dear to their hearts.

Gadamer pointed out that the path to understanding leads through a “fusion of horizons”. If that which each human agglomeration regards as truth is the basis of their collective experience, then the horizons surrounding their field of vision are also the boundaries of collective truths.

The European Union is our chance of such a fusion. It is after all our shared laboratory in which, consciously or not, willingly or not, we fuse group horizons, widening them all in the process. To use a metaphor different to Gadamer’s: by joint effort and for the benefit of all, we forge out of the great variety of types of ore we bring into the laboratory an amalgam of values, ideals and intentions which may be agreeable and useful to all; if all goes well, it may display our shared values, ideals and intentions. And it just so happens, even unbeknown to us, that in the course of all this work each ore becomes finer and more valuable - which we will, sooner or later, inevitably acknowledge for ourselves.

Wisdom lost in Translation

This is protracted work; its progress is slow, and fast results are not to be expected.
But the process could be quickened, and results achieved faster, by consciously and consistently helping the horizons to fuse. Nothing stands in the way of fusion, and nothing slows it down, as much as the confusion of languages inherited from those who built the Tower of Babel. The European Union acknowledges as many as 23 languages as “official”. But in the different countries of the Union, people read, write and think in Catalan, Basque, Welsh, Breton, Scottish Gaelic, Kashubian, Lappish, Roma, a host of provincial Italian dialects (apologies for the inevitable omissions - impossible to list them all…).

So much inaccessible human wisdom hides in the experiences written down in foreign dialects. One of the most significant, though by no means the only, components of this hidden wisdom is the awareness of how astonishingly similar are the cares, hopes and experiences of parents, children, spouses and neighbours, bosses and subordinates, “insiders” and “outsiders”, friends and enemies - no matter in what language they were described…

A pressing, if after all rhetorical, question comes to mind: how much wisdom would we all have gained, how much would our co-existence have benefited, had part of the Union’s funds been devoted to the translation of members’ writings… Personally, I am convinced that it would have been perhaps the best investment in the future of Europe and the success of its mission.”

Zygmunt Bauman, world-renowned Polish sociologist and philosopher, one of the creators of the “postmodernism” concept, Professor of sociology at the University of Leeds. Culture in Modern Liquid Times, 2011 (Translated by Lydia Bauman), cited in European Culture Congress, Sep 8, 2011

See also:

Retransmission of Prof. Zygmunt Bauman’s Lecture at European Culture Congress, Sep 8, 2011, Wrocław, Poland (video)
In an era of global interconnectedness, what is the nature of cross-cultural exchange?
Zygmunt Bauman: ‘Modern society stopped questioning itself’