Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization


Pensieri a caso
Photography
A Box Of Stories
Reading Space
Homepage

Twitter
Facebook

Contact

Archive

Mar
3rd
Sun
permalink

Rolf Dobelli: News is to the mind what sugar is to the body


"We humans seem to be natural-born signal hunters, we’re terrible at regulating our intake of information. We’ll consume a ton of noise if we sense we may discover an added ounce of signal. So our instinct is at war with our capacity for making sense.”

Nicholas Carr, A little more signal, a lot more noise, Rough Type, May 30, 2012.

"When people struggle to describe the state that the Internet puts them in they arrive at a remarkably familiar picture of disassociation and fragmentation. Life was once whole, continuous, stable; now it is fragmented, multi-part, shimmering around us, unstable and impossible to fix. The world becomes Keats’s “waking dream,” as the writer Kevin Kelly puts it.”

Adam Gopnik on The Information and How the Internet gets inside us, 2011

"Our brains are wired to pay attention to visible, large, scandalous, sensational, shocking, peoplerelated, story-formatted, fast changing, loud, graphic onslaughts of stimuli. Our brains have limited attention to spend on more subtle pieces of intelligence that are small, abstract, ambivalent, complex, slow to develop and quiet, much less silent. News organizations systematically exploit this bias. News media outlets, by and large, focus on the highly visible. They display whatever information they can convey with gripping stories and lurid pictures, and they systematically ignore the subtle and insidious, even if that material is more important. News grabs our attention; that’s how its business model works. Even if the advertising model didn’t exist, we would still soak up news pieces because they are easy to digest and superficially quite tasty. The highly visible misleads us. (…)

  • Terrorism is overrated. Chronic stress is underrated.
  • The collapse of Lehman Brothers is overrated. Fiscal irresponsibility is underrated.
  • Astronauts are overrated. Nurses are underrated.
  • Britney Spears is overrated. IPCC reports are underrated.
  • Airplane crashes are overrated. Resistance to antibiotics is underrated.

(…)

Afraid you will miss “something important”? From my experience, if something really important happens, you will hear about it, even if you live in a cocoon that protects you from the news. Friends and colleagues will tell you about relevant events far more reliably than any news organization. They will fill you in with the added benefit of meta-information, since they know your priorities and you know how they think. You will learn far more about really important events and societal shifts by reading about them in specialized journals, in-depth magazines or good books and by talking to the people who know. (…)

The more “news factoids” you digest, the less of the big picture you will understand. (…)

Thinking requires concentration. Concentration requires uninterrupted time. News items are like free-floating radicals that interfere with clear thinking. News pieces are specifically engineered to interrupt you. They are like viruses that steal attention for their own purposes. (…)

This is about the inability to think clearly because you have opened yourself up to the disruptive factoid stream. News makes us shallow thinkers. But it’s worse than that. News severely affects memory. (…)

News is an interruption system. It seizes your attention only to scramble it. Besides a lack of glucose in your blood stream, news distraction is the biggest barricade to clear thinking. (…)

In the words of Professor Michael Merzenich (University of California, San Francisco), a pioneer in the field of neuroplasticity: “We are training our brains to pay attention to the crap.” (…)

Good professional journalists take time with their stories, authenticate their facts and try to think things through. But like any profession, journalism has some incompetent, unfair practitioners who don’t have the time – or the capacity – for deep analysis. You might not be able to tell the difference between a polished professional report and a rushed, glib, paid-by-the-piece article by a writer with an ax to grind. It all looks like news.

My estimate: fewer than 10% of the news stories are original. Less than 1% are truly investigative. And only once every 50 years do journalists uncover a Watergate.

Many reporters cobble together the rest of the news from other people’s reports, common knowledge, shallow thinking and whatever the journalist can find on the internet. Some reporters copy from each other or refer to old pieces, without necessarily catching up with any interim corrections. The copying and the copying of the copies multiply the flaws in the stories and their irrelevance. (…)

Overwhelming evidence indicates that forecasts by journalists and by experts in finance, social development, global conflicts and technology are almost always completely wrong. So, why consume that junk?

Did the newspapers predict World War I, the Great Depression, the sexual revolution, the fall of the Soviet empire, the rise of the Internet, resistance to antibiotics, the fall of Europe’s birth rate or the explosion in depression cases? Maybe, you’d find one or two correct predictions in a sea of millions of mistaken ones. Incorrect forecasts are not only useless, they are harmful.

To increase the accuracy of your predictions, cut out the news and roll the dice or, if you are ready for depth, read books and knowledgeable journals to understand the invisible generators that affect our world. (…)

I have now gone without news for a year, so I can see, feel and report the effects of this freedom first hand: less disruption, more time, less anxiety, deeper thinking, more insights. It’s not easy, but it’s worth it.”

Table of Contents:

No 1 – News misleads us systematically
No 2 – News is irrelevant
No 3 – News limits understanding
No 4 – News is toxic to your body
No 5 – News massively increases cognitive errors
No 6 – News inhibits thinking
No 7 – News changes the structure of your brain
No 8 – News is costly
No 9 – News sunders the relationship between reputation and achievement
No 10 – News is produced by journalists
No 11 – Reported facts are sometimes wrong, forecasts always
No 12 – News is manipulative
No 13 – News makes us passive
No 14 – News gives us the illusion of caring
No 15 – News kills creativity

Rolf Dobelli, Swiss novelist, writer, entrepreneur and curator of zurich.minds; to read the full essay, see Avoid News: Towards a Healthy News Diet (pdf), 2010. (Illustration: Information Overload by taylorboren)

See also:

The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks
Nicholas Carr on the evolution of communication technology and our compulsive consumption of information
Does Google Make Us Stupid?
Nicholas Carr on what the internet is doing to our brains?
How the Internet Affects Our Memories: Cognitive Consequences of Having Information at Our Fingertips
☞ Dr Paul Howard-Jones, The impact of digital technologies on human wellbeing (pdf), University of Bristol
William Deresiewicz on multitasking and the value of solitude
Information tag on Lapidarium

Jan
27th
Sun
permalink

Daniel C. Dennett on an attempt to understand the mind; autonomic neurons, culture and computational architecture


"What Darwin and Turing had both discovered, in their different ways, was the existence of competence without comprehension."

— Daniel C. Dennett, What Darwin’s theory of evolution teaches us about Alan Turing and artificial intelligence, Lapidarium

"I’m trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine. This is called homuncular functionalism, because you take the whole person. You break the whole person down into two or three or four or seven sub persons that are basically agents. They’re homunculi, and this looks like a regress, but it’s only a finite regress, because you take each of those in turn and you break it down into a group of stupider, more specialized homunculi, and you keep going until you arrive at parts that you can replace with a machine, and that’s a great way of thinking about cognitive science. It’s what good old-fashioned AI tried to do and still trying to do.

The idea is basically right, but when I first conceived of it, I made a big mistake. I was at that point enamored of the McCulloch-Pitts logical neuron. McCulloch and Pitts had put together the idea of a very simple artificial neuron, a computational neuron, which had multiple inputs and a single branching output and a threshold for firing, and the inputs were either inhibitory or excitatory. They proved that in principle a neural net made of these logical neurons could compute anything you wanted to compute. So this was very exciting. It meant that basically you could treat the brain as a computer and treat the neuron as a sort of basic switching element in the computer, and that was certainly an inspiring over-simplification. Everybody knew it was an over-simplification, but people didn’t realize how much, and more recently it’s become clear to me that it’s a dramatic over-simplification, because each neuron, far from being a simple logical switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.
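[An illustrative aside, not part of Dennett’s text: below is a minimal sketch of the McCulloch-Pitts "logical neuron" described above, in its common textbook rendering where excitatory inputs count +1, inhibitory inputs count -1, and the unit fires when the weighted sum reaches a threshold. The particular weights and thresholds are made up for illustration; the point is that small networks of such units can implement arbitrary Boolean functions.]

    # A McCulloch-Pitts unit: binary inputs, fixed +1/-1 weights, and a firing threshold.
    def mp_neuron(inputs, weights, threshold):
        # Fire (return 1) if the weighted sum of the binary inputs reaches the threshold.
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Elementary logic gates built from single units -- the sense in which nets
    # of such "logical neurons" can, in principle, compute any Boolean function.
    AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
    OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
    NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

    assert AND(1, 1) == 1 and AND(1, 0) == 0
    assert OR(0, 1) == 1 and OR(0, 0) == 0
    assert NOT(0) == 1 and NOT(1) == 0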

The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and that have to be properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it’s fed by a lot of different currents.

Evolutionary biologist David Haig has some lovely papers on intrapersonal conflicts where he’s talking about how even at the level of the genetics, even at the level of the conflict between the genes you get from your mother and the genes you get from your father, the so-called madumnal and padumnal genes, those are in opponent relations and if they get out of whack, serious imbalances can happen that show up as particular psychological anomalies.

We’re beginning to come to grips with the idea that your brain is not this well-organized hierarchical control system where everything is in order, a very dramatic vision of bureaucracy. In fact, it’s much more like anarchy with some elements of democracy. Sometimes you can achieve stability and mutual aid and a sort of calm united front, and then everything is hunky-dory, but then it’s always possible for things to get out of whack and for one alliance or another to gain control, and then you get obsessions and delusions and so forth.

You begin to think about the normal well-tempered mind, in effect, the well-organized mind, as an achievement, not as the base state, something that is only achieved when all is going well, but still, in the general realm of humanity, most of us are pretty well put together most of the time. This gives a very different vision of what the architecture is like, and I’m just trying to get my head around how to think about that. (…)

The vision of the brain as a computer, which I still champion, is changing so fast. The brain’s a computer, but it’s so different from any computer that you’re used to. It’s not like your desktop or your laptop at all, and it’s not like your iPhone except in some ways. It’s a much more interesting phenomenon. What Turing gave us for the first time (and without Turing you just couldn’t do any of this) is a way of thinking in a disciplined way about phenomena that have, as I like to say, trillions of moving parts. Until the late 20th century, nobody knew how to take seriously a machine with a trillion moving parts. It’s just mind-boggling.

You couldn’t do it, but computer science gives us the ideas, the concepts of levels, virtual machines implemented in virtual machines implemented in virtual machines and so forth. We have these nice ideas of recursive reorganization of which your iPhone is just one example and a very structured and very rigid one at that.

We’re getting away from the rigidity of that model, which was worth trying for all it was worth. You go for the low-hanging fruit first. First, you try to make minds as simple as possible. You make them as much like digital computers, as much like von Neumann machines, as possible. It doesn’t work. Now we know pretty well why it doesn’t work. So you’re going to have a parallel architecture because, after all, the brain is obviously massively parallel.

It’s going to be a connectionist network. Although we know many of the talents of connectionist networks, how do you knit them together into one big fabric that can do all the things minds do? Who’s in charge? What kind of control system? Control is the real key, and you begin to realize that control in brains is very different from control in computers. Control in your commercial computer is very much a carefully designed top-down thing.

You really don’t have to worry about one part of your laptop going rogue and trying out something on its own that the rest of the system doesn’t want to do. No, they’re all slaves. If they’re agents, they’re slaves. They are prisoners. They have very clear job descriptions. They get fed every day. They don’t have to worry about where the energy’s coming from, and they’re not ambitious. They just do what they’re asked to do and do it brilliantly with only the slightest tint of comprehension. You get all the power of computers out of these mindless little robotic slave prisoners, but that’s not the way your brain is organized.

Each neuron is imprisoned in your brain. I now think of these as cells within cells, as cells within prison cells. Realize that every neuron in your brain, every human cell in your body (leaving aside all the symbionts), is a direct descendant of eukaryotic cells that lived and fended for themselves for about a billion years as free-swimming, free-living little agents. They fended for themselves, and they survived.

They had to develop an awful lot of know-how, a lot of talent, a lot of self-protective talent to do that. When they joined forces into multi-cellular creatures, they gave up a lot of that. They became, in effect, domesticated. They became part of larger, more monolithic organizations. My hunch is that that’s true in general. We don’t have to worry about our muscle cells rebelling against us, or anything like that. When they do, we call it cancer, but in the brain I think that (and this is my wild idea) maybe only in one species, us, and maybe only in the obviously more volatile parts of the brain, the cortical areas, some little switch has been thrown in the genetics that, in effect, makes our neurons a little bit feral, a little bit like what happens when you let sheep or pigs go feral, and they recover their wild talents very fast.

Maybe a lot of the neurons in our brains are not just capable but, if you like, motivated to be more adventurous, more exploratory or risky in the way they comport themselves, in the way they live their lives. They’re struggling amongst themselves with each other for influence, just for staying alive, and there’s competition going on between individual neurons. As soon as that happens, you have room for cooperation to create alliances, and I suspect that a more free-wheeling, anarchic organization is the secret of our greater capacities of creativity, imagination, thinking outside the box and all that, and the price we pay for it is our susceptibility to obsessions, mental illnesses, delusions and smaller problems.

We got risky brains that are much riskier than the brains of other mammals even, even more risky than the brains of chimpanzees, and that this could be partly a matter of a few simple mutations in control genes that release some of the innate competitive talent that is still there in the genomes of the individual neurons. But I don’t think that genetics is the level to explain this. You need culture to explain it.

'Culture creates a whole new biosphere'

This, I speculate, is a response to our invention of culture; culture creates a whole new biosphere, in effect, a whole new cultural sphere of activity where there’s opportunities that don’t exist for any other brain tissues in any other creatures, and that this exploration of this space of cultural possibility is what we need to do to explain how the mind works.

Everything I just said is very speculative. I’d be thrilled if 20 percent of it was right. It’s an idea, a way of thinking about brains and minds and culture that is, to me, full of promise, but it may not pan out. I don’t worry about that, actually. I’m content to explore this, and if it turns out that I’m just wrong, I’ll say, “Oh, okay. I was wrong. It was fun thinking about it,” but I think I might be right.

I’m not myself equipped to work on a lot of the science; other people could work on it, and they already are in a way. The idea of selfish neurons has already been articulated by Sebastian Seung of MIT in a brilliant keynote lecture he gave at Society for Neuroscience in San Diego a few years ago. I thought, oh, yeah, selfish neurons, selfish synapses. Cool. Let’s push that and see where it leads. But there are many ways of exploring this. One of the still unexplained, so far as I can tell, and amazing features of the brain is its tremendous plasticity.

Mike Merzenich sutured a monkey’s fingers together so that it didn’t need as much cortex to represent two separate individual digits, and pretty soon the cortical regions that were representing those two digits shrank, making that part of the cortex available to use for other things. When the sutures were removed, the cortical regions soon resumed pretty much their earlier dimensions. If you blindfold yourself for eight weeks, as Alvaro Pascual-Leone does in his experiments, you find that your visual cortex starts getting adapted for Braille, for haptic perception, for touch.

The way the brain spontaneously reorganizes itself in response to trauma of this sort, or just novel experience, is itself one of the most amazing features of the brain, and if you don’t have an architecture that can explain how that could happen and why that is, your model has a major defect. I think you really have to think in terms of individual neurons as micro-agents, and ask what’s in it for them?

Why should these neurons be so eager to pitch in and do this other work just because they don’t have a job? Well, they’re out of work. They’re unemployed, and if you’re unemployed, you’re not getting your neuromodulators. If you’re not getting your neuromodulators, your neuromodulator receptors are going to start disappearing, and pretty soon you’re going to be really out of work, and then you’re going to die.

In this regard, I think of John Holland’s work on the emergence of order. His example is New York City. You can always find a place where you can get gefilte fish, or sushi, or saddles or just about anything under the sun you want, and you don’t have to worry about a state bureaucracy that is making sure that supplies get through. No. The market takes care of it. The individual web of entrepreneurship and selfish agency provides a host of goods and services, and is an extremely sensitive instrument that responds to needs very quickly.

Until the lights go out. Well, we’re all at the mercy of the power man. I am quite concerned that we’re becoming hyper-fragile as a civilization, and we’re becoming so dependent on technologies that are not as reliable as they should be, that have so many conditions that have to be met for them to work, that we may specialize ourselves into some very serious jams. But in the meantime, thinking about the self-organizational powers of the brain as very much like the self-organizational powers of a city is not a bad idea. It just reeks of over-enthusiastic metaphor, though, and it’s worth reminding ourselves that this idea has been around since Plato.

Plato analogizes the mind of a human being to the state. You’ve got the rulers and the guardians and the workers. This idea that a person is made of lots of little people is comically simpleminded in some ways, but that doesn’t mean it isn’t, in a sense, true. We shouldn’t shrink from it just because it reminds us of simpleminded versions that have been long discredited. Maybe some not so simpleminded version is the truth.

There are a lot of cultural fleas

My next major project will be trying to take another hard look at cultural evolution and look at the different views of it and see if I can achieve a sort of bird’s eye view and establish what role, if any, there is for memes or something like memes, and what other forces are operating. We are going to have to have a proper scientific perspective on cultural change. The old-fashioned, historical narratives are wonderful, and they’re full of gripping detail, and they’re even sometimes right, but they only cover a small proportion of the phenomena. They only cover the tip of the iceberg.

Basically, the model that we have and have used for several thousand years is the model that culture consists of treasures, cultural treasures. Just like money, or like tools and houses, you bequeath them to your children, and you amass them, and you protect them, and because they’re valuable, you maintain them and prepare them, and then you hand them on to the next generation and some societies are rich, and some societies are poor, but it’s all goods. I think that vision is true of only the tip of the iceberg.

Most of the regularities in culture are not treasures. It’s not all opera and science and fortifications and buildings and ships. It includes all kinds of bad habits and ugly patterns and stupid things that don’t really matter but that somehow have got a grip on a society and that are part of the ecology of the human species in the same way that mud, dirt and grime and fleas are part of the world that we live in. They’re not our treasures. We may give our fleas to our children, but we’re not trying to. It’s not a blessing. It’s a curse, and I think there are a lot of cultural fleas. There are lots of things that we pass on without even noticing that we’re doing it and, of course, language is a prime case of this, very little deliberate intentional language instruction goes on or has to go on.

Kids that are raised with parents pointing out individual objects and saying, “See, it’s a ball. It’s red. Look, Johnny, it’s a red ball, and this is a cow, and look at the horsy” learn to speak, but so do kids who don’t have that patient instruction. You don’t have to do that. Your kids are going to learn ball and red and horsy and cow just fine without that, even if they’re quite severely neglected. That’s not a nice observation to make, but it’s true. It’s almost impossible not to learn language if you don’t have some sort of serious pathology in your brain.

Compare that with chimpanzees. There are hundreds of chimpanzees who have spent their whole lives in human captivity. They’ve been institutionalized. They’ve been like prisoners, and in the course of the day they hear probably about as many words as a child does. They never show any interest. They never apparently get curious about what those sounds are for. They can hear all the speech, but it’s like the rustling of the leaves. It just doesn’t register on them as worth attention.

But kids are tuned for that, and it might be a very subtle tuning. I can imagine a few small genetic switches, which, if they were just in a slightly different position, would make chimpanzees just as pantingly eager to listen to language as human babies are, but they’re not, and what a difference it makes in their world! They never get to share discoveries the way we do and to share our learning. That, I think, is the single feature about human beings that distinguishes us most clearly from all others: we don’t have to reinvent the wheel. Our kids get the benefit of not just what grandpa and grandma and great grandpa and great grandma knew. They get the benefit of basically what everybody in the world knew in the years when they go to school. They don’t have to invent calculus or long division or maps or the wheel or fire. They get all that for free. It just comes as part of the environment. They get incredible treasures, cognitive treasures, just by growing up. (…)

A lot of naïve thinking by scientists about free will

Moving Naturalism Forward" was a nice workshop that Sean Carroll put together out in Stockbridge a couple of weeks ago, and it was really interesting. I learned a lot. I learned more about how hard it is to do some of these things and that’s always useful knowledge, especially for a philosopher.

If we take seriously, as I think we should, the role that Socrates proposed for us as midwives of thinking, then we want to know what the blockades are, what the imagination blockades are, what people have a hard time thinking about, and among the things that struck me about the Stockbridge conference were the signs of people really having a struggle to take seriously some ideas which I think they should take seriously. (…)

I realized I really have my work cut out for me in a way that I had hoped not to discover. There’s still a lot of naïve thinking by scientists about free will. I’ve been talking about it quite a lot, and I do my best to undo some bad thinking by various scientists. I’ve had some modest success, but there’s a lot more that has to be done on that front. I think it’s very attractive to scientists to think that here’s this several-millennia-old philosophical idea, free will, and they can just hit it out of the ballpark, which I’m sure would be nice if it was true.

It’s just not true. I think they’re well intentioned. They’re trying to clarify, but they’re really missing a lot of important points. I want a naturalistic theory of human beings and free will and moral responsibility as much as anybody there, but I think you’ve got to think through the issues a lot better than they’ve done, and this, happily, shows that there’s some real work for philosophers.

Philosophers have done some real work that the scientists jolly well should know. Here’s an area where it was one of the few times in my career when I wanted to say to a bunch of scientists, “Look. You have some reading to do in philosophy before you hold forth on this. There really is some good reading to do on these topics, and you need to educate yourselves.”

A combination of arrogance and cravenness

The figures about American resistance to evolution are still depressing, and you finally have to realize that there’s something structural. It’s not that people are stupid, and I think it’s clear that people, everybody, me, you, we all have our authorities, our go-to people whose word we trust. If you have a question about the economic situation in Greece, for instance, you need to check it out with somebody whose opinion on that you think is worth taking seriously. We don’t try to work it out for ourselves. We find some expert that we trust, and right around the horn, whatever the issues are, we have our experts, and so a lot of people have as their experts on matters of science, they have their pastors. This is their local expert.

I don’t blame them. I wish they were more careful about vetting their experts and making sure that they found good experts. They wouldn’t choose an investment advisor, I think, as thoughtlessly as they go along with their pastor. I blame the pastors, but where do they get their ideas? Well, they get them from the hierarchies of their churches. Where do they get their ideas? Up at the top, I figure there’s some people that really should be ashamed of themselves. They know better.

They’re lying, and when I get a chance, I try to ask them that. I say, “Doesn’t it bother you that your grandchildren are going to want to know why you thought you had to lie to everybody about evolution?” I mean, really. They’re lies. They’ve got to know that these are lies. They’re not that stupid, and I just would love them to worry about what their grandchildren and great grandchildren would say about how their ancestors were so craven and so arrogant. It’s a combination of arrogance and cravenness.

We now have to start working on that structure of experts and thinking, why does that persist? How can it be that so many influential, powerful, wealthy, in-the-public people can be so confidently wrong about evolutionary biology? How did that happen? Why does it happen? Why does it persist? It really is a bit of a puzzle if you think about how they’d be embarrassed not to know that the world is round. I think that would be deeply embarrassing to be that benighted, and they’d realize it. They’d be embarrassed not to know that HIV is the vector of AIDS. They’d be embarrassed to not understand the way the tides are produced by the gravitational forces of the moon and the sun. They may not know the details, but they know that the details are out there. They could learn them in 20 minutes if they wanted to. How did they get themselves in the position where they could so blithely trust people who they’d never buy stocks and bonds from? They’d never trust a child’s operation to a doctor that was as ignorant and as ideological as these people. It is really strange. I haven’t got to the bottom of that. (…)

This pernicious sort of lazy relativism

[T]here’s a sort of enforced hypocrisy where the pastors speak from the pulpit quite literally, and if you weren’t listening very carefully, you’d think: oh my gosh, this person really believes all this stuff. But they’re putting in just enough hints for the sophisticates in the congregation so that the sophisticates are supposed to understand: Oh, no. This is all just symbolic. This is all just metaphorical. And that’s the way they want it, but of course, they could never admit it. You couldn’t put a little neon sign up over the pulpit that says, “Just metaphor, folks, just metaphor.” It would destroy the whole thing.

You can’t admit that it’s just metaphor even when you insist when anybody asks that it’s just metaphor, and so this professional doubletalk persists, and if you study it for a while the way Linda [pdf] and I have been doing, you come to realize that’s what it is, and that means they’ve lost track of what it means to tell the truth. Oh, there are so many different kinds of truth. Here’s where postmodernism comes back to haunt us. What a pernicious bit of intellectual vandalism that movement was! It gives license to this pernicious sort of lazy relativism.

One of the most chilling passages in that great book by William James, The Varieties of Religious Experience, is where he talks about soldiers in the military: "Far better is it for an army to be too savage, too cruel, too barbarous, than to possess too much sentimentality and human reasonableness.” This is, to me, a very sobering reflection. Let’s talk about when we went into Iraq. There was Rumsfeld saying, “Oh, we don’t need a big force. We don’t need a big force. We can do this on the cheap,” and there were other people, retrospectively we can say they were wiser, who said, “Look, if you’re going to do this at all, you want to go in there with such overpowering, such overwhelming numbers and force that you can really intimidate the population, and you can really maintain the peace and just get the population to sort of roll over, and that way actually less people get killed, less people get hurt. You want to come in with an overwhelming show of force.”

The principle is actually one that’s pretty well understood. If you don’t want to have a riot, have four times more police there than you think you need. That’s the way not to have a riot and nobody gets hurt because people are not foolish enough to face those kinds of odds. But they don’t think about that with regard to religion, and it’s very sobering. I put it this way.

Suppose that we face some horrific, terrible enemy, another Hitler or something really, really bad, and here’s two different armies that we could use to defend ourselves. I’ll call them the Gold Army and the Silver Army; same numbers, same training, same weaponry. They’re all armored and armed as well as we can do. The difference is that the Gold Army has been convinced that God is on their side and this is the cause of righteousness, and it’s as simple as that. The Silver Army is entirely composed of economists. They’re all making side insurance bets and calculating the odds of everything.

Which army do you want on the front lines? It’s very hard to say you want the economists, but think of what that means. What you’re saying is we’ll just have to hoodwink all these young people into some false beliefs for their own protection and for ours. It’s extremely hypocritical. It is a message that I recoil from, the idea that we should indoctrinate our soldiers. In the same way that we inoculate them against diseases, we should inoculate them against the economists’—or philosophers’—sort of thinking, since it might lead them to think: am I so sure this cause is just? Am I really prepared to risk my life to protect it? Do I have enough faith in my commanders that they’re doing the right thing? What if I’m clever enough and thoughtful enough to figure out a better battle plan, and I realize that this is futile? Am I still going to throw myself into the trenches? It’s a dilemma that I don’t know what to do about, although I think we should confront it at least.”

Daniel C. Dennett is University Professor, Professor of Philosophy, and Co-Director of the Center for Cognitive Studies at Tufts University, The normal well-tempered mind, Edge, Jan 8, 2013.

'The Intentional Stance'

"Dennett favours the theory (first suggested by Richard Dawkins) that our social learning has given us a second information highway (in addition to the genetic highway) where the transmission of variant cultural information (memes) takes place via differential replication. Software viruses, for example, can be understood as memes, and as memes evolve in complexity, so does human cognition: “The mind is the effect, not the cause.” (…)

Daniel Dennett: "Natural selection is not gene centrist and nor is biology all about genes, our comprehending minds are a result of our fast evolving culture. Words are memes that can be spoken and words are the best example of memes. Words have a genealogy and it’s easier to trace the evolution of a single word than the evolution of a language." (…)

I don’t like theory of mind. I coined the phrase The Intentional Stance. [Dennett’s Intentional Stance encompasses attributing feelings, memories and beliefs to others as well as mindreading and predicting what someone will do next.] Do you need a theory to ride a bike? (…)

Riding a bike is a craft – you don’t need a theory. Autistic people might need a theory with which to understand other minds, but the rest of us don’t. If a human is raised without social interaction and without language they would be hugely disabled and probably lacking in empathy.”

Daniel C. Dennett, Daniel Dennett: ‘I don’t like theory of mind’ – interview, The Guardian, 22 March 2013.

See also:

Steven Pinker on the mind as a system of ‘organs of computation’, Lapidarium notes
Quantum minds: Why we think like quarks - ‘To be human is to be quantum’, Lapidarium notes
Human Connectome Project: understanding how different parts of the brain communicate to each other
How Free Is Your Will?, Lapidarium notes
Susan Blackmore on memes and “temes”
Mind & Brain tag on Lapidarium notes

Dec
11th
Tue
permalink

Researchers discover surprising complexities in the way the brain makes mental maps

Spatial location is closely connected to the formation of new memories. Until now, grid cells were thought to be part of a single unified map system. New findings from the Norwegian University of Science and Technology demonstrate that the grid system is in fact composed of a number of independent grid maps, each with unique properties. Each map displays a particular resolution (mesh size), and responds independently to changes in the environment. A system of several distinct grid maps (illustrated on left) can support a large number of unique combinatorial codes used to associate new memories formed with specific spatial information (illustrated on right).

Your brain has at least four different senses of location – and perhaps as many as 10. And each is different, according to new research from the Kavli Institute for Systems Neuroscience, at the Norwegian University of Science and Technology. (…)

The findings, published in the 6 December 2012 issue of Nature, show that rather than just a single sense of location, the brain has a number of “modules” dedicated to self-location. Each module contains its own internal GPS-like mapping system that keeps track of movement, and has other characteristics that also distinguish one from another.

"We have at least four senses of location," says Edvard Moser, director of the Kavli Institute. "Each has its own scale for representing the external environment, ranging from very fine to very coarse. The different modules react differently to changes in the environment. Some may scale the brain’s inner map to the surroundings, others do not. And they operate independently of each other in several ways."

This is also the first time that researchers have been able to show that a part of the brain that does not directly respond to sensory input, called the association cortex, is organized into modules. The research was conducted using rats. (…)

Technical breakthroughs

A rat’s brain is the size of a grape, while the area that keeps track of the sense of location and memory is comparable in size to a small grape seed. This tiny area holds millions of nerve cells.

A research team of six people worked for more than four years to acquire extensive electrophysiological measurements in this seed-sized region of the brain. New measurement techniques and a technical breakthrough made it possible for Hanne Stensola and her colleagues to measure the activity in as many as 186 grid cells of the same rat brain. A grid cell is a specialized cell named for its characteristic of creating hexagonal grids in the brain’s mental map of its surroundings.

"We knew that the ‘grid maps’ in this area of the brain had resolutions covering different scales, but we did not know how independent the scales were of each other," Stensola said. "We then discovered that the maps were organized in four to five modules with different scales, and that each of these modules reacted slightly differently to changes in their environment. This independence can be used by the brain to create new combinations - many combinations - which is a very useful tool for memory formation.

After analysing the activity of nearly 1000 grid cells, researchers were able to conclude that the brain has not just one way of making an internal map of its location, but several. Perhaps 10 different senses of location.

Perhaps 10 different senses of location

The entorhinal cortex is a part of the neocortex that represents space by way of brain cells that have GPS-like properties. Each cell describes the environment as a hexagonal grid mesh, earning them the name ‘grid cells’. The panels show a bird’s-eye view of a rat’s recorded movements (grey trace) in a 2.2x2.2 m box. Each panel shows the activity of one grid cell (blue dots) with a particular map resolution as the animal moved through the environment. Credit: Kavli Institute for Systems Neuroscience, NTNU

Institute director Moser says that while researchers are able to state with confidence that there are at least four different location modules, and have seen clear evidence of a fifth, there may be as many as 10 different modules.

He says, however, that researchers need to conduct more measurements before they will have covered the entire grid-cell area. “At this point we have measured less than half of the area,” he says.

Aside from the time and challenges involved in making these kinds of measurements, there is another good reason why researchers have not yet completed this task. The lower region of the sense of location area, the entorhinal cortex, has a resolution that is so coarse or large that it is virtually impossible to measure it.

"The thinking is that the coordinate points for some of these maps are as much as ten metres apart," explains Moser. "To measure this we would need to have a lab that is quite a lot larger and we would need time to test activity over the entire area. We work with rats, which run around while we make measurements from their brain. Just think how long it would take to record the activity in a rat if it was running back and forth exploring every nook and cranny of a football field. So you can see that we have some challenges here in scaling up our experiments."

New way to organize

Part of what makes the discovery of the grid modules so special is that it completely changes our understanding of how the brain physically organizes abstract functions. Previously, researchers have shown that brain cells in sensory systems that are directly adjacent to each other tend to have the same response pattern. This is how they have been able to create detailed maps of which parts of the sensory brain do what.

The new research shows that a modular organization is also found in the highest parts of the cortex, far away from areas devoted to senses or motor outputs. But these maps are different in the sense that they overlap or infiltrate each other. It is thus not possible to locate the different modules with a microscope, because the cells that work together are intermingled with other modules in the same area.

“The various components of the grid map are not organized side by side,” explains Moser. “The various components overlap. This is the first time a brain function has been shown to be organized in this way at separate scales. We have uncovered a new way for neural network function to be distributed.”

A map and a constant

The researchers were surprised, however, when they started calculating the difference between the scales. They may have discovered an ingenious mathematical coding system, along with a number, a constant. (Anyone who has read or seen “The Hitchhiker’s Guide to the Galaxy” may enjoy this.) The scale for each sense of location is actually 42% larger than the previous one.

“We may not be able to say with certainty that we have found a mathematical constant for the way the brain calculates the scales for each sense of location, but it’s very funny that we have to multiply each measurement by 1.42 to get the next one. That is approximately equal to the square root of the number two,” says Moser.
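[An illustrative note, not from the article: the module scales described here form a roughly geometric progression with ratio about 1.42, which is approximately the square root of 2, so each map is roughly 42% coarser than the one below it. The base scale in the sketch is hypothetical.]

    import math

    ratio = 1.42          # reported scale factor between neighbouring grid modules
    base_scale_m = 0.5    # hypothetical scale of the finest module, in metres

    # Successive module scales if each is ~42% larger than the previous one.
    scales = [round(base_scale_m * ratio ** k, 2) for k in range(5)]
    print(scales)                                           # [0.5, 0.71, 1.01, 1.43, 2.03]
    print(math.isclose(ratio, math.sqrt(2), rel_tol=0.01))  # True: 1.42 is close to sqrt(2) = 1.414...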

Maps are genetically encoded

Moser thinks it is striking that the relationship between the various functional modules is so orderly. He believes this orderliness shows that the way the grid map is organized is genetically built in, and not primarily the result of experience and interaction with the environment.

So why has evolution equipped us with four or more senses of location?

Moser believes the ability to make a mental map of the environment arose very early in evolution. He explains that all species need to navigate, and that some types of memory may have arisen from brain systems that were actually developed for the brain’s sense of location.

“We see that the grid cells that are in each of the modules send signals to the same cells in the hippocampus, which is a very important component of memory,” explains Moser. “This is, in a way, the next step in the line of signals in the brain. In practice this means that the location cells send a different code into the hippocampus at the slightest change in the environment in the form of a new pattern of activity. So every tiny change results in a new combination of activity that can be used to encode a new memory, and, with input from the environment, becomes what we call memories.”

Researchers discover surprising complexities in the way the brain makes mental maps, Medical Xpress, Dec 5, 2012.

The article is a part of doctoral research conducted by Hanne and Tor Stensola, and has been funded through an Advanced Investigator Grant that Edvard Moser was awarded by the European Research Council (ERC).

See also:

☞ Hanne Stensola, Tor Stensola, Trygve Solstad, Kristian Frøland, May-Britt Moser & Edvard I. Moser, The entorhinal grid map is discretized, Nature, 5 Dec 2012.
Mind & brain tag on Lapidarium notes

Nov
18th
Sun
permalink

Human Brain Is Wired for Harmony


Musical score based on the neurological activity of a 31-year-old woman. Image: Lu et al./PLoS One

"Since the days of the ancient Greeks, scientists have wondered why the ear prefers harmony. Now, scientists suggest that the reason may go deeper than an aversion to the way clashing notes abrade auditory nerves; instead, it may lie in the very structure of the ear and brain, which are designed to respond to the elegantly spaced structure of a harmonious sound. (…) If the chord is harmonic, or “consonant,” the notes are spaced neatly enough so that the individual fibers of the auditory nerve carry specific frequencies to the brain. By perceiving both the parts and the harmonious whole, the brain responds to what scientists call harmonicity. (…)

“Beating is the textbook explanation for why people don’t like dissonance, so our study is the first real evidence that goes against this assumption” (…) It suggests that consonance rests on the perception of harmonicity, and that, when questioning the innate nature of these preferences, one should study harmonicity and not beating.” (…)

"Sensitivity to harmonicity is important in everyday life, not just in music,” he notes. For example, the ability to detect harmonic components of sound allows people to identify different vowel sounds, and to concentrate on one conversation in a noisy crowd.”
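[A toy illustration of harmonicity, with made-up numbers rather than data from the studies cited below: the partials of a consonant interval such as a perfect fifth (frequency ratio 3:2) all fall on integer multiples of a shared fundamental, whereas the partials of a dissonant interval such as a tritone fit no such series nearly as well.]

    # Score how well a set of partials fits integer multiples of a candidate fundamental (1.0 = perfect fit).
    def partials(f0, n=8):
        return [f0 * k for k in range(1, n + 1)]

    def harmonicity(freqs, candidate_f0):
        errs = [abs(f / candidate_f0 - round(f / candidate_f0)) for f in freqs]
        return 1 - sum(errs) / len(errs)

    fifth   = partials(200.0) + partials(300.0)   # ratio 3:2 -> shared fundamental at 100 Hz
    tritone = partials(200.0) + partials(282.8)   # ratio ~1:sqrt(2) -> no neat shared fundamental

    print(round(harmonicity(fifth, 100.0), 2))    # 1.0: every partial is a multiple of 100 Hz
    print(round(harmonicity(tritone, 100.0), 2))  # about 0.87: the upper tone's partials miss the 100 Hz grid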

See also:

☞ M. Cousineau, J. H. McDermott, I. Peretz, The basis of musical consonance as revealed by congenital amusia (2012)
☞ S. Leino, E. Brattico, M. Tervaniemi, P. Vuust, Representation of harmony rules in the human brain: Further evidence from event-related potentials, 2007
☞ Brandon Keim, Listen: The Music of a Human Brain, Wired Science, Nov 15, 2012.

Aug
22nd
Wed
permalink

The Nature of Consciousness: How the Internet Could Learn to Feel

       

“The average human brain has a hundred billion neurons and synapses on the order of a hundred trillion or so. But it’s not just sheer numbers. It’s the incredibly complex and specific ways in which these things are wired up. That’s what makes it different from a gigantic sand dune, which might have a billion particles of sand, or from a galaxy. Our Milky Way, for example, contains a hundred billion suns, but the way these suns interact is very simple compared to the way neurons interact with each other. (…)

It doesn’t matter so much that you’re made out of neurons and bones and muscles. Obviously, if we lose neurons in a stroke or in a degenerative disease like Alzheimer’s, we lose consciousness. But in principle, what matters for consciousness is the fact that you have these incredibly complicated little machines, these little switching devices called nerve cells and synapses, and they’re wired together in amazingly complicated ways.

The Internet now already has a couple of billion nodes. Each node is a computer. Each one of these computers contains a couple of billion transistors, so it is in principle possible that the complexity of the Internet is such that it feels like something to be conscious. I mean, that’s what it would be if the Internet as a whole has consciousness. Depending on the exact state of the transistors in the Internet, it might feel sad one day and happy another day, or whatever the equivalent is in Internet space. (…)
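[A back-of-the-envelope comparison, using only the round numbers quoted above, to show that sheer component count is not what separates the two systems:]

    brain_synapses       = 1e14   # "synapses on the order of a hundred trillion"
    internet_nodes       = 2e9    # "a couple of billion nodes", each a computer
    transistors_per_node = 2e9    # "a couple of billion transistors" per computer

    internet_transistors = internet_nodes * transistors_per_node
    print(f"{internet_transistors:.0e} transistors vs {brain_synapses:.0e} synapses")
    # -> 4e+18 transistors vs 1e+14 synapses; as Koch stresses, what matters is not
    #    the number of parts but how specifically they are wired up and interact.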

What I’m serious about is that the Internet, in principle, could have conscious states. Now, do these conscious states express happiness? Do they express pain? Pleasure? Anger? Red? Blue? That really depends on the exact kind of relationship between the transistors, the nodes, the computers. It’s more difficult to ascertain what exactly it feels. But there’s no question that in principle it could feel something. (…)

Q: Would humans recognize that certain parts of the Internet are conscious? Or is that beyond our understanding?

That’s an excellent question. If we had a theory of consciousness, we could analyze it and say yes, this entity, this simulacrum, is conscious. Or because it displays independent behavior. At some point, suddenly it develops some autonomous behavior that nobody programmed into it, right? Then, people would go, “Whoa! What just happened here?” It just sort of self-organized in some really weird way. It wasn’t a bug. It wasn’t a virus. It wasn’t a botnet that was paid for by some nefarious organization. It did it by itself. If this autonomous behavior happens on a regular basis, then I think many people would say, yeah, I guess it’s alive in some sense, and it may have conscious sensation. (…)

Q: How do you define consciousness?

Typically, it means having subjective states. You see something. You hear something. You’re aware of yourself. You’re angry. You’re sad. Those are all different conscious states. Now, that’s not a very precise definition. But if you think historically, almost every scientific field has a working definition and the definitions are subject to change. For example, my Caltech colleague Michael Brown has redefined planets. So Pluto is not a planet anymore, right? Because astronomers got together and decided that. And what’s a gene? A gene is very tricky to define. Over the last 50 years, people have had all sorts of changing definitions. Consciousness is not easy to define, but don’t worry too much about the definition. Otherwise, you get trapped in endless discussions about what exactly you mean. It’s much more important to have a working definition, run with it, do experiments, and then modify it as necessary. (…)

I see a universe that’s conducive to the formation of stable molecules and to life. And I do believe complexity is associated with consciousness. Therefore, we seem to live in a universe that’s particularly conducive to the emergence of consciousness. That’s why I call myself a “romantic reductionist.”

Christof Koch, American neuroscientist working on the neural basis of consciousness, Professor of Cognitive and Behavioral Biology at California Institute of Technology, The Nature of Consciousness: How the Internet Could Learn to Feel, The Atlantic, Aug 22, 2012. (Illustration: folkert: Noosphere)

See also:

Google and the Myceliation of Consciousness
Richard Doyle on Creativity, evolution of mind and the rhetorical membrane between humans and an informational universe
Consciousness tag on Lapidarium

Jul
21st
Sat
permalink

What Neuroscience Tells Us About Morality: 'Morality is a form of decision-making, and is based on emotions, not logic'

           

"Morality is not the product of a mythical pure reason divorced from natural selection and the neural wiring that motivates the animal to sociability. It emerges from the human brain and its responses to real human needs, desires, and social experience; it depends on innate emotional responses, on reward circuitry that allows pleasure and fear to be associated with certain conditions, on cortical networks, hormones and neuropeptides. Its cognitive underpinnings owe more to case-based reasoning than to conformity to rules.”

Patricia Smith Churchland, Canadian-American philosopher and neuroscientist noted for her contributions to neurophilosophy and the philosophy of mind, in John Bickle, The Oxford Handbook of Philosophy and Neuroscience, Chapter 16 "Inference to the best decision", Oxford Handbooks, 2009, p.419.

"Although many philosophers used to dismiss the relevance of neuroscience on grounds that what mattered was “the software, not the hardware”, increasingly philosophers have come to recognize that understanding how the brain works is essential to understanding the mind."

Patricia Smith Churchland, introductory message at her homepage at the University of California, San Diego.

"Morality is a form of decision-making, and is based on emotions, not logic."

Jonah Lehrer, cited in delancey place, 2009

"Philosophers must take account of neuroscience in their investigations.

While [Patricia S.] Churchland's intellectual opponents over the years have suggested that you can understand the “software” of thinking, independently of the “hardware”—the brain structure and neuronal firings—that produced it, she has responded that this metaphor doesn't work with the brain: Hardware and software are intertwined to such an extent that all philosophy must be “neurophilosophy.” There’s no other way.

Churchland, professor emerita of philosophy at the University of California at San Diego, has been best known for her work on the nature of consciousness. But now, with a new book, Braintrust: What Neuroscience Tells Us About Morality (Princeton University Press), she is taking her perspective into fresh terrain: ethics. And the story she tells about morality is, as you’d expect, heavily biological, emphasizing the role of the peptide oxytocin, as well as related neurochemicals.

Oxytocin’s primary purpose appears to be in solidifying the bond between mother and infant, but Churchland argues—drawing on the work of biologists—that there are significant spillover effects: Bonds of empathy lubricated by oxytocin expand to include, first, more distant kin and then other members of one’s in-group. (Another neurochemical, arginine vasopressin, plays a related role, as do endogenous opiates, which reinforce the appeal of cooperation by making it feel good.)

The biological picture contains other elements, of course, notably our large prefrontal cortexes, which help us to take stock of situations in ways that lower animals, driven by “fight or flight” impulses, cannot. But oxytocin and its cousin-compounds ground the human capacity for empathy. (When she learned of oxytocin’s power, Churchland writes in Braintrust, she thought: “This, perhaps, Hume might accept as the germ of ‘moral sentiment.’”)

From there, culture and society begin to make their presence felt, shaping larger moral systems: tit-for-tat retaliation helps keep freeloaders and abusers of empathic understanding in line. Adults pass along the rules for acceptable behavior—which is not to say “just” behavior, in any transcendent sense—to their children. Institutional structures arise to enforce norms among strangers within a culture, who can’t be expected to automatically trust each other.

These rules and institutions, crucially, will vary from place to place, and over time. “Some cultures accept infanticide for the disabled or unwanted,” she writes, without judgment. “Others consider it morally abhorrent; some consider a mouthful of the killed enemy’s flesh a requirement for a courageous warrior, others consider it barbaric.”

Hers is a bottom-up, biological story, but, in her telling, it also has implications for ethical theory. Morality turns out to be not a quest for overarching principles but rather a process and practice not very different from negotiating our way through day-to-day social life. Brain scans, she points out, show little to no difference between how the brain works when solving social problems and how it works when solving ethical dilemmas. (…)

The biological story fits well, [Churchland] thinks, with Aristotle’s argument that morality is not about rule-making but instead about the cultivation of moral sentiment through experience, training, and the following of role models. It also confirms, she thinks, David Hume’s assertion that reason and the emotions cannot be disentangled. This view stands in sharp contrast to those philosophers who argue that instinctual reactions must be scrutinized by reason. The villains of her books are philosophical system-builders—whether that means Jeremy Bentham, with his ideas about maximizing aggregate utility (“the greatest good for the greatest number”), or Immanuel Kant, with his categorical imperatives (never lie!), or John Rawls, erector of A Theory of Justice.

Churchland thinks the search for what she invariably calls “exceptionless rules” has deformed modern moral philosophy. “There have been a lot of interesting attempts, and interesting insights, but the target is like perpetual youth or a perpetual-motion machine. You’re not going to find an exceptionless rule,” she says. “What seems more likely is that there is a basic platform that people share and that things shape themselves based on that platform, and based on ecology, and on certain needs and certain traditions.”

The upshot of that approach? “Sometimes there isn’t an answer in the moral domain, and sometimes we have to agree to disagree, and come together and arrive at a good solution about what we will live with.”

Owen Flanagan Jr., a professor of philosophy and neurobiology at Duke University and a friend of Churchland’s, adds, “There’s a long tradition in philosophy that morality is based on rule-following, or on intuitions that only specially positioned people can have. One of her main points is that that is just a completely wrong picture of the genealogical or descriptive story. The first thing to do is to emphasize our continuity with the animals.” In fact, Churchland believes that primates and even some birds have a moral sense, as she defines it, because they, too, are social problem-solvers.

Recognizing our continuity with a specific species of animal was a turning point in her thinking about morality, in recognizing that it could be tied to the hard and fast. “It all changed when I learned about the prairie voles,” she says—surely not a phrase John Rawls ever uttered.

She told the story at the natural-history museum, in late March. Montane voles and prairie voles are so similar “that naifs like me can’t tell them apart,” she told a standing-room-only audience (younger and hipper than the museum’s usual patrons—the word “neuroscience” these days is like catnip). But prairie voles mate for life, and montane voles do not. Among prairie voles, the males not only share parenting duties, they will even lick and nurture pups that aren’t their own. By contrast, male montane voles do not actively parent even their own offspring. What accounts for the difference? Researchers have found that the prairie voles, the sociable ones, have greater numbers of oxytocin receptors in certain regions of the brain. (And prairie voles that have had their oxytocin receptors blocked will not pair-bond.)

"As a philosopher, I was stunned," Churchland said, archly. "I thought that monogamous pair-bonding was something one determined for oneself, with a high level of consideration and maybe some Kantian reasoning thrown in. It turns out it is mediated by biology in a very real way.”

The biologist Sue Carter, now at the University of Illinois at Chicago, did some of the seminal work on voles, but oxytocin research on humans is now extensive as well. In a study of subjects playing a lab-based cooperative game in which the greatest benefits to two players would come if the first (the “investor”) gave a significant amount of money to the second (the “trustee”), subjects who had oxytocin sprayed into their noses donated more than twice as often as a control group, giving nearly one-fifth more each time.

Paul Zak, an economist at Claremont Graduate University, was an author of that study, as well as others that Churchland cites. He is working on a book called “The Moral Molecule” and describes himself as “in exactly the same camp” as Churchland.

“Oxytocin works on the level of emotion,” he says. “You just get the feeling of right and wrong. It is less precise than a Kantian system, but it’s consistent with our evolved physiology as social creatures.”

The City University of New York Graduate Center philosopher Jesse Prinz, who appeared with Churchland at a Columbia University event the night after her museum lecture, has mostly praise for Churchland’s latest offering. “If you look at a lot of the work that’s been done on scientific approaches to morality—books written for a lay audience—it’s been about evolutionary psychology. And what we get again and again is a story about the importance of evolved tendencies to be altruistic. That’s a report on a particular pattern of behavior, and an evolutionary story to explain the behavior. But it’s not an account of the underlying mechanism. The idea that science has moved to a point where we can see two animals working together toward a collective end and know the brain mechanism that allows that is an extraordinary achievement.”

Nevertheless, he says, how to move from the possibility of collective action to “the specific human institution of moral rules is a bit of connective tissue that she isn’t giving us.”

Indeed, that’s one of the most striking aspects of Braintrust. After Churchland establishes the existence of a platform for moral decision-making, she describes the process through which moral decisions come to be made, but she says little about their content—why one path might be better than another. She offers the following description of a typical “moral” scenario. A farmer sees a deer breaching his neighbor’s fence and eating his apples while the neighbor is away. The farmer will not consult a Kantian rule book before deciding whether to help, she writes, but instead will weigh an array of factors: Would I want my neighbor to help me? Does my culture find such assistance praiseworthy or condescending? Am I faced with any pressing emergencies on my own farm? Churchland describes this process of moral decision-making as being driven by “constraint satisfaction.”

"What exactly constraint satisfaction is in neurobiological terms we do not yet understand,” she writes, “but roughly speaking it involves various factors with various weights and probabilities interacting so as to produce a suitable solution to a question.”

"Various" factors with "various" weights? Is that not a little vague? But Duke’s Owen Flanagan Jr. defends this highly pragmatic view of morality. "Where we get a lot of pushback from philosophers is that they’ll say, ‘If you go this naturalistic route that Flanagan and Churchland go, then you make ethics merely a theory of prudence.’ And the answer is, Yeah, you kind of do that. Morality doesn’t become any different than deciding what kind of bridge to build across a river. The reason we both think it makes sense is that the other stories”—that morality comes from God, or from philosophical intuition—”are just so implausible.”

Flanagan also thinks Churchland’s approach leads to a “more democratic” morality. "It’s ordinary people discussing the best thing to do in a given situation, given all the best information available at the moment." Churchland herself often underscores that democratic impulse, drawing on her own biography. She grew up on a farm, in the Okanagan Valley, in British Columbia. Speaking of her onetime neighbors, she says: "I got as much wisdom from some of those old farmers as I ever got from a seminar on moral philosophy.”

If building a bridge is the topic up for discussion, however, one can assume that most people think getting across the water is a sound idea. Yet mainstream philosophers object that such a sense of shared purpose cannot always be assumed in moral questions—and that therefore the analogy fails. (…)

Guy Kahane, a philosopher at Oxford, says the complexity of human life demands a more intense and systematic analysis of moral questions than the average citizen might be capable of, at least if she’s limited to the basic tool kit of social skills.

Peter Railton, a philosophy professor at the University of Michigan at Ann Arbor, agrees. Our intuitions about how to get along with other people may have been shaped by our interactions within small groups (and between small groups). But we don’t live in small groups anymore, so we need some procedures through which we leverage our social skills into uncharted areas—and that is what the traditional academic philosophers, whom Churchland mostly rejects, work on. What are our obligations to future generations (concerning climate change, say)? What do we owe poor people on the other side of the globe (whom we might never have heard of, in our evolutionary past)?

For a more rudimentary example, consider that evolution quite likely trained us to treat “out groups” as our enemy. Philosophical argument, Railton says, can give reasons why members of the out-group are not, in fact, the malign and unusual creatures that we might instinctively think they are; we can thereby expand our circle of empathy.

Churchland’s response is that someone is indeed likely to have the insight that constant war against the out-group hurts both sides’ interests, but she thinks a politician, an economist, or a farmer-citizen is as likely to have that insight as a professional philosopher. (…)

But isn’t she, right there, sneaking in some moral principles that have nothing to do with oxytocin, namely the primacy of liberty over equality? In our interviews, she described [Peter] Singer’s worldview as, in an important sense, unnatural. Applying the same standard to distant foreigners as we do to our own kith and kin runs counter to our most fundamental biological impulses.

But Oxford’s Kahane offers a counterargument: “‘Are humans capable of utilitarianism?’ is not a question that is answered by neuroscience,” he says. “We just need to test if people are able to live like that. Science may explain whether it is common for us to do, but that’s very different from saying what our limits are.”

Indeed, Peter Singer lives (more or less) the way he preaches, and chapters of an organization called Giving What We Can, whose members pledge to give a large portion of their earnings to charity, have popped up on several campuses. “If I can prevent hundreds of people from dying while still having the things that make life meaningful to me, that strikes me as a good idea that doesn’t go against ‘paradigmatically good sense’ or anything,” says Nick Beckstead, a fourth-year graduate student in philosophy and a founder of the group’s Rutgers chapter.

Another target in Churchland’s book is Jonathan Haidt, the University of Virginia psychologist who thinks he has identified several universal “foundations” of moral thought: protection of society’s vulnerable; fairness; loyalty to the in-group; respect for authority; and the importance of purity (a sanitary concern that evolves into the cultural ideal of sanctity). That strikes her as a nice list, but no more—a random collection of moral qualities that isn’t at all rooted in biology. During her museum talk, she described Haidt’s theory as a classic just-so story. “Maybe in the 70s, when evolutionary psychology was just becoming a thing, you could get away with saying”—here she adopted a flighty, sing-song voice—’It could have been, out there on the veldt, in Africa, 250,000 years ago that these were traits that were selected,’” she said. “But today you need evidence, actually.” (…)

The element of cultural relativism also remains somewhat mysterious in Churchland’s writings on morality. In some ways, her project dovetails with that of Sam Harris, the “New Atheist” (and neuroscience Ph.D.) who believes reason and neuroscience can replace woolly armchair philosophy and religion as guides to morality. But her defense of some practices of primitive tribes, including infanticide (in the context of scarcity)—as well as the seizing of enemy women, in raids, to keep up the stock of mates—as “moral” within their own context, seems the opposite of his approach.

I reminded Churchland, who has served on panels with Harris, that he likes to put academics on the spot by asking whether they think a practice such as the early 19th-century Hindu tradition of burning widows on their husbands’ funeral pyres was objectively wrong.

So did she think so? First, she got irritated: “I don’t know why you’re asking that.” But, yes, she finally said, she does think that practice objectively wrong. “But frankly I don’t know enough about their values, and why they have that tradition, and I’m betting that Sam doesn’t either.”

"The example I like to use," she said, "rather than using an example from some other culture and just laughing at it, is the example from our own country, where it seems to me that the right to buy assault weapons really does not work for the well-being of most people. And I think that’s an objective matter."

At times, Churchland seems just to want to retreat from moral philosophical debate back to the pure science. “Really,” she said, “what I’m interested in is the biological platform. Then it’s an open question how we attack more complex problems of social life.”

— Christopher Shea writing about Patricia Smith Churchland, Canadian-American philosopher and neuroscientist noted for her contributions to neurophilosophy and the philosophy of mind, in Rule Breaker, The Chronicle of Higher Education, June 12, 2011. (Illustration: attributed to xkcd)

See also:

Jesse Prinz: Morality is a Culturally Conditioned Response
Sam Harris on the ‘selfish gene’ and moral behavior
Sam Harris on the moral formula: How facts inform our ethics
Morality tag on Lapidarium

Jun
20th
Wed
permalink

The crayola-fication of the world: How we gave colors names, and it messed with our brains

     

"We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organize it in this way—an agreement that holds throughout our speech community and is codified in the patterns of our language (…) all observers are not led by the same physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar”

Benjamin Whorf, Science and linguistics, first published in 1940 in MIT Technology Review [See also: linguistic relativity]

"The implication is that language may affect how we see the world. Somehow, the linguistic distinction between blue and green may heighten the perceived difference between them. (…)

If you have a word to distinguish two colors, does that make you any better at telling them apart? More generally, does the linguistic baggage that we carry affect how we perceive the world? This study was designed to address Whorf’s idea head on.

As it happens, Whorf was right. Or rather, he was half right.

The researchers found that there is a real, measurable difference in how we perform on these two tasks. In general, it takes less time to spot the odd blue square among greens than to spot an odd green square among slightly different greens. This makes sense to anyone who’s ever tried looking for a tennis ball in the grass. It’s not that hard, but I’d rather the ball be blue. In one case you are jumping categories (blue versus green), and in the other, staying within a category (green versus green).

However, and this is where things start to get a bit odd, this result only holds if the differently colored square was in the right half of the circle. If it was in the left half (…), then there’s no difference in reaction times – it takes just as long to spot the odd blue as the odd green.  It seems that color categories only matter in the right half of your visual field! (…)

It’s easier to tell apart colors with different names, but only if they are to your right. Keep in mind that this is a very subtle effect, the difference in reaction time is a few hundredths of a second.

So what’s causing this lopsidedness?  Well, if you know something about how the brain works, you might have already guessed. The crucial point is that everything that we see in the right half of our vision is processed in the left hemisphere of our brain, and everything we see in the left half is processed by the right hemisphere. And for most of us, the left brain is stronger at processing language. So perhaps the language savvy half of our brain is helping us out.
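As a rough illustration of the design being described, the sketch below (Python) summarizes reaction times by visual field and by whether the odd square crosses the blue/green category boundary. The numbers are invented purely to mirror the qualitative pattern reported above (a cross-category advantage of a few hundredths of a second, and only in the right visual field); they are not data from the study.

from statistics import mean

# (visual_field, relation, reaction_time_in_seconds) -- invented values only
trials = [
    ("right", "cross-category", 0.435), ("right", "cross-category", 0.441),
    ("right", "within-category", 0.472), ("right", "within-category", 0.468),
    ("left",  "cross-category", 0.455), ("left",  "cross-category", 0.461),
    ("left",  "within-category", 0.458), ("left",  "within-category", 0.452),
]

for field in ("right", "left"):
    cross  = mean(rt for f, rel, rt in trials if f == field and rel == "cross-category")
    within = mean(rt for f, rel, rt in trials if f == field and rel == "within-category")
    # Positive values mean the cross-category (blue among green) target was found faster.
    print(f"{field} visual field: category advantage = {within - cross:+.3f} s")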

    

It’s not just English speakers that show this asymmetry. Koreans are familiar with the colors yeondu and chorok. An English speaker would call them both green (yeondu perhaps being a more yellowish green). But in Korean it’s not a matter of shade, they are both basic colors. There is no word for green that includes both yeondu and chorok.

      
To the left of the dotted line is yeondu, and to the right chorok. Is it still as easy to spot the odd square in the circle?

And so imagine taking the same color ID test, but this time with yeondu and chorok instead of blue and green. A group of researchers ran this experiment. They discovered that among those who were the fastest at identifying the odd color, English speakers showed no left brain / right brain distinction, whereas Korean speakers did. It’s plausible that their left brain was attuned to the distinction between yeondu and chorok.

But how do we know that language is the key here? Back to the previous study. The researchers repeated the color circle experiment, but this time threw in a verbal distraction. The subjects were asked to memorize a word before each color test. The idea was to keep their language circuits distracted. And at the same time, other subjects were shown an image to memorize, not a word. In this case, it’s a visual distraction, and the language part of the brain needn’t be disturbed.

They found that when you’re verbally distracted, it suddenly becomes harder to separate blue from green (you’re slower at straddling color categories). In fact the results showed that people found this more difficult than separating two shades of green. However, if the distraction is visual, not verbal, things are different. It’s easy to spot the blue among green, so you’re faster at straddling categories.

All of this is only true for your left brain. Meanwhile, your right brain is rather oblivious to these categories (until, of course, the left brain bothers to inform it). The conclusion is that language is somehow enhancing your left brain’s ability to discern different colors with different names. Cultural forces alter our perception in ever so subtle a way, by gently tugging our visual leanings in different directions. Oddly enough, Whorf was right, but only when it comes to half your brain.

Imagine a world without color names. You lived in such a world once, when you were an infant. Do you remember what it was like? Anna Franklin is a psychologist who is particularly interested in where color categories come from. She studies color recognition in infants, as a window into how the brain organizes color.

Here she is discussing her work in this incredible clip from a BBC Horizon documentary called ‘Do you see what I see?’. (…) It starts off with infants, and then cuts to the Himba tribe who have a highly unusual color naming system. You’ll see them taking the color wheel test, with very surprising results.

Surprisingly, many children take a remarkably long time to learn their color names. By the time they can name dozens of objects, they still struggle with basic colors. A two year old may know that a banana is yellow or an apple is red, but if you show them a blue cup, odds are even that they’ll call it red. And this confusion can persist even after encountering hundreds of examples, until as late as the age of four. There have been studies that show that very young sighted children are as likely to identify a color correctly as blind children of the same age. They rely on their experience, rather than recognize the color outright. (…)

The big question is when children learn their color words, does their perception of the world change? Anna Franklin (who we met in the video above) and colleagues took on this question. Working with toddlers aged two to four, they split them into two groups. There were the namers, who could reliably distinguish blue from green, and the politely-named learners, who couldn’t. The researchers repeated the color circle experiment on these children. Rather than have them press a button (probably not a good idea), they tracked the infants’ eyes to see how long it took them to spot the odd square. (…)

As toddlers learn the names of colors, a remarkable transformation is taking place inside their heads. Before they learn their color names, they are better at distinguishing color categories in their right brain (Left Visual Field). In a sense, their right brain understands the difference between blue and green, even before they have the words for it. But once they acquire words for blue and green, this ability jumps over to the left brain (Right Visual Field).

Think about what that means. As infant brains are rewiring themselves to absorb our visual language, the seat of categorical processing jumps hemispheres from the right brain to the left. And it stays there throughout adulthood. Their brains are furiously re-categorizing the world, until mysteriously, something finally clicks into place. So the next time you see a toddler struggling with their colors, don’t be like Darwin, and cut them some slack. They’re going through a lot.”

Aatish Bhatia, Ph.D. at Rutgers University, The crayola-fication of the world: How we gave colors names, and it messed with our brains (part II), Empirical Zeal, June 11, 2012. (Illustration by Scott Campbell).

See also:

☞ Regier, T., & Kay, P. (2009). Language, thought, and color: Whorf was half right. Trends in Cognitive Sciences, 13(10), 439-446.
☞ Gilbert, A. L., Regier, T., Kay, P., & Ivry, R. B. (2006). Whorf hypothesis is supported in the right visual field but not the left. Proceedings of the National Academy of Sciences of the United States of America, 103(2), 489-494.
Aatish Bhatia, The crayola-fication of the world: How we gave colors names, and it messed with our brains (part I)

"Why is the color getting lost in translation? This visual conundrum has its roots in the history of language.  (…) What really is a color? Just like the crayons, we’re taking something that has no natural boundaries – the frequencies of visible light – and dividing into convenient packages that we give a name. (…) Languages have differing numbers of color words, ranging from two to about eleven. Yet after looking at 98 different languages, they saw a pattern. It was a pretty radical idea, that there is a certain fixed order in which these color names arise. This was a common path that languages seem to follow, a road towards increasing visual diversity. (…)

Cultures are quite different in how their words paint the world. (…) For the 110 cultures, you can see how many basic words they use for colors. To the Dani people who live in the highlands of New Guinea, objects come in just two shades. There’s mili for the cooler shades, from blues and greens to black, and mola for the lighter shades, like reds, yellows and white. Some languages have just three basic colors, others have 4, 5, 6, and so on. (…)

If you were a mantis shrimp, your rainbow would be unimaginably rich, with thousands, maybe tens of thousands of colors that blend together, stretching from deep reds all the way to the ultraviolet. To a mantis shrimp, our visual world is unbearably dull. (Another Radiolab plug: in their episode on Color, they use a choir to convey this idea through sound. A visual spectrum becomes a musical one. It’s one of those little touches that makes this show genius.)”

Color words in different languages, Fathom, Nov 8, 2012.

Jun
3rd
Sun
permalink

Self as Symbol. The loopy nature of consciousness trips up scientists studying themselves

              
                                                          M.C. Escher’s “Drawing Hands”

"The consciousness problem remains popular on lists of problems that might never be solved.

Perhaps that’s because the consciousness problem is inherently similar to another famous problem that actually has been proved unsolvable: finding a self-consistent set of axioms for deducing all of mathematics. As the Austrian logician Kurt Gödel proved eight decades ago, no such axiomatic system is possible; any system as complicated as arithmetic contains true statements that cannot be proved within the system.

Gödel’s proof emerged from deep insights into the self-referential nature of mathematical statements. He showed how a system referring to itself creates paradoxes that cannot be logically resolved — and so certain questions cannot in principle be answered. Consciousness, in a way, is in the same logical boat. At its core, consciousness is self-referential awareness, the self’s sense of its own existence. It is consciousness itself that is trying to explain consciousness.
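For readers who want the self-reference stated compactly, the standard schematic form (in LaTeX notation) is given below: for a sufficiently strong, consistent formal system F, Gödel constructed a sentence that in effect asserts its own unprovability.

% Schematic statement of Gödel's construction (the diagonal lemma):
F \vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
% G_F "says" that G_F is not provable in F. If F is consistent, then G_F
% cannot be proved in F, so it is a true arithmetical statement that the
% system cannot reach from within.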

Self-reference, feedback loops, paradoxes and Gödel’s proof all play central roles in the view of consciousness articulated by Douglas Hofstadter in his 2007 book I Am a Strange Loop. Hofstadter is (among other things) a computer scientist, and he views consciousness through lenses unfamiliar to most neuroscientists. In his eyes, it’s not so bizarre to compare math and numbers to the mind and consciousness. Math is, after all, deeply concerned with logic and reason — the stuff of thought. Mathematical paradoxes, Hofstadter points out, open up “profound questions concerning the nature of reasoning — and thus concerning the elusive nature of thinking — and thus concerning the mysterious nature of the human mind itself.”

Enter the loop

In particular, Hofstadter seizes on Gödel’s insight that a mathematical formula — a statement about a number — can itself be represented by a number. So you can take the number describing a formula and insert that number into the formula, which then becomes a statement about itself. Such a self-referential capability introduces a certain “loopiness” into mathematics, Hofstadter notes, something like the famous Escher print of a right hand drawing a left hand, which in turn is drawing the right hand. This “strange loopiness” in math suggested to Hofstadter that something similar is going on in human thought.
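A toy illustration of the encoding trick, in Python: any formula, being a string of symbols, can be packed into a single natural number and recovered from it, so a formula about numbers can be handed its own number. (This uses a simple byte encoding rather than Gödel’s original prime-power scheme, and the example formula is just a placeholder string.)

def encode(formula: str) -> int:
    """Map a formula (a string of symbols) to a unique natural number."""
    return int.from_bytes(formula.encode("utf-8"), "big")

def decode(number: int) -> str:
    """Recover the formula from its Goedel-style number."""
    length = (number.bit_length() + 7) // 8
    return number.to_bytes(length, "big").decode("utf-8")

phi = "x is the number of a provable formula"   # a formula about a number x
n = encode(phi)                                  # the number that encodes phi itself

print(n)                     # one (large) natural number
print(decode(n) == phi)      # True: the formula survives the round trip
# Substituting n for x turns phi into a statement that is, indirectly,
# about phi itself -- the self-referential "loopiness" described above.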

So when he titled his book “I Am a Strange Loop,” Hofstadter didn’t mean that he was personally loopy, but that the concept of an individual — a persistent identity, an “I,” that accompanies what people refer to as consciousness — is a loop of a certain sort. It’s a feedback loop, like the circuit that turns a whisper into an ear-piercing screech when the microphone whispered into is too close to the loudspeaker emitting the sound.

But consciousness is more than just an ordinary feedback loop. It’s a strange loop, which Hofstadter describes as a loop capable of perceiving patterns in its environment and assigning common symbolic meanings to sufficiently similar patterns. An acoustic feedback loop generates no symbols, just noise. A human brain, though, can assign symbols to patterns. While patterns of dots on a TV screen are just dots to a mosquito, to a person, the same dots evoke symbols, such as football players, talk show hosts or NCIS agents. Floods of raw sensory data trigger perceptions that fall into categories designated by “symbols that stand for abstract regularities in the world,” Hofstadter asserts. Human brains create vast repertoires of these symbols, conferring the “power to represent phenomena of unlimited complexity and thus to twist back and to engulf themselves via a strange loop.”

Consciousness itself occurs when a system with such ability creates a higher-level symbol, a symbol for the ability to create symbols. That symbol is the self. The I. Consciousness. “You and I are mirages that perceive themselves,” Hofstadter writes.

This self-generated symbol of the self operates only on the level of symbols. It has no access to the workings of nerve cells and neurotransmitters, the microscopic electrochemical machinery of neurobiological life. The symbols that consciousness contemplates don’t look much like the real thing, the way a map of Texas conveys nothing of the grass and dirt and asphalt and bricks that cover the physical territory.

And just like a map of Texas remains remarkably stable over many decades — it doesn’t change with each new pothole in a Dallas street — human self-identity remains stable over a lifetime, despite constant changes on the micro level of proteins and cells. As an individual grows, matures, changes in many minute ways, the conscious self’s identity remains intact, just as Texas remains Texas even as new skyscrapers rise in the cities, farms grow different crops and the Red River sometimes shifts the boundary with Oklahoma a bit.

If consciousness were merely a map, a convenient shortcut symbol for a complex mess of neurobiological signaling, perhaps it wouldn’t be so hard to figure out. But its mysteries multiply because the symbol is generated by the thing doing the symbolizing. It’s like Gödel’s numbers that refer to formulas that represent truths about numbers; this self-referentialism creates unanswerable questions, unsolvable problems.

A typical example of such a Gödelian paradox is the following sentence: This sentence cannot be true.

Is that sentence true? Obviously not, because it says it isn’t true. But wait — then it is true. Except that it can’t be. Self-referential sentences seem to have it both ways — or neither way.

And so perceptual systems able to symbolize themselves — self-referential minds — can’t be explained just by understanding the parts that compose them. Simply describing how electric charges travel along nerve cells, how small molecules jump from one cell to another, how such signaling sends messages from one part of the brain to another — none of that explains consciousness any more than knowing the English alphabet letter by letter (and even the rules of grammar) will tell you the meaning of Shakespeare’s poetry.

Hofstadter does not contend, of course, that all the biochemistry and cellular communication is irrelevant. It provides the machinery for perceiving and symbolizing that makes the strange loop of consciousness possible. It’s just that consciousness does not itself deal with molecules and cells; it copes with thoughts and emotions, hopes and fears, ideas and desires. Just as numbers can represent the complexities of all of mathematics (including numbers), a brain can represent the complexities of experience (including the brain itself). Gödel’s proof showed that math is “incomplete”; it contains truths that can’t be proven. And consciousness is a truth of a sort that can’t be comprehended within a system of molecules and cells alone.

That doesn’t mean that consciousness can never be understood — Gödel’s work did not undermine human understanding of mathematics, it enriched it. And so the realization that consciousness is self-referential could also usher in a deeper understanding of what the word means — what it symbolizes.

Information handler

Viewed as a symbol, consciousness is very much like many of the other grand ideas of science. An atom is not so much a thing as an idea, a symbol for matter’s ultimate constituents, and the modern physical understanding of atoms bears virtually no resemblance to the original conception in the minds of the ancient Greeks who named them. Even Francis Crick’s gene made from DNA turned out to be much more elusive than the “unit of heredity” imagined by Gregor Mendel in the 19th century. The later coinage of the word gene to describe such units long remained a symbol; early 20th century experiments allowed geneticists to deduce a lot about genes, but nobody really had a clue what a gene was.

“In a sense people were just as vague about what genes were in the 1920s as they are now about consciousness,” Crick said in 1998. “It was exactly the same. The more professional people in the field, which was biochemistry at that time, thought that it was a problem that was too early to tackle.”

It turned out that with genes, their physical implementation didn’t really matter as much as the information storage and processing that genes engaged in. DNA is in essence a map, containing codes allowing one set of molecules to be transcribed into others necessary for life. It’s a lot easier to make a million copies of a map of Texas than to make a million Texases; DNA’s genetic mapping power is the secret that made the proliferation of life on Earth possible. Similarly, consciousness is deeply involved in representing information (with symbols) and putting that information together to make sense of the world. It’s the brain’s information processing powers that allow the mind to symbolize itself.

[Christof] Koch believes that focusing on information could sharpen science’s understanding of consciousness. A brain’s ability to find patterns in influxes of sensory data, to send signals back and forth to integrate all that data into a coherent picture of reality and to trigger appropriate responses all seem to be processes that could be quantified and perhaps even explained with the math that describes how information works.

“Ultimately I think the key thing that matters is information,” Koch says. “You have these causal interactions and they can be quantified using information theory. Somehow out of that consciousness has to arrive.” An inevitable consequence of this point of view is that consciousness doesn’t care what kind of information processors are doing all its jobs — whether nerve cells or transistors.
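The measures Koch has in mind (such as integrated information) are far more involved than anything shown here, but as a minimal example of what “quantified using information theory” can mean, the sketch below computes the mutual information between two binary units: how much observing one tells you about the other.

from collections import Counter
from math import log2

def mutual_information(pairs):
    """Mutual information (bits) of the empirical joint distribution of (x, y) pairs."""
    n = len(pairs)
    p_xy = Counter(pairs)
    p_x = Counter(x for x, _ in pairs)
    p_y = Counter(y for _, y in pairs)
    return sum(
        (c / n) * log2((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
        for (x, y), c in p_xy.items()
    )

coupled     = [(0, 0), (1, 1)] * 50                  # two units that always agree
independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25  # no statistical relationship

print(mutual_information(coupled))      # 1.0 bit: each unit fully predicts the other
print(mutual_information(independent))  # 0.0 bits: knowing one says nothing about the other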

“It’s not the stuff out of which your brain is made,” Koch says. “It’s what that stuff represents that’s conscious, and that tells us that lots of other systems could be conscious too.”

Perhaps, in the end, it will be the ability to create unmistakable features of consciousness in some stuff other than a biological brain that will signal success in the quest for an explanation. But it’s doubtful that experimentally exposing consciousness as not exclusively human will displace humankind’s belief in its own primacy. People will probably always believe that it can only be the strange loop of human consciousness that makes the world go ’round.

“We … draw conceptual boundaries around entities that we easily perceive, and in so doing we carve out what seems to us to be reality,” Hofstadter wrote. “The ‘I’ we create for each of us is a quintessential example of such a perceived or invented reality, and it does such a good job of explaining our behavior that it becomes the hub around which the rest of the world seems to rotate.”

Tom Siegfried, American journalist, author, Self as Symbol, Science News, Feb 11, 2012.

See also:

☞ Laura Sanders, Ph.D. in Molecular Biology from the University of Southern California in Los Angeles, Emblems of Awareness, Science News, Feb 11, 2012.

                                            Degrees of thought

                                          (Credit: Stanford University)

"Awareness typically tracks with wakefulness — especially in normal states of consciousness (bold). People in coma or under general anesthesia score low on both measures, appearing asleep with no signs of awareness. Sometimes, wakefulness and awareness become uncoupled, such as among people in a persistent vegetative state. In this case, a person seems awake and is sometimes able to move but is unaware of the surroundings."  (…)

“Messages constantly zing around the brain in complex patterns, as if trillions of tiny balls were simultaneously dropped into a pinball machine, each with a prescribed, mission-critical path. This constant flow of information might be what creates consciousness — and interruptions might destroy it. (…)

“If you knock on a wooden table or a bucket full of nothing, you get different noises,” Massimini says. “If you knock on the brain that is healthy and conscious, you get a very complex noise.” (…)

In the same way that “life” evades a single, clear definition (growth, reproduction or a healthy metabolism could all apply), consciousness might turn out to be a collection of remarkable phenomena, Seth says. “If we can explain different aspects of consciousness, then my hope is that it will start to seem slightly less mysterious that there is consciousness at all in the universe.” (…)

Recipe for consciousness

Somehow a sense of self emerges from the many interactions of nerve cells and neurotransmitters in the brain — but a single source behind the phenomenon remains elusive.

            

                                                      Illustration: Nicolle Rager Fuller

1. Parietal cortex Brain activity in the parietal cortex is diminished by anesthetics, when people fall into a deep sleep and in people in a vegetative state or coma. There is some evidence suggesting that the parietal cortex is where first-person perspective is generated.

2. Frontal cortex Some researchers argue that parts of the frontal cortex (along with connections to the parietal cortex) are required for consciousness. But other scientists point to a few studies in which people with damaged frontal areas retain consciousness.

3. Claustrum An enigmatic, thin sheet of neural tissue called the claustrum has connections with many other regions. Though the structure has been largely ignored by modern scientists, Francis Crick became keenly interested in the claustrum’s role in consciousness just before his death in 2004.

4. Thalamus As one of the brain’s busiest hubs of activity, the thalamus is believed by many to have an important role in consciousness. Damage to even a small spot in the thalamus can lead to consciousness disorders.

5. Reticular activating system Damage to a particular group of nerve cell clusters, called the reticular activating system and found in the brain stem, can render a person comatose.”

☞ Bruce Hood, The Self Illusion: How the Brain Creates Identity
Theories of consciousness. Make Up Your Own Mind (visualization)
Malcolm MacIver on why did consciousness evolve, and how can we modify it?
Consciousness tag on Lapidarium

May
17th
Thu
permalink

The Self Illusion: How the Brain Creates Identity

            

'The Self'

"For the majority of us the self is a very compulsive experience. I happen to think it’s an illusion and certainly the neuroscience seems to support that contention. Simply from the logical positions that it’s very difficult to, without avoiding some degree of infinite regress, to say a starting point, the trail of thought, just the fractionation of the mind, when we see this happening in neurological conditions. The famous split-brain studies showing that actually we’re not integrated entities inside our head, rather we’re the output of a multitude of unconscious processes.

I happen to think the self is a narrative, and I use the self and the division that was drawn by William James, which is the “I” (the experience of conscious self) and the “me” (which is personal identity, how you would describe yourself in terms of where are you from and everything that makes you up in your predilections and your wishes for the future). Both the “I”, who is sentient of the “me”, and the “me”, which is a story of who you are, I think are stories. They’re constructs and narratives. I mean that in a sense that a story is a reduction or at least it’s a coherent framework that has some causal kind of coherence.

When I go out and give public lectures I like to illustrate the weaknesses of the “I” by using visual illusions of the most common examples. But there are other kinds of illusions that you can introduce which just reveal to people how their conscious experience is actually really just a fraction of what’s really going on. It certainly is not a true reflection of all the mechanisms that are generating it. Visual illusions are very obvious in that. The thing about the visual illusion effects is that even when they’re explained to you, you can’t help but see them, so that’s interesting. You can’t divorce yourself from the mechanisms that are creating the illusion and the mind that’s experiencing the illusion.

The sense of personal identity, this is where we’ve been doing experimental work showing the importance that we place upon episodic memories, autobiographical memories. In our duplication studies for example, children are quite willing to accept that you could copy a hamster with all its physical properties that you can’t necessarily see, but what you can’t copy very easily are the episodic memories that one hamster has had.

This actually resonates with the ideas of John Locke, the philosopher, who also argued that personal identity was really dependent on the autobiographical or episodic memories, and you are the sum of your memories, which, of course, is something that fractionates and fragments in various forms of dementia. As the person loses the capacity to retrieve memories, or these memories become distorted, then the identity of the person, the personality, can be changed, amongst other things. But certainly the memories are very important.

As we all know, memory is notoriously fallible. It’s not cast in stone. It’s not something that is stable. It’s constantly reshaping itself. So the fact that we have a multitude of unconscious processes which are generating this coherence of consciousness, which is the I experience, and the truth that our memories are very selective and ultimately corruptible, we tend to remember things which fit with our general characterization of what our self is. We tend to ignore all the information that is inconsistent. We have all these attribution biases. We have cognitive dissonance. The very thing psychology keeps telling us, that we have all these unconscious mechanisms that reframe information, to fit with a coherent story, then both the “I” and the “me”, to all intents and purposes, are generated narratives.

The illusions I talk about often are this sense that there is an integrated individual, with a veridical notion of past. And there’s nothing at the center. We’re the product of the emergent property, I would argue, of the multitude of these processes that generate us.       

I use the word illusion as opposed to delusion. Delusion implies mental illness, to some extent, and illusion, we’re quite happy to accept that we’re experiencing illusions, and for me the word illusion really does mean that it’s an experience that is not what it seems. I’m not denying that there is an experience. We all have this experience, and what’s more, you can’t escape it easily. I think it’s more acceptable to call it an illusion whereas there’s a derogatory nature to calling something a delusion. I suspect there’s probably a technical difference which has to do with mental illness, but no, I think we’re all perfectly normal; we all experience this illusion.

Oliver Sacks has famously written about various case studies of patients which seem so bizarre, people who have various forms of perceptual anomalies, they mistake their wife for a hat, or there are patients who can’t help but copy everything they see. I think that in many instances, because the self is so core to our normal behavior having an understanding that self is this constructive process, I think if this was something that clinicians were familiar with, then I think that would make a lot of sense.

Neuroethics

In fact, it’s not only in clinical practice, I think, but in a lot of things. I think neuroethics is a very interesting field. I’ve got another colleague, David Eagleman, who’s very interested in these ideas. The culpability, responsibility. We premise our legal systems on this notion that there is an individual who is to be held accountable. Now, I’m not suggesting that we abandon that, and I’m not sure what you would put in its place, but I think we can all recognize that there are certain situations where we find it very difficult to attribute blame to someone. For example, famously, Charles Whitman, the Texan sniper: when they did the autopsy, they discovered a very sizeable tumor in a region of the brain which could have very much influenced his ability to control his rage. I’m not suggesting every mass murderer has an inoperable tumor in their brain, but it’s conceivable, with our increasing knowledge of how the brain operates and our ability to understand it, that there will be more situations where the lawyers will be looking to put the blame on some biological abnormality.

Where is the line to be drawn? I think that’s a very tough one to deal with. It’s a problem that’s not going to go away. It’s something that we’re going to continually face as we start to learn more about the genetics of aggression.

There’s a lot of interest in this thing called the warrior gene. To what extent is this a gene which predisposes you to violence? Or do you need the interaction between the gene and the abusive childhood in order to get this kind of profile? So it’s not just clinicians, it’s actually just about every realm of human activity where you posit the existence of a self and individuals, and responsibility. Then it will reframe the way you think about things. Just the way that we heap blame and praise, the flip side of blaming people is that we praise individuals. But it could be, in a sense, a multitude of factors that have led them to be successful. I think that it’s a pervasive notion. Whether or not we actually change the way we do anything, I’m not so sure, because I think it would be really hard to live our lives dealing with non-individuals, trying to deal with multitude and the history that everyone brings to the table. There’s a good reason why we have this experience of the self. It’s a very sort of succinct and economical way of interacting with each other. We deal with individuals. We fall in love with individuals, not multitudes of past experiences and aspects of hidden agendas, we just pick them out. (…)

The objects are part of the extended sense of self

I keep tying this back to my issues about why certain objects are overvalued, and I happen to believe, like James again, that objects are part of the extended sense of self. We surround ourselves with objects. We place a lot of value on objects that we think are representative of our self.  (…)

We’re the only species on this planet that invests a lot of time and evaluation through our objects, and this has been something that has been with us for a very, very long time.

Think of some of the early artifacts. The difficulty of making these artifacts, the time invested in these things, means that this has been with us from a very early point in our civilization, or before civilization; I think the earliest pieces are probably about 90,000 years old. There are certainly older things that are tools, but pieces of artwork, about 90,000 years old. So it’s been with us a long time. And yes, some of them are obviously sacred objects, for power or religious purposes and so forth. But outside of that, there’s still this sense of having materials or things that we value, and that intrigues me in so many ways. And I don’t think it’s necessarily universal as well. It’s been around a lot, but the endowment effect, for example, is not found everywhere. There’s some intriguing work coming out of Africa.

The endowment effect is this rather intriguing idea that we will spontaneously overvalue an object as soon as we believe it’s in our possession; we don’t actually have to have it physically. Just bidding on something, as soon as you make your connection to an object, then you value it more: you’ll actually remember more about it, and you’ll remember objects which you think are in your possession better than those belonging to someone else. A whole sense of attribution and value gets associated with it, which is one of the reasons why people never get the asking price for the things that they’re trying to sell; they always think their objects are worth more than other people are willing to pay for them.

The first experimental demonstration, by Richard Thaler and Danny Kahneman in the early days of behavioral economics, was that if you just give people, students, coffee cups, and then you ask them to sell them, they always ask more than what someone’s willing to pay. It turns out it’s not just coffee cups, it’s wine, it’s chocolate, it’s anything, basically. There’s been quite a bit of work done on the endowment effect now. As I say, it’s been looked at in different species, and as for the brain mechanisms, having to sell something at a lower price, like loss aversion, is seen as quite painful: it triggers the same pain centers if you think you’re going to lose out on a deal.

What is it about the objects that give us this self-evaluated sense? Well, I think James spoke of this, again, William James commented on the way that we use objects to extend our self. Russell Belk is a marketing psychologist. He has also talked about the extended self in terms of objects. As I say, this is something that I think marketers know in that they create certain quality brands that are perceived to signal to others how good your social status is.

It’s something in us, but it may not be universal, because there are tribes, there are some recent reports from nomadic tribes in central Africa, who don’t seem to have this sense of ownership. It might be a reflection more of the fact that a lot of this work has been done in the West, where we’re very individualistic, and of course individualism almost creates a lot of endowment ideas and certainly supports the endowment effect and the materialism that we see. But this is an area I’d like to do more work with because we have not found any evidence of the endowment effect in children below five, six years of age. I’m interested: is this something that just emerges spontaneously? I suspect not. I suspect this is something that culture is definitely shaping. That’s my hunch, so that’s an empirical question I need to pick apart.

The irrational superstitious behaviors

Another line of research I’ve been working on in the past five years … this was a little bit like putting the cart before the horse, so I put forward an idea, it wasn’t entirely original. It was a combination of ideas of others, most notably Pascal Boyer. Paul Bloom, to some extent, had been thinking something similar. A bunch of us were interested in why religion was around. I didn’t want to specifically focus on religion. I wanted to get to the more general point about belief because it was my hunch that even a lot of atheists or self-stated atheists or agnostics, still nevertheless entertained beliefs which were pretty irrational. I wasn’t meaning irrational in a kind of behavioral economics type of way. I meant irrational in that there were these implicit views that would violate the natural laws as we thought about them. Violations of the natural laws I see as being supernatural. That’s what makes them supernatural. I felt that this was an area worth looking at. They’d been looked at 50, 60 years ago very much in the behaviorist association tradition.

BF Skinner famously wrote a paper on the superstitious behavior of pigeons, and he argued that if you simply set up a reinforcement schedule at a random kind of interval, pigeons will adopt typical patterns that they think are somehow related to the reward, and then you could shape irrational superstitious behaviors. Now that work has turned out to be a bit dubious and I’m not sure it has stood the test of time. But in terms of people’s rituals and routines, it’s quite clear, and I know them in myself. There are these things that we do which are familiar, and we get a little bit irritated if we don’t get to do them, so we do, most of us, entertain some degree of superstitious behavior.

At the time there was a lot of interest in religion and a lot of the hoo-ha about The God Delusion, and I felt that maybe we just need to redress this idea that it’s all to do with indoctrination, because I couldn’t believe the whole edifice of this kind of belief system was purely indoctrination. I’m not saying there’s not indoctrination, and clearly, religions are culturally transmitted. You’re not born to be Jewish or born to be Christian. But what I think religions do is they capitalize on a lot of inclinations that children have. Then I entered into a series of work, and my particular interest was this idea of essentialism and sacred objects and moral contamination.

We took a lot of the work that Paul Rozin had done, talking about things like killers’ cardigans, and we started to see if there were any empirical measures of transfer. For example, would you find yourself wanting to wash your hands more? Would you find priming effects for words which were related to good and evil, based on whether you had touched the object or not? For me there had to be this issue of physical contact. It struck me that this was why it wasn’t a pure association mechanism. It was actually something to do with the belief, a naïve belief that there was some biological entity through which, somehow, moral contamination can transfer.

We started to look, actually not at children now, but at adults, because doing this sort of work with children is very difficult and probably somewhat controversial. But the whole area of research is premised on this idea that there are intuitive ways of seeing the world. Sometimes this is referred to as System One and System Two, or automatic and control. It reappears in a variety of psychological contexts. I just think about it as these unconscious, rapid systems which are triggered automatically. I think their origins are in children. Whilst you can educate people with a kind of slower System Two, if you like, you never eradicate the intuitive ways of seeing the world because they were never taught in the first place. They’re always there. I suppose if you want to ask me whether there is any kind of thing that you can have as a theory that you haven’t yet proven, it’s this: I don’t think you ever throw away any belief system or any ideas that have been derived through these unconscious intuitive processes. You can supersede them, you can overwrite them, but they never go away, and they will reemerge under the right contexts. If you put people through stressful situations or you overload them, you can see the reemergence of these kinds of ways of thinking. The empirical evidence seems to be supporting that. They’ve got wrinkles in their brains. They’re never going to go away. You can try and override them, but they’re always there and they will reappear under the right circumstances, which is why you see the reemergence under stress of a lot of irrational thinking.

For example, teleological explanations, the idea that everything is made for a purpose or a function, is a natural way to see the world. This is Deb Kelemen's work. You will find that people who consider themselves fairly rational and well educated will, nevertheless, default back to teleological explanations if you put them under a stressful, timed kind of situation. So it’s a way of seeing the world that is never eradicated. I think that’s going to be a general principle, in the same way that a reflex, if you think about reflexes, is an unlearned behavioral response. You’re born with a whole set of reflexes. Many of them disappear, but they never entirely go away. They become typically reintegrated into more complex behaviors, but if someone goes into a coma, you can see the reflexes reemerging.

What we think is going on is that in the course of development, these very automatic behaviors become controlled by top-down processes from the cortex, all these higher order systems which are regulating and controlling and suppressing, trying to keep these things under wraps. But when the cortex is put out of action through a coma or head injury, then you can see many of these things reemerging again. I don’t see why there should be any point of departure from a motor system to a perceptual system, to a cognitive system, because they’re all basically patterns of neural firing in the brain, and so I don’t see why it can’t be the case that if concepts are derived through these processes, they could remain dormant and latent as well.

The hierarchy of representations in the brain

One of the things that has been fascinating me is the extent to which we can talk about the hierarchy of representations in the brain. Representations are literally re-presentations. That’s the language of the brain, that’s the mode of thinking in the brain: representation. It’s more than likely, in fact it’s most likely, that there is already representation wired into the brain. If you think about the sensory systems, the array of the eye, for example, is already laid out in a topographical representation of the external world, to which it has not yet been exposed. What happens is that this general layout, these arrangements, become fine-tuned. We know of a lot of work showing that the arrangements of the sensory mechanisms do have a spatial arrangement, so that’s not learned in any sense. But these can become changed through experience, and that’s why the early work of Hubel and Wiesel on the effects of abnormal environments showed that the general pattern could be distorted, but the pattern was already there in the first place.

When you start to move beyond sensory into perceptual systems and then into cognitive systems, that’s when you get into theoretical arguments and the gloves come off. There are some people who argue that it has to be the case that there are certain primitives built into the conceptual systems. I’m talking about the work of, most notably, Elizabeth Spelke.  

There certainly seems to be a lot of perceptual ability in newborns in terms of constancies, noticing invariant aspects of the physical world. I don’t think I have a problem with any of that, but I suppose this is where the debates go. (…)

Shame in the East is something that is at least recognized as a major factor of identity

I’ve been to Japan a couple of times. I’m not an expert in the cultural variation of cognition, but clearly shame, or the avoidance of shame, is a major factor in motivation in Eastern cultures. I think it reflects the sense of self-worth and value in Eastern culture. It is very much a collective notion: they place a lot of emphasis on not letting the team down. I believe they even have a special word for that aspect or experience of shame that we don’t have. That doesn’t mean that it’s a concept we can never entertain, but it does suggest that in the East this is something that is at least recognized as a major factor of identity.

Children don’t necessarily feel shame. I don’t think they’ve got a sense of self until well into their second year. They have the “I”, they have the notion of being, of having control. They will experience the willingness to move their arms, and I’m sure they make that connection very quickly, so they have this sense of self, in that “I” notion, but I don’t think they’ve got personal identity, and that’s one of the reasons that very few of us have much memory of our earliest years. Our episodic memories are very fragmented, sensory events. But from about two to three years on they start to get a sense of who they are. Knowing who you are means becoming integrated into your social environment, and part of becoming integrated into your social environment means acquiring a sense of shame. Below two or three years of age, I don’t think many children have a notion of shame. But from then on, as they have to become members of the social tribe, they have to be made aware of the consequences of being antisocial or doing things that are not expected of them. I think that’s probably late in the acquisition.”

Bruce Hood, Canadian-born experimental psychologist who specialises in developmental cognitive neuroscience, Director of the Bristol Cognitive Development Centre, based at the University of Bristol, Essentialism, Edge, May 17, 2012. (Illustration source)

The Illusion of the Self

"For me, an illusion is a subjective experience that is not what it seems. Illusions are experiences in the mind, but they are not out there in nature. Rather, they are events generated by the brain. Most of us have an experience of a self. I certainly have one, and I do not doubt that others do as well – an autonomous individual with a coherent identity and sense of free will. But that experience is an illusion – it does not exist independently of the person having the experience, and it is certainly not what it seems. That’s not to say that the illusion is pointless. Experiencing a self illusion may have tangible functional benefits in the way we think and act, but that does not mean that it exists as an entity. (…)

For most of us, the sense of our self is as an integrated individual inhabiting a body. I think it is helpful to distinguish between the two ways of thinking about the self that William James talked about. There is conscious awareness of the present moment that he called the “I,” but there is also a self that reflects upon who we are in terms of our history, our current activities and our future plans. James called this aspect of the self the “me,” which most of us would recognize as our personal identity—who we think we are. However, I think that both the “I” and the “me” are actually ever-changing narratives generated by our brain to provide a coherent framework to organize the output of all the factors that contribute to our thoughts and behaviors.

I think it helps to compare the experience of self to subjective contours – illusions such as the Kanizsa pattern where you see an invisible shape that is really defined entirely by the surrounding context. People understand that it is a trick of the mind but what they may not appreciate is that the brain is actually generating the neural activation as if the illusory shape was really there. In other words, the brain is hallucinating the experience. There are now many studies revealing that illusions generate brain activity as if they existed. They are not real but the brain treats them as if they were.

Now that line of reasoning could be applied to all perception, except that not all perception is an illusion. There are real shapes out there in the world and other physical regularities that generate reliable states in the minds of others. The reason the status of reality cannot be applied to the self is that it does not exist independently of the brain that is having the experience. It may appear to have a consistency, regularity and stability that makes it seem real, but those properties alone do not make it so.

Similar ideas about the self can be found in Buddhism and the writings of Hume and Spinoza. The difference is that there is now good psychological and physiological evidence to support these ideas that I cover in the book. (…)

Few cognitive scientists would doubt that the experience of the “I” is constructed from a multitude of unconscious mechanisms and processes. The “me” is similarly constructed, though we may be more aware of the events that have shaped it over our lifetime. But neither is cast in stone and both are open to all manner of reinterpretation. As artists, illusionists, movie makers, and more recently experimental psychologists have repeatedly shown, conscious experience is highly manipulable and context dependent. Our memories are also largely abstracted reinterpretations of events – we all hold distorted memories of past experiences. (…)

In the book I also describe the developmental processes that shape our brains from infancy onwards to create our identities, as well as the systematic biases that distort the content of our identity to form a consistent narrative. I believe much of that distortion and bias is socially relevant in terms of how we would like to be seen by others. We all think we would act and behave in a certain way, but the reality is that we are often mistaken. (…)

Q: What role do you think childhood plays in shaping the self?

Just about everything we value in life has something to do with other people. Much of that influence occurs early in our development, which is one reason why human childhoods are so prolonged in comparison to other species. We invest so much effort and time into our children to pass on as much knowledge and experience as possible. It is worth noting that other species that have long periods of rearing also tend to be more social and intelligent in terms of flexible, adaptive behaviors. Babies are born social from the start but they develop their sense of self throughout childhood as they move to become independent adults that eventually reproduce. I would contend that the self continues to develop throughout a lifetime, especially as our roles change to accommodate others. (…)

The role of social networking in the way we portray our self

There are some interesting phenomena emerging. There is evidence of homophily – the grouping together of individuals who share a common perspective, which is not too surprising. More interesting is evidence of polarization. Rather than opening up and exposing us to different perspectives, social networking on the Internet can foster more radicalization as we seek out others who share our positions. The more others validate our opinions, the more extreme we become. I don’t think we need to be fearful, and I am less concerned than the prophets of doom who predict the downfall of human civilization, but I believe it is true that the way we create the narrative of the self is changing.

Q: If the self is an illusion, what is your position on free will?

Free will is certainly a major component of the self illusion, but it is not synonymous. Both are illusions, but the self illusion extends beyond the issues of choice and culpability to other realms of human experience. From what I understand, I think you and I share the same basic position about the logical impossibility of free will. I also think that compatibilism (that determinism and free will can co-exist) is incoherent. We certainly have more choices today to do things that are not in accord with our biology, and it may be true that we should talk about free will in a meaningful way, as Dennett has argued, but that seems irrelevant to the central problem of positing an entity that can make choices independently of the multitude of factors that control a decision. To me, the problem of free will is a logical impasse – we cannot choose the factors that ultimately influence what we do and think. That does not mean that we throw away the social, moral, and legal rulebooks, but we need to be vigilant about the way our attitudes about individuals will be challenged as we come to understand the factors (both material and psychological) that control our behaviors when it comes to attributing praise and blame. I believe this is somewhat akin to your position. (…)

The self illusion explains so many aspects of human behavior as well as our attitudes toward others. When we judge others, we consider them responsible for their actions. But was Mary Bale, the bank worker from Coventry who was caught on video dropping a cat into a garbage can, being true to her self? Was Mel Gibson, during his drunken anti-Semitic rant, being himself or under the influence of something else? What motivated Congressman Weiner to text naked pictures of himself to women he did not know? In the book, I consider some of the extremes of human behavior, from mass murderers with brain tumors that may have made them kill to rising politicians who self-destruct. By rejecting the notion of a core self and considering how we are a multitude of competing urges and impulses, I think it is easier to understand why we suddenly go off the rails. It explains why we act, often unconsciously, in a way that is inconsistent with our self image – or the image of our self as we believe others see us.

That said, the self illusion is probably an inescapable experience we need for interacting with others and the world, and indeed we cannot readily abandon or ignore its influence, but we should be skeptical that each of us is the coherent, integrated entity we assume we are.

Bruce Hood, Canadian-born experimental psychologist who specialises in developmental cognitive neuroscience, Director of the Bristol Cognitive Development Centre, based at the University of Bristol, interviewed by Sam Harris, The Illusion of the Self, Sam Harris blog, May 22, 2012.

See also:

Existence: What is the self?, Lapidarium notes
Paul King on what is the best explanation for identity
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Professor George Lakoff: Reason is 98% Subconscious Metaphor in Frames & Cultural Narratives
Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking

Apr
29th
Sun
permalink

The time machine in our mind. The imagistic mental machinery that allows us to travel through time

            

"Our ability to close our eyes and imagine the pleasures of Super Bowl Sunday or remember the excesses of New Year’s Eve is a fairly recent evolutionary development, and our talent for doing this is unparalleled in the animal kingdom. We are a race of time travelers, unfettered by chronology and capable of visiting the future or revisiting the past whenever we wish. If our neural time machines are damaged by illness, age or accident, we may become trapped in the present. (…)

Why did evolution design our brains to go wandering in time? Perhaps it’s because an experience is a terrible thing to waste. Moving around in the world exposes organisms to danger, so as a rule they should have as few experiences as possible and learn as much from each as they can. (…)

Time travel allows us to pay for an experience once and then have it again and again at no additional charge, learning new lessons with each repetition. When we are busy having experiences—herding children, signing checks, battling traffic—the dark network is silent, but as soon as those experiences are over, the network is awakened, and we begin moving across the landscape of our history to see what we can learn—for free.

Animals learn by trial and error, and the smarter they are, the fewer trials they need. Traveling backward buys us many trials for the price of one, but traveling forward allows us to dispense with trials entirely. Just as pilots practice flying in flight simulators, the rest of us practice living in life simulators, and our ability to simulate future courses of action and preview their consequences enables us to learn from mistakes without making them.

We don’t need to bake a liver cupcake to find out that it is a stunningly bad idea; simply imagining it is punishment enough. The same is true for insulting the boss and misplacing the children. We may not heed the warnings that prospection provides, but at least we aren’t surprised when we wake up with a hangover or when our waists and our inseams swap sizes. (…)

Perhaps the most startling fact about the dark network isn’t what it does but how often it does it. Neuroscientists refer to it as the brain’s default mode, which is to say that we spend more of our time away from the present than in it. People typically overestimate how often they are in the moment because they rarely take notice when they take leave. It is only when the environment demands our attention—a dog barks, a child cries, a telephone rings—that our mental time machines switch themselves off and deposit us with a bump in the here and now. We stay just long enough to take a message and then we slip off again to the land of Elsewhen, our dark networks awash in light.”

Daniel Gilbert, Professor of Psychology at Harvard University, Essay: The Brain: Time Travel in the Brain, TIME, Jan. 29, 2007. (Illustration for TIME by Jeffery Fischer).

Kurt Stocker: The time machine in our mind (2012)

                                            
(Click image to open research paper in pdf)

Abstract:

"This article provides the first comprehensive conceptual account for the imagistic mental machinery that allows us to travel through time—for the time machine in our mind. It is argued that language reveals this imagistic machine and how we use it. Findings from a range of cognitive fields are theoretically unified and a recent proposal about spatialized mental time travel is elaborated on. The following novel distinctions are offered: external vs. internal viewing of time; “watching” time vs. projective “travel” through time; optional vs. obligatory mental time travel; mental time travel into anteriority or posteriority vs. mental time travel into the past or future; single mental time travel vs. nested dual mental time travel; mental time travel in episodic memory vs. mental time travel in semantic memory; and “seeing” vs. “sensing” mental imagery. Theoretical, empirical, and applied implications are discussed.”

"The theoretical strategy I adopt is to use language as an entree to a conceptual level that seems deeper than language itself (Pinker, 2007; Talmy, 2000). The logic of this strategy is in accordance with recent findings that many conceptualizations observed in language have also been found to exist in mental representations that are more basic than language itself. (…)

It is proposed that this strategy helps to uncover an imagistic mental machinery that allows us to travel through time—that this strategy helps us to uncover the time machine in our mind.

A central term used in this article is “the imagery structuring of time.” By this I refer to an invisible spatial scaffolding in our mental imagery across which temporal material can be splayed, the existence of which will be proposed in this article. At times it will be quite natural to assume that a space-to-time mapping in the sense of conceptual metaphor theory is involved in the structuring of this invisible scaffolding. (…)

It is thus for the present investigation more coherent to assume that mental time is basically constructed out of “spatialized” mental imagery—“spatialized” is another central term that I use in this article. I use it in the sense that it is neutral as to whether some of the imagery might be transferred via space-to-time mappings or whether some of the imagery might relate to space-to-time mappings only in an etymological sense. Examples of temporal constructions that are readily characterized in terms of spatialized temporal imagery structuring are the conceptualizations underlying the use of before and after, conceptualizations that are often treated as having autonomous temporal status and as relating only etymologically to space.

The current investigation can refine this view somewhat, by postulating that spatialized temporal structures still play a very vital role in the imagery structuring underlying before and after. (…)

The theoretical strategy, to use linguistic expressions about time as an entree to conceptual structures about time that seem deeper than language itself, has been applied quite fruitfully, since it has allowed for the development of a rather comprehensive and precise conceptual account of the time machine in our mind. The theory is not an ad-hoc theory, since linguistic conceptualizations cannot be interpreted in a totally arbitrary way—for example language does not allow us to assume that a sentence such as I shopped at the store before I went home means that first the going home took place and then the shopping. In this respect the theory is to some degree already a data-guided theory, since linguistic expressions are data. However, the proposal of the theory that language has helped us to uncover a specific system of spatialized imagery structuring of time can only be evaluated by carrying out corresponding psychological (cognitive and neurocognitive) experiments and some ideas for such experiments have been presented. Since the time machine in our mind is a deeply fascinating apparatus, I am confident that theoretical and empirical investigations will continue to explore it.”

— Kurt Stocker, The time machine in our mind (pdf), Institute of Cognitive and Brain Sciences, University of California, Berkeley, CA, USA, 2012

See also:

☞ T. Suddendorf, D. Rose Addis and M. C. Corballis, Mental time travel and the shaping of the human mind (pdf), The Royal Society, 2009.

Abstract: “Episodic memory, enabling conscious recollection of past episodes, can be distinguished from semantic memory, which stores enduring facts about the world. Episodic memory shares a core neural network with the simulation of future episodes, enabling mental time travel into both the past and the future. The notion that there might be something distinctly human about mental time travel has provoked ingenious attempts to demonstrate episodic memory or future simulation in nonhuman animals, but we argue that they have not yet established a capacity comparable to the human faculty. The evolution of the capacity to simulate possible future events, based on episodic memory, enhanced fitness by enabling action in preparation of different possible scenarios that increased present or future survival and reproduction chances. Human language may have evolved in the first instance for the sharing of past and planned future events, and, indeed, fictional ones, further enhancing fitness in social settings.”

☞ George Lakoff, Mark Johnson, Conceptual Metaphor in Everyday Language (pdf), The Journal of Philosophy, Vol 77, 1980.
Our sense of time is deeply entangled with memory
Time tag on Lapidarium notes

Apr
15th
Sun
permalink

How liberal and conservative brains are wired differently. Liberals and conservatives don’t just vote differently, they think differently

           

"There’s now a large body of evidence showing that those who opt for the political left and those who opt for the political right tend to process information in divergent ways and to differ on any number of psychological traits.

Perhaps most important, liberals consistently score higher on a personality measure called “openness to experience,” one of the “Big Five” personality traits, which are easily assessed through standard questionnaires. That means liberals tend to be the kind of people who want to try new things, including new music, books, restaurants and vacation spots — and new ideas.

“Open people everywhere tend to have more liberal values,” said psychologist Robert McCrae, who conducted voluminous studies on personality while at the National Institute on Aging at the National Institutes of Health.

Conservatives, in contrast, tend to be less open — less exploratory, less in need of change — and more “conscientious,” a trait that indicates they appreciate order and structure in their lives. This gels nicely with the standard definition of conservatism as resistance to change — in the famous words of William F. Buckley Jr., a desire to stand “athwart history, yelling ‘Stop!’ ” (…)

We see the consequences of liberal openness and conservative conscientiousness everywhere — and especially in the political battle over facts. (…)

Compare this with a different irrationality: refusing to admit that humans are a product of evolution, a chief point of denial for the religious right. In a recent poll, just 43 percent of tea party adherents accepted the established science here. Yet unlike the vaccine issue, this denial is anything but new and trendy; it is well over 100 years old. The state of Tennessee is even hearkening back to the days of the Scopes “Monkey” Trial, more than 85 years ago. It just passed a bill that will weaken the teaching of evolution.

Such are some of the probable consequences of openness, or the lack thereof. (…)

Now consider another related trait implicated in our divide over reality: the “need for cognitive closure.” This describes discomfort with uncertainty and a desire to resolve it into a firm belief. Someone with a high need for closure tends to seize on a piece of information that dispels doubt or ambiguity, and then freeze, refusing to consider new information. Those who have this trait can also be expected to spend less time processing information than those who are driven by different motivations, such as achieving accuracy.

A number of studies show that conservatives tend to have a greater need for closure than do liberals, which is precisely what you would expect in light of the strong relationship between liberalism and openness. “The finding is very robust,” explained Arie Kruglanski, a University of Maryland psychologist who has pioneered research in this area and worked to develop a scale for measuring the need for closure.

The trait is assessed based on responses to survey statements such as “I dislike questions which could be answered in many different ways” and “In most social conflicts, I can easily see which side is right and which is wrong.” (…)

Anti-evolutionists have been found to score higher on the need for closure. And in the global-warming debate, tea party followers not only strongly deny the science but also tend to say that they “do not need any more information” about the issue.

I’m not saying that liberals have a monopoly on truth. Of course not. They aren’t always right; but when they’re wrong, they are wrong differently.

When you combine key psychological traits with divergent streams of information from the left and the right, you get a world where there is no truth that we all agree upon. We wield different facts, and hold them close, because we truly experience things differently. (…)”

Chris Mooney, science and political journalist, author of four books, including the New York Times bestselling The Republican War on Science and the forthcoming The Republican Brain: The Science of Why They Deny Science and Reality (April 2012), Liberals and conservatives don’t just vote differently. They think differently, The Washington Post, April 13, 2012. (Illustration: Koren Shadmi for The Washington Post)

See also:

Political science: why rejecting expertise has become a campaign strategy, Lapidarium notes
Cognitive and Social Consequences of the Need for Cognitive Closure, European Review of Social Psychology
☞ Antonio Chirumbolo, The relationship between need for cognitive closure and political orientation: the mediating role of authoritarianism, Department of Social and Developmental Psychology, University of Rome ‘La Sapienza’
Paul Nurse, Stamp out anti-science in US politics, New Scientist, 14 Sept 2011
☞ Chris Mooney, Why Republicans Deny Science: The Quest for a Scientific Explanation, The Huffington Post, Jan 11, 2012
☞ John Allen Paulos, Why Don’t Americans Elect Scientists?, NYTimes, Feb 13, 2012.
Study: Conservatives’ Trust in Science Has Fallen Dramatically Since Mid-1970s, American Sociological Association, March 29, 2012.
Why people believe in strange things, Lapidarium notes

Mar
21st
Wed
permalink

Richard Doyle on Creativity, evolution of mind and the rhetorical membrane between humans and an informational universe

              

Q [Jason Silva]: The Jesuit priest and scientist Pierre Teilhard de Chardin spoke of the Noosphere very early on. A profile in WIRED magazine said,

"Teilhard imagined a stage of evolution characterized by a complex membrane of information enveloping the globe and fueled by human consciousness”.. Teilhard saw the Net coming more than half a century before it arrived. He believed this vast thinking membrane would ultimately coalesce into “the living unity of a single tissue” containing our collective thoughts and experiences."  Teilhard wrote, "The living world is constituted by consciousness clothed in flesh and bone.

He argued that the primary vehicle for increasing complexity consciousness among living organisms was the nervous system. The informational wiring of a being, he argued - whether of neurons or electronics - gives birth to consciousness. As the diversification of nervous connections increases, evolution is led toward greater consciousness… thoughts?

Richard Doyle: Yes, he also called this process of the evolution of consciousness the “Omega Point”. The noosphere imagined here relied on a change in our relationship to consciousness as much as on any technological change, and was part of evolution’s epic quest for self-awareness. Here Teilhard is in accord with Julian Huxley (Aldous’ brother, a biologist) and Carl Sagan when they observed that “we are a way for the cosmos to know itself.” Sri Aurobindo’s The Life Divine traces out this evolution of consciousness as well, through the Greek and Sanskrit traditions as well as Darwinism and (relatively) modern philosophy. All are describing evolution’s slow and dynamic quest towards understanding itself.

         

I honestly think we are still grappling with the fact that our minds are distributed across a network by technology, and have been in a feedback loop between our brains and technologies at least since the invention of writing. As each new “mutation” occurs in the history of the evolution of information technology, the very character of our minds shifts. McLuhan's Understanding Media is instructive here as well (he parsed it as the Global Village), and of course McLuhan was the bard who advised Leary on "Tune in, Turn on, Drop Out" and was very influential on Terence McKenna.

One difference between now and Plato’s time is the infoquake through which we are all living. This radical increase in quantity no doubt has qualitative effects - it changes what it feels like to think and remember. Plato was working through the effect of one new information technology – writing – whereas today we “upgrade” every six months or so. Teilhard observes the correlative of this evolutionary increase in information - and the sudden thresholds it crosses - in the evolution of complexity and nervous systems. The noosphere is a way of helping us deal with this “phase transition” of consciousness that may well be akin to the phase transition between liquid water and water vapor - a change in degree that effects a change in kind.

Darwin’s Pharmacy suggests that ecodelics were precisely such a mutation in information technology that increased sexually selective fitness through the capacity to process greater amounts of information, and that they are “extraordinarily sensitive to initial rhetorical traditions.” What this means is that because ecodelic experiences are so sensitive to the context in which we experience them, they can help make us aware of the effect of language and music etc. on our consciousness, and thereby offer an awareness of our ability to affect our own consciousness through our linguistic and creative choices. This can be helpful when trying to browse the infoquake. Many other practices do so as well - meditation is the most well established practice for noticing the effects we can have on our own consciousness, and Sufi dervishes demonstrate this same outcome for dancing. I do the same on my bicycle, riding up a hill and chanting.

One problem I have with much of the discourse of “memes” is that it is often highly reductionistic - it often forgets that ideas have an ecology too, they must be “cultured.” Here I would argue, drawing on Lawrence Lessig's work on the commons, that the “brain” is a necessary but insufficient “spawning ground” for ideas that become actual. The commons is the spawning ground of ideas; brains are pretty obviously social as well as individual. Harvard biologist Richard Lewontin notes that there is no such thing as “self replicating” molecules, since they always require a context to be replicated. This problem goes back at least to computer scientist John von Neumann's 1947 paper on self-reproducing automata.

I think Terence McKenna described the condition as "language is loose on planet three", and its modern version probably occurs first in the work of writer William S. Burroughs, whose notion of the "word virus" predates the "meme" by at least a decade. Then again this notion of "ideas are real" goes back to cosmologies that begin with the priority of consciousness over matter, as in "In the beginning was the word, and the word was god, and the word was with god." So even Burroughs could get a late pass for his idea. (…)

Q: Richard Dawkins's definition of a meme is quite powerful:

“I think that a new kind of replicator has recently emerged on this very planet, […] already achieving evolutionary change at a rate that leaves the old gene panting far behind.” [The replicator is] human culture; the vector of transmission is language, and the spawning ground is the brain.

This notion that “the vector of transmission is language” is very compelling. It seems to suggest that just as in biological evolution the vector of transmission has been the DNA molecule, in the noosphere, the next stage up, it is LANGUAGE that has become a major player in the transfer of information towards achieving evolutionary change. Kind of affects how you think about the phrase “words have power”. This insight reminds me of a quote that describes, in words, the subjective ecstasy that a mind feels upon having a transcendent realization that feels as if it advances evolution:

"A universe of possibilities,

Grey infused by color,

The invisible revealed,

The mundane blown away

by awe” 

Is this what you mean by ‘the ecstasy of language’?

Richard Doyle: Above, I noted that ecodelics can make us aware of the feedback loops between our creative choices – should I eat mushrooms in a box? - Should I eat them with a fox? - and our consciousness. In other words, they can make us aware of the tremendous freedom we have in creating our own experience. Leary called this “internal freedom.” Becoming aware of the practically infinite choices we have to compose our lives, including the words we use to map them, can be overwhelming – we feel in these instances the “vertigo of freedom.” What to do? In ecodelic experience we can perceive the power of our maps. That moment in which we can learn to abide the tremendous creative choice we have, and take responsibility for it, is what I mean by the “ecstasy of language.” 

I would point out, though, that for those words you quote to do their work, they have to be read. The language does not do it "on its own" but as a result of the highly focused attention of readers. This may seem trivial but it is often left out, with some serious consequences. And “reading” can mean “follow up with interpretation”. I cracked up when I googled those lines above and found them in a corporate blog about TED, for example. Who knew that neo-romantic poetry was the emerging interface of the global corporate noosphere? (…)

Q: Buckminster Fuller described humans as "pattern integrities", Ray Kurzweil says we are "patterns of information". James Gleick's new book, The Information, says that “information may be more primary than matter”..  what do you make of this? And if we indeed are complex patterns, how can we hack the limitations of biology and entropy to preserve our pattern integrity indefinitely? 

Richard Doyle: First: It is important to remember that the history of the concept and tools of “information” is full of blindspots – we seem to be constantly tempted to underestimate the complexity of any given system needed to make any bit of information meaningful or useful. Chaitin, Kolmogorov, Stephen Wolfram and John von Neumann each came independently to the conclusion that information is only meaningful when it is “run” - you can’t predict the outcome of even many trivial programs without running the program. So to say that “information may be more primary than matter” we have to remember that “information” does not mean “free from constraints.” Thermodynamics – including entropy – remains.
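A minimal, illustrative sketch of that point (not from Doyle's work; it assumes Python and the familiar Collatz rule): even for a trivial program like this, the only general way to know how many steps a given input takes is to run it.

    # Toy illustration: even this trivial program's behaviour generally
    # can't be predicted without running it (Collatz / 3n+1 rule).
    def collatz_steps(n: int) -> int:
        """Count steps of the 3n+1 rule until n reaches 1."""
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    if __name__ == "__main__":
        for start in (6, 7, 27):
            print(start, "->", collatz_steps(start))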

Molecular and informatic reductionism – the view that you can best understand the nature of a biological system by cutting it up into the most significant bits, e.g. DNA – is a powerful model that enables us to do things with biological systems that we never could before. Artist Eduardo Kac collaborated with a French scientist to make a bioluminescent bunny. That’s new! But sometimes it is so powerful that we forget its limitations. The history of the human genome project illustrates this well. AND the human genome is incredibly interesting. It’s just not the immortality hack many thought it would be.

In this sense biology is not a limitation to be “transcended” (Kurzweil), but a medium of exploration whose constraints are interesting and sublime. On this scale of ecosystems, “death” is not a “limitation” but an attribute of a highly dynamic interactive system. Death is an attribute of life. Viewing biology as a “limitation” may not be the best way to become healthy and thriving beings.

Now, that said, looking at our characteristics as “patterns of information” can be immensely powerful, and I work with it at the level of consciousness as well as life. Thinking of ourselves as “dynamic patterns of multiply layered and interconnected self transforming information” is just as accurate a description of human beings as “meaningless noisy monkeys who think they see god”, and is likely to have much better effects. A nice emphasis on this “pattern” rather than the bits that make it up can be found in Carl Sagan’s “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.”

Q: Richard Dawkins declared in 1986 that ”What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions, […] If you want to understand life,” Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.” How would you explain the relationship between information technology and the reality of the physical world?

Richard Doyle: Again, information is indeed physical. We can treat a sequence of information as an abstraction and take it out of its context – like a quotation or a jellyfish gene spliced into a rabbit to enable it to glow. We can compress information, dwindling the resources it takes to store or process it. But “information, words, instructions” all require physical instantiation to even be “information, words, instructions.” Researcher Rolf Landauer showed back in the 1960s that even erasure is physical. So I actually think throbbing gels and oozes and slime mold and bacteria eating away at the garbage gyre are very important when we wish to “understand” life. I actually think Dawkins gets it wrong here – he is talking about “modeling” life, not “understanding” it. Erwin Schrödinger, the originator of the idea of the genetic code and therefore of the “informatic” tradition of biology in which Dawkins speaks here, knew this very well and insisted on the importance of first person experience for understanding.

So while I find these metaphors useful, that is exactly what they are: metaphors. There is a very long history to the attempt to model words and action together: again, John 1:1 is closer to Dawkins's position here than he may be comfortable with: “In the Beginning was the word, and the word was god, and the word was with god” is a way of working with this capacity of language to bring phenomena into being. It is really only because we habitually think of language as “mere words” that we continually forget that they are a manifestation of a physical system and that they have very actual effects not limited to the physics of their utterance – the words “I love you” can have an effect much greater than the amount of energy necessary to utter them. Our experiences are highly tuneable by the language we use to describe them.

Q: Talk about the mycelial archetype. Author Paul Stamets compares the pattern of the mushroom mycelium with the overlapping information-sharing systems that comprise the Internet, with the networked neurons in the brain, and with a computer model of dark matter in the universe. All share this densely intertwingled filamental structure… What is the connection? What is the pattern that connects here?

Richard Doyle: First things first: Paul Stamets is a genius and we should listen to his world view carefully and learn from it. Along with Lynn Margulis and Dorion Sagan, whose work I borrow from extensively in Darwin’s Pharmacy (as well as many others), Stamets is asking us to contemplate and act on the massive interconnection between all forms of life. This is a shift in worldview that is comparable to the Copernican shift from a geocentric cosmos – it is a shift toward interconnection and consciousness of interconnection. And I like how you weave in Gregory Bateson's phrase “the pattern that connects” here, because Bateson (whose father, William Bateson, was one of the founders of modern genetics) continuously pointed toward the need to develop ways of perceiving the whole. The “mycelial archetype”, as you call it, is a reliable and rather exciting way to recall the whole: What we call “mushrooms” are really the fruiting bodies of an extensive network of cross connection.

That fuzz growing in an open can of tomato paste in your fridge – mycelium. So even opening our refrigerator – should we be lucky enough to have one, with food in it - can remind us that what we take to be reality is in actuality only appearance – a sliver, albeit a significant one for our world, of the whole. That fuzz can remind us that (1) appearance and reality are not the same thing at all and (2) beyond appearance there is a massive interconnection in unity. This can help remind us who and what we really are.

With the word “archetype”, you of course invoke the psychologist Carl Jung, who saw archetypes as templates for understanding, ways of organizing our story of the world. There are many archetypes – the Hero, the Mother, the Trickster, the Sage. They are very powerful because they help stitch together what can seem to be a chaotic world – that is both their strength and their weakness. It is a weakness because most of the time we are operating within an archetype and we don’t even know it, and we don’t know, therefore, that we can change our archetype.

Experimenting with a different archetype – imagining, for example, the world through the lens of a 2400-year-old organism that is mostly invisible to a very short-lived and recent species becoming aware of its creative responsibility in altering the planet – is incredibly powerful, and in Darwin’s Pharmacy I am trying to offer a way to experiment with the idea of the plant planet as well as the “mycelium” archetype. One powerful aspect of treating the mycelium as our archetype as humanity is that it is “distributed” - it does not operate via a center of control but through cross connection “distributed” over a space.

Anything we can do to remember both our individuation and our interconnection is timely – we experience the world as individuals, and our task is to discover our nature within the larger scale reality of our dense ecological interconnection. In the book I point to the Upanishad’s “Tat Tvam Asi” as a way of comprehending how we can both be totally individual and an aspect of the whole.

Q: You’ve talked about the ecstasy of language and the role of rhetoric in shaping reality. These notions echo some of Terence McKenna's ideas about language… He calls language an “ecstatic activity of signification”… and says that for the “inspired one, it is almost as if existence is uttering itself through him”… Can you expand on this? How does language create reality?

Richard Doyle: It’s incredibly fun and insightful to echo Terence McKenna. He’s really in this shamanic bard tradition that goes all the way back to Empedocles at least, and is distributed widely across the planet. He’s got a bit of Whitman in him with his affirmation of the erotic aspects of enlightenment. He was Emerson speaking to a Lyceum crowd remixed through rave culture. Leary and McKenna were resonating with the Irish bard archetype. And Terence was echoing Henry Munn, who was echoing Maria Sabina, whose chants and poetics can make her seem like Echo herself – a mythological storyteller and poet (literally “sound”) who so transfixes Hera (Zeus’s wife) that Zeus can consort with nymphs. Everywhere we look there are allegories of sexual selection’s role in the evolution of poetic & shamanic language!

And Terence embodies the spirit of eloquence, helping translate our new technological realities (e.g. virtual reality, a fractal view of nature, radical ecology) and the states of mind that were likely to accompany them. Merlin Donald writes of the effects of “external symbolic storage” on human culture – as a onetime student of McLuhan’s, Donald was following up on Plato’s insight, mentioned above, that writing changes how we think, and therefore who we are.

Human culture is going through a fantastic “reality crisis” wherein we discover the creative role we play in nature. Our role in global climate change – not to mention our role in dwindling biodiversity – is the “shadow” side of our increasing awareness that humans have a radical creative responsibility for their individual and collective lives. And our lives are inseparable from the ecosystems with which we are enmeshed. THAT is reality. To the extent that we can gather and focus our attention on retuning our relation towards ecosystems in crisis, language can indeed shape reality. We’ll get the future we imagine, not necessarily the one we deserve.

Q: Robert Anton Wilson spoke about “reality tunnels”…. These ‘constructs’ can limit our perspectives and perception of reality, they can trap us, belittle us, enslave us, make us miserable or set us free… How can we hack our reality tunnel?  Is it possible to use rhetoric and/or psychedelics to “reprogram” our reality tunnel? 

Richard Doyle: We do nothing but program and reprogram our reality tunnels. Seriously, the Japanese reactor crisis follows on the BP oil spill as a reminder that we are deeply interconnected on the level of infrastructure – technology is now planetary in scale, so what happens here affects somebody, sometimes Everybody, there. These infrastructures – our food sheds, our energy grid, our global media - run on networks, protocols, global standards, agreements: language, software, images, databases and their mycelial networks.

The historian Michel Foucault called these “discourses”, but we need to connect these discourses to the nonhuman networks with which they are enmeshed, and globalization has been in part about connecting discourses to each other across the planet. Ebola ends up in Virginia, Starbucks in Hong Kong. This has been true for a long time, of course – Mutual Assured Destruction was planetary in scale and required a communication and control structure linking, for example, a Trident submarine under the arctic ice sheet – remember that? - to a putatively civilian political structure Eisenhower rightly warned us about: the military industrial complex. The moon missions illustrate this principle as well – we remember what was said as much as what else was done, and what was said, for a while, seem to induce a sense of truly radical and planetary possibility.

So if we think of words as a description of reality rather than part of the infrastructure of reality, we miss out on the way different linguistic patterns act as catalysts for different realities. In my first two books, before I really knew about Wilson’s work or had worked through Korzybski with any intensity, I called these “rhetorical softwares.”

Now the first layer of our reality tunnel is our implicit sense of self – this is the only empirical reality any of us experiences – what we subjectively experience. RAW was a brilliant analyst of the ways experience is shaped by the language we use to describe it. One of my favorite examples from his work is his observation that in English, “reality” is a noun, so we start to treat it as a “thing”, when in fact reality, this cosmos, is also quite well mapped as an action – a dynamic unfolding for 13.7 billion years. That is a pretty big mismatch between language and reality, and can give us a sense that reality is inert, dead, lifeless, “concrete”, and thus not subject to change. By experimenting with what Wilson, following scientist John Lilly, called “metaprograms”, we can change the maps that shape the reality we inhabit. (…)

Q: The film Inception explored the notion that our inner world can be a vivid, experiential dimension, and that we can hack it, and change our reality… what do you make of this? 

Richard Doyle: The whole contemplative tradition insists on this dynamic nature of consciousness. “Inner” and “outer” are models for aspects of reality – words that map the world only imperfectly. Our “inner world” - subjective experience – is all we ever experience, so if we change it, obviously we will see a change in what we label “external” reality, which it is of course part of and not separable from. One of the maps we should experiment with, in my view, is this “inner” and “outer” one – this is why one of my aliases is “mobius.” A mobius strip helps make clear that “inside” and “outside” are… labels. As you run your finger along a mobius strip, the “inside” becomes “outside” and the “outside” becomes “inside.”

Q: Can we put inceptions out into the world?

Richard Doyle: We do nothing but! And, it is crucial to add, so too does the rest of our ecosystem. Bacteria engage in quorum sensing, begin to glow, and induce other bacteria to glow – this puts their inceptions into the world. Thanks to the work of scientists like Anthony Trewavas, we know that plants engage in signaling behavior between and across species and even kingdoms: orchids “throw” images of female wasps into the world, attracting male wasps, root cells map the best path through the soil. The whole blooming confusion of life is signaling, mapping and informing itself into the world. The etymology of “inception” is “to begin, take in hand” - our models and maps are like imagined handholds on a dynamic reality.

Q: What is the relationship between psychedelics and information technology? How are ipods, computers and the internet related to LSD? 

Richard Doyle: This book is part of a trilogy on the history of information in the life sciences. So, first: psychedelics and biology. It turns out that molecular biology and psychedelics were important contexts for each other. I first started noticing this when I found that many people who had taken LSD were talking about their experiences in the language of molecular biology – accessing their DNA and so forth. When I learned that psychedelic experience was very sensitive to “set and setting” - the mindset and context of their use - I wanted to find out how this language of molecular biology was affecting people’s experiences of the compounds. In other words, how did the language affect something supposedly caused by chemistry?

Tracking the language through thousands of pages, I found that both the discourse of psychedelics and molecular biology were part of the “informatic vision” that was restructuring the life sciences as well as the world, and found common patterns of language in the work of Timothy Leary (the Harvard psychologist) and Francis Crick (who won the Nobel prize with James Watson and Maurice Wilkins for determining the structure of DNA in 1953), so in 2002 I published an article describing the common “language of information” spoken by Leary and Crick. I had no idea that Crick had apparently been using LSD when he was figuring out the structure of DNA. Yes, that blew my mind when it came out in 2004. I feel like I read that between the lines of Crick’s papers, which gave me confidence to write the rest of the book about the feedback between psychedelics and the world we inhabit.

The paper did home in on the role that LSD played in the invention of PCR (polymerase chain reaction) – Kary Mullis, who won the Nobel prize for the invention of this method of making copies of a sequence of DNA, talked openly of the role that LSD played in the process of invention. Chapter 4 of the book looks at the use of LSD in “creative problem solving” studies of the 1960s. These studies – hard to imagine now, 39 years into the War on Drugs, but we can Change the Archetype - suggest that, used with care, psychedelics can be part of effective training in remembering how to discern the difference between words and things, maps and territories.

In short, this research suggested that psychedelics were useful for seeing the limitations of words as well as their power, perhaps occasioned by the experience of the linguistic feedback loops between language and psychedelic experiences that themselves could never be satisfactorily described in language. I argue that Mullis had a different conception of information than mainstream molecular biology – a pragmatic concept steeped in what you can do with words rather than in what they mean. Mullis seems to have thought of information as “algorithms” - recipes of code - while the mainstream view was thinking of it implicitly semantically, as “words with meaning.”

iPods, Internet, etc.: Well, in some cases there are direct connections. Perhaps Bill Joy said it best when he said that there was a reason that LSD and Unix were both from Berkeley. What the Dormouse Said by John Markoff came out after I wrote my first paper on Mullis and while I was working on the book, and it was really confirmation of a lot of what I was seeing, as indicated by my conceptual model of what is going on, which is as follows: Sexual selection is a good way to model the evolution of information technology. It yields bioluminescence – the most common communication strategy on the planet – chirping insects, singing birds, peacocks fanning their feathers, singing whales, speaking humans, and humans with internet access. These are all techniques of information production, transformation or evaluation. I am persuaded by Geoffrey Miller’s update of Charles Darwin’s argument that language and mind are sexually selected traits, selected not simply for survival or even the representation of fitness, but for their sexiness. Leary: “Intelligence is the greatest aphrodisiac.”

I offer the hypothesis that psychedelics enter the human toolkit as “eloquence adjuncts” - tools and techniques for increasing the efficacy of language to seemingly create reality – different patterns of language (and other attributes of set and setting) literally cause different experiences. The informatic revolution is about applying this ability to create reality with different “codes” to the machine interface. Perhaps this is one of the reasons people like Mitch Kapor (a pioneer of computer spreadsheets), Stewart Brand (founder of a pre-internet computer commons known as the Well), Bob Wallace (one of the original Microsoft seven and an early proponent of shareware), and Mark Pesce were or are all psychonauts.

Q: Cyborg Anthropologist Amber Case has written about Techno-social wormholes.. the instant compression of time and space created every time we make a telephone call…  What do you make of this compression of time and space made possible by the engineering “magic” of technology? 

Richard Doyle: It’s funny the role that the telephone call plays as an example in the history of our attempts to model the effects of information technologies. William Gibson famously defined cyberspace as the place where a telephone call takes place. (Gibson’s coinage of the term “cyberspace” is a good example of an “inception”.) Avital Ronell wrote about Nietzsche’s telephone call to the beyond and interprets the history of philosophy according to a “telephonic logic”. When I was a child my father once threw our telephone into the Atlantic Ocean – that was what he made of the magic of that technology, at least in one moment of anger. This was back in the day when Bell owned your phone and there was some explaining to do. This magic of compression has other effects – my dad got phone calls all day at work, so when he was at home he wanted to turn it off. The only way he knew to turn it off was to rip it out of the wall – there was no modular plug, just a wire into the wall - and throw it into the ocean.

So there is more than compression going on here: Deleuze and Guattari, along with the computer scientist Pierre Levy after them, call it “deterritorialization”. The differences between “here” and “there” are being constantly renegotiated as our technologies of interaction develop. Globalization is the collective effect of these deterritorializations and reterritorializations at any given moment.

And the wormhole example is instructive: the forces that enable such a collapse of space and time as the possibility of time travel would likely tear us to smithereens. The tensions and torsions of this deterritorialization are part of what is at play in the Wikileaks revolutions; this compression of time and space offers promise for distributed governance as well as turbulence. Time travel through wormholes, by the way, is another example of an inception – Carl Sagan was looking for a reasonable way to transport his fictional aliens in Contact, called Caltech physicist Kip Thorne for help, and Thorne came up with the idea.

Q: The film Vanilla Sky explored the notion of a scientifically induced lucid dream in which we can live forever and our world is built out of our memories and ”sculpted moment to moment and lived with the romantic abandon of a summer day or the feeling of a great movie or a pop song you always loved”. Can we sculpt ‘real’ reality as if it were a “lucid dream”?

Richard Doyle: Some traditions model reality as a lucid dream. The Diamond Sutra tells us that to be enlightened we must view reality as “a phantom, a dew drop, a bubble.” This does not mean, of course, that reality does not exist, only that appearance has no more persistence than a dream and that what we call “reality” is our map of reality. When we wake up, the dream that had been so compelling is seen to be what it was: a dream, nothing more or less. Dreams do not lack reality – they are real patterns of information. They just aren’t what we usually think they are. Ditto for “ordinary” reality. Lucid dreaming has been practiced by multiple traditions for a long time – we can no doubt learn new ways of doing so. In the meantime, by recognizing and acting according to the practice of looking beyond appearances, we can find perhaps a smidgeon more creative freedom to manifest our intentions in reality.

Q: Paola Antonelli, design curator of MoMA, has written about Existenz Maximum, the ability of portable music devices like the iPod to create “customized realities”, imposing a soundtrack on the movie of our own life. This sounds empowering and godlike – can you expand on this notion? How is technology helping us design every aspect of both our external reality as well as our internal, psychological reality?

Richard Doyle: Well, the Upanishads and the Book of Luke both suggest that we “get our inner Creator on”, the former by suggesting that “Tat Tvam Asi” – there is an aspect of you that is connected to Everything – and the latter by recommending that we look not here or there for the Kingdom of God, but “within.” So if this sounds “godlike”, it is part of a long and persistent tradition. I personally find the phrase “customized realities” redundant given the role of our always unique programs and metaprograms. So what we need to focus on is: to which aspect of ourselves do we wish to give this creative power? These customized realities could be empowering and godlike for corporations that own the material, or they could empower our planetary aspect that unites all of us, and everything in between. It is, as always, the challenge of the magus and the artist to decide how we want to customize reality once we know that we can.

Q: The Imaginary Foundation says that "to understand is to perceive patterns"… Some advocates of psychedelic therapy have said that certain chemicals heighten our perception of patterns… they help us “see more”. What exactly are they helping us understand?

Richard Doyle: Understanding! One of the interesting bits of knowledge that I found in my research was some evidence that psychonauts scored better on the Witkin Embedded Figure test, a putative measure of a human subject’s ability to “distinguish a simple geometrical figure embedded in a complex colored figure.” When we perceive the part within the whole, we can suddenly get context, understanding.

Q: An article pointing to the use of psychedelics as catalysts for breakthrough innovation in Silicon Valley says that users…

"employ these cognitive catalysts, de-condition their thinking periodically and come up with the really big connectivity ideas arrived at wholly outside the linear steps of argument. These are the gestalt-perceiving, asterism-forming “aha’s!” that connect the dots and light up the sky with a new archetypal pattern."

This seems to echo what other intellectuals have been saying for ages.  You referred to Cannabis as “an assassin of referentiality, inducing a butterfly effect in thought. Cannabis induces a parataxis wherein sentences resonate together and summon coherence in the bardos between one statement and another.”

Baudelaire also wrote about cannabis as inducing an artificial paradise of thought:  

“…It sometimes happens that people completely unsuited for word-play will improvise an endless string of puns and wholly improbable idea relationships fit to outdo the ablest masters of this preposterous craft. […and eventually]… Every philosophical problem is resolved. Every contradiction is reconciled. Man has surpassed the gods.”

Anthropologist Henry Munn wrote that:

"Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth… At times… the words leap to mind, one after another, of themselves without having to be searched for: a phenomenon similar to the automatic dictation of the surrealists except that here the flow of consciousness, rather than being disconnected, tends to be coherent: a rational enunciation of meanings.  The spontaneity they liberate is not only perceptual, but linguistic, the spontaneity of speech, of fervent, lucid discourse, astonishing. […] For the inspired one, it is as if existence were uttering itself through him […]

Can you expand a bit on how certain ecodelics (as well as marijuana) can help us de-condition our thinking, have creative breakthroughs as well as intellectual catharsis? How is it that “intoxication” could, under certain conditions, actually improve our cognition and creativity and contribute to the collective intelligence of the species?

Richard Doyle: I would point, again, to Pahnke's description of ego death. This is by definition an experience in which our maps of the world are humbled. In the breakdown of our ordinary worldview - such as when a (now formerly) secular being such as myself finds himself feeling unmistakably sacred - we get a glimpse of reality without our usual filters. It is just not possible to use the old maps, so we get a glimpse of reality, even if an involuntary one. This is very close to the Buddhist practice of exhausting linguistic reference through chanting or koans - suddenly we see the world through something besides our verbal mind.

Ramana Maharshi says that in the silence of the ego we perceive reality - reality IS the breakdown of the ego. Aldous Huxley, who was an extraordinarily adroit and eloquent writer with knowledge of increasingly rare breadth and depth, pointed to a quote by William Blake when trying to sum up his experience: the doors of perception were cleansed. This is a humble act, if you think about it: Huxley, faced with the beauty and grandeur of his mescaline experience, offers the equivalent of “What he said!” Huxley also said that psychedelics offered a respite from “the throttling embrace of the self”, suggesting that we see the world without the usual filters of our egoic self. (…)

And if you look carefully at the studies by pioneers such as Myron Stolaroff and Willis Harman that you reference, as I do in the book, you will see that great care was taken to compose the best contexts for their studies. Subjects, for example, were told not to think about personal problems but to focus on their work at hand, and, astonishingly enough, it seems to have worked. These are very sensitive technologies and we really need much more research to explore their best use. This means more than studying their chemical function - it means studying the complex experiences human beings have with them. Step one has to be accepting that ecodelics are and always have been an integral part of human culture for some subset of the population. (…)

Q: Kevin Kelly refers to technological evolution as following the momentum begun at the Big Bang - he has stated:

"…there is a continuum, a connection back all the way to the Big Bang with these self-organizing systems that make the galaxies, stars, and life, and now is producing technology in the same way. The energies flowing through these things are, interestingly, becoming more and more dense. If you take the amount of energy that flows through one gram per second in a galaxy, it is increased when it goes through a star, and it is actually increased in life…We don’t realize this. We think of the sun as being a hugely immense amount of energy. Yet the amount of energy running through a sunflower per gram per second of the livelihood, is actually greater than in the sun. Actually, it’s so dense that when it’s multiplied out, the sunflower actually has a higher amount of energy flowing through it. "..

Animals have even higher energy usage than the plant, and a jet engine has even higher than an animal. The most energy-dense thing that we know about in the entire universe is the computer chip in your computer. It is sending more energy per gram per second through that than anything we know. In fact, if it was to send it through any faster, it would melt or explode. It is so energy-dense that it is actually at the edge of explosion.” (…)

Can you comment on the implications of what he’s saying here?

Richard Doyle: I think maps of “continuity” are crucial and urgently needed. We can model the world as either “discrete” - made up of parts - or “continuous” - composing a whole - to powerful effect. Both are in this sense true. This is not “relativism” but a corollary of that creative freedom to choose our models that seems to be an attribute of consciousness. The mechanistic worldview extracts, separates and reconnects raw materials, labor and energy in ways that produce astonishing order as well as disorder (entropy).

By mapping the world as discrete – such as the difference between one second and another – and uniform – to a clock, there is no difference between one second and another – we have transformed the planet. Consciousness informed by discrete maps of reality has been an actual geological force in a tiny sliver of time. In so doing, we have transformed the biosphere. So you can see just how actual this relation between consciousness, its maps, and earthly reality is. This is why Vernadsky, a geophysicist, thought we needed a new term for the way consciousness functions as a geological force: noosphere.

These discrete maps of reality are so powerful that we forget that they are maps. Now if the world can be cut up into parts, it is only because it forms a unity. A Sufi author commented that the unity of the world was both the most obvious and obscure fact. It is obvious because our own lives and the world we inhabit can be seen to continue without any experienced interruption – neither the world nor our lives truly stops and starts. This unity can be obscure because in a literal sense we can’t perceive it with our senses – this unity can only be “perceived” by our minds. We are so effective as separate beings that we forget the whole for the part.

The world is more than a collection of parts, and we can quote Carl Sagan: “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.” Equally beautiful is what Sagan follows up with: “The cosmos is also within us. We are made of star stuff.” Perhaps this is why models such as Kelly’s feel so powerful: reminding ourselves that there is a continuity between the Big Bang and ourselves means we are an aspect of something unfathomably grand, beautiful, complex and unbroken. This is perhaps the “grandeur” Darwin was discussing. And when we experience that grandeur it can help us think and act in ways appropriate to a geological force.

I am not sure about the claims for energy that Kelly is making – I would have to see the context and the source of his data – but I do know that when it comes to thermodynamics, what he is saying rings true. We are dissipative structures far from equilibrium, meaning that we fulfill the laws of thermodynamics. Even though biological systems such as ourselves are incredibly orderly – and we export that order through our maps onto and into the world – we also yield more entropy than our absence. Living systems, according to an emerging paradigm of Stanley Salthe, Rod Swenson, the aforementioned Margulis and Sagan, Eric Schneider, James J. Kay and others, maximize entropy, and the universe is seeking to dissipate ever greater amounts of entropy.

Order is a way to dissipate yet more energy. We’re thermodynamic beings, so we are always on the prowl for new ways to dissipate energy as heat and create uncertainty (entropy), and consciousness helps us find ever new ways to do so. (In case you are wondering, consciousness is the organized effort to model reality that yields ever increasing spirals of uncertainty in Deep Time. But you knew that.) It is perhaps in this sense that, again following Carl Sagan, “We are a way for the cosmos to know itself.” That is a pretty great map of continuity.

What I don’t understand in Kelly’s work, and I need to look at with more attention, is the discontinuity he posits between biology and technology. In my view our maps have made us think of technology as different in kind from biology, but the global mycelial web of fungi suggests otherwise, and our current view of technology seems to intensify this sense of separation even as we get interconnected through technology. I prefer Noosphere to what Kelly calls the Technium because it reminds us of the ways we are biologically interconnected with our technosocial realities. Noosphere sprouts from biosphere.

Q: There is this notion of increasing complexity… Yet in a universe where entropy destroys almost everything, here we are, the cutting edge of evolution, taking the reins and accelerating this emergent complexity… Kurzweil says that this makes us “very important”:

“…It turns out that we are central, after all. Our ability to create models—virtual realities—in our brains, combined with our modest-looking thumbs, has been sufficient to usher in another form of evolution: technology. That development enabled the persistence of the accelerating pace that started with biological evolution. It will continue until the entire universe is at our fingertips.”

What do you think?

Richard Doyle: Well, I think from my remarks already you can see that I agree with Kurzweil here and can only suggest that it is for this very reason that we must be very creative, careful and cunning with our models. Do we model the technologies that we are developing according to the effects they will have on the planetary whole? Only rarely, though this is what we are trying to do at the Penn State Center for Nanofutures, as are lots of people involved in Science, Technology and Society as well as engineering education. When we develop technologies - and that is the way psychedelics arrived in modern culture, as technologies -  we must model their effects not only on the individuals who use them, but on the whole of our ecosystem and planetary society.

If our technological models are based on the premise that this is a dead planet – and most of them very much are; one is called all kinds of names if one suggests otherwise – animist, vitalist, Gaian intelligence agent, names I wear with glee – then we will end up with an asymptotically dead planet. Consciousness will, of course, like the Terminator, “Be Back” should we perish, but let us hope that it learns to experiment better with its maps and learns to notice reality just a little bit more. I am actually an optimist on this front and think that a widespread “aha” moment is occurring where there is a collective recognition of the feedback loops that make up our technological & biological evolution.

Again, I don’t know why Kurzweil seems to think that technological evolution is discontinuous with biological evolution – technology is nested within the network of “wetwares” that make it work, and our wetwares are increasingly interconnected with our technological infrastructure, as the meltdowns in Japan demonstrate along with the dependence of many of us – we who are more bacterial than human by dry weight - upon a network of pharmaceuticals and electricity for continued life. The E. coli outbreak in Europe is another case in point – our biological reality is linked with the technological reality of supply chain management. Technological evolution is biological evolution enabled by the maps of reality forged by consciousness. (…)

Whereas technology for many promised the “disenchantment” of the world – the rationalization of this world of the contemplative spirit as everything became a Machine – here was mystical contemplative experience manifesting itself directly within what sociologist Max Weber called the “iron cage of modernity”, Gaia bubbling up through technological “Babylon.”

Now many contemplatives have sought to share their experiences through writing – pages and pages of it. As we interconnect through information technology, we perhaps have the opportunity to repeat this enchanted contemplative experience of radical interconnection on another scale, and through other means. Just say Yes to the Noosphere!”

Richard Doyle, Professor of English, Affiliate Faculty in Information Science and Technology at Pennsylvania State University, in conversation with Jason Silva, Creativity, evolution of mind and the “vertigo of freedom”, Big Think, June 21, 2011. (Illustrations: 1) Randy Mora, Artífices del sonido, 2) Noosphere)

See also:

☞ RoseRose, Google and the Myceliation of Consciousness
Kevin Kelly on Why the Impossible Happens More Often
Luciano Floridi on the future development of the information society
☞ Luciano Floridi on The Digital Revolution as a Fourth Revolution: “P2P doesn’t mean Pirate to Pirate but Platonist to Platonist”
The Rise of the Conversation Society: information, communication and collaboration
Keen On… James Gleick: Why Cyberspace, As a Mode of Being, Will Never Go Away (TCTV), (video) TechCrunch, Jun 23, 2011
Timothy Leary on cybernetics and a new global culture
Mark Changizi on Humans, Version 3.0.
Cyberspace tag on Lapidarium

Mar
7th
Wed
permalink

Is The World An Idea?

Plato, Hulton Archive/Getty Images

Plato was one who made the divide between the world of ideas and the world of the senses explicit. In his famous Allegory of the Cave, he imagined a group of prisoners who had been chained in a cave all their lives; all they could see were shadows projected on a wall, which they conceived as their reality. Unbeknownst to them, a fire behind them illuminated objects and created the shadows they saw, which could be manipulated to deceive them. In contrast, the philosopher could see reality as it truly is, a manifestation of ideas freed from the deception of the senses. In other words, if we want to understand the true nature of reality, we shouldn’t rely on our senses; only ideas are truly pure, freed from the distortions caused by our limited perception of reality.

Plato thus elevated the human mind to a god-like status, given that it can find truth through reason, in particular through the rational construction of ideal “Forms,” which are the essence of all objects we see in reality. For example, all tables share the Form of “tableness,” even if every table is different. The Form is an ideal and, thus, a blueprint of perfection. If I ask you to imagine a circle, the image of a circle you hold in your head is the only perfect circle: any representation of that circle, on paper or on a blackboard, will be imperfect. To Plato, intelligence was the ability to grasp the world of Forms and thus come closer to truth.

Due to its connection with the search for truth, it’s no surprise that Plato’s ideas influenced both scientists and theologians. If the world is made out of Forms, say geometrical forms, reality may be described mathematically, combining the essential forms and their relations to describe the change we see in the world. Thus, by focusing on the essential elements of reality as mathematical objects and their relations we could, perhaps, grasp the ultimate nature of reality and so come closer to timeless truths.

The notion that mathematics is a portal to final truths holds tremendous intellectual appeal and has influenced some of the greatest names in the history of science, from Copernicus, Kepler, Newton, and Einstein to many present-day physicists searching for a final theory of nature based upon a geometrical scaffolding, such as superstring theories. (…)

Taken in context, we can see where modern scientific ideas that relate the ultimate nature of reality to geometry come from. If it’s not God the Geometer anymore, Man the Geometer persists. That this vision offers a major drive to human creativity is undeniable.

We do imagine the universe in our minds, with our minds, and many scientific successes are a byproduct of this vision. Perhaps we should take Nicholas of Cusa’s advice to heart and remember that whatever we achieve with our minds will be an expression of our own creativity, having little or nothing to do with ultimate truths.”

Marcelo Gleiser, Brazilian Professor of Natural Philosophy, Physics and Astronomy at Dartmouth College, USA, Is The World An Idea?, NPR, March 7, 2012.

See also:

Cognition, perception, relativity tag on Lapidarium notes

Jan
14th
Sat
permalink

What are memories made of?


“There appears to be no single memory store, but instead a diverse taxonomy of memory systems, each with its own special circuitry evolved to package and retrieve that type of memory. Memories are not static entities; over time they shift and migrate between different territories of the brain.

At the top of the taxonomical tree, a split occurs between declarative and non-declarative memories. Declarative memories are those you can state as true or false, such as remembering whether you rode a bicycle to work. Non-declarative memories are those that cannot be described as true or false, such as knowing how to ride a bicycle. A central hub in the declarative memory system is a brain region called the hippocampus. This undulating, twisted structure gets its name from its resemblance to a sea horse. Destruction of the hippocampus, through injury, neurosurgery or the ravages of Alzheimer’s disease, can result in an amnesia so severe that no events experienced after the damage can be remembered. (…)

A popular view is that during sleep your hippocampus “broadcasts” its recently captured memories to the neocortex, which updates your long-term store of past experience and knowledge. Eventually the neocortex is sufficient to support recall without relying on the hippocampus. However, there is evidence that if you need to vividly picture a scene in your mind, this appears to require the hippocampus, no matter how old the memory. We have recently discovered that the hippocampus is not only needed to reimagine the past, but also to imagine the future.

Pattern completion

Studying patients has taught us where memories might be stored, but not what physically constitutes a memory. The answer lies in the multitude of tiny modifiable connections between neuronal cells, the information-processing units of the brain. These cells, with their wispy tree-like protrusions, hang like stars in miniature galaxies and pulse with electrical charge. Thus, your memories are patterns inscribed in the connections between the millions of neurons in your brain. Each memory has its unique pattern of activity, logged in the vast cellular network every time a memory is formed.

It is thought that during recall of past events the original activity pattern in the hippocampus is re-established via a process that is known as “pattern completion”. During this process, the initial activity of the cells is incoherent, but via repeated reactivation the activity pattern is pieced together until the original pattern is complete. Memory retention is helped by the presence of two important molecules in our brain: dopamine and acetylcholine. Both help the neurons improve their ability to lay down memories in their connections. Sometimes, however, the system fails, leaving us unable to bring elements of the past to mind.
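[Editor’s note: the “pattern completion” idea can be made concrete with a toy autoassociative network of the Hopfield type, a standard textbook abstraction rather than the hippocampal circuitry described above. The Python sketch below, with invented sizes and noise levels, stores one binary pattern via Hebbian learning and then recovers it from a degraded cue.]

```python
# Toy illustration of pattern completion with a Hopfield-style network.
# A standard textbook model, offered only as an analogy for the process above.
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    # Hebbian learning: strengthen connections between co-active units.
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:              # each pattern has entries of +1 / -1
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # no self-connections
    return w / len(patterns)

def recall(w, cue, steps=10):
    # Repeatedly update the state until it settles on a stored pattern.
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1
    return state

n_units = 100
memory = rng.choice([-1, 1], size=(1, n_units))   # one stored "memory"
w = store(memory)

cue = memory[0].copy()
flip = rng.random(n_units) < 0.3                  # corrupt 30% of the cue
cue[flip] *= -1

completed = recall(w, cue)
print("fraction of cue matching the memory:      ", np.mean(cue == memory[0]))
print("fraction of completed state matching it:  ", np.mean(completed == memory[0]))
```

With a single stored pattern even one update step recovers it exactly; the point is only that a distributed pattern of connection strengths can regenerate a whole memory from a fragment.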

Of all the things we need to remember, one of the most essential is where we are. Becoming lost is debilitating and potentially terrifying. Within the hippocampus, and neighbouring brain structures, neurons exist that allow us to map space and find our way through it. “Place cells” provide an internal map of space; “head-direction cells” signal the direction we are facing, similar to an internal compass; and “grid cells” chart out space in a manner akin to latitude and longitude.

For licensed London taxi drivers, it appears that navigating the labyrinth of London’s streets on a daily basis causes the density of grey matter in their posterior hippocampus to increase. Thus, the physical structure of your brain is malleable, depending on what you learn.

With impressive technical advances such as optogenetics, in which light beams excite or silence targeted groups of neurons, scientists are beginning to control memories at an unprecedented level.”

Hugo Spiers, neuroscientist and lecturer at the Institute of Behavioural Neuroscience, University College London, What are memories made of?, The Guardian, Jan 14, 2012. (Illustration: Polly Becker)

How and why memories change

"Since the time of the ancient Greeks, people have imagined memories to be a stable form of information that persists reliably. The metaphors for this persistence have changed over time—Plato compared our recollections to impressions in a wax tablet, and the idea of a biological hard drive is popular today—but the basic model has not. Once a memory is formed, we assume that it will stay the same. This, in fact, is why we trust our recollections. They feel like indelible portraits of the past.

None of this is true. In the past decade, scientists have come to realize that our memories are not inert packets of data and they don’t remain constant. Even though every memory feels like an honest representation, that sense of authenticity is the biggest lie of all. (…)

New research is showing that every time we recall an event, the structure of that memory in the brain is altered in light of the present moment, warped by our current feelings and knowledge. (…)

This new model of memory isn’t just a theory—neuroscientists actually have a molecular explanation of how and why memories change. In fact, their definition of memory has broadened to encompass not only the cliché cinematic scenes from childhood but also the persisting mental loops of illnesses like PTSD and addiction—and even pain disorders like neuropathy. Unlike most brain research, the field of memory has actually developed simpler explanations. Whenever the brain wants to retain something, it relies on just a handful of chemicals. Even more startling, an equally small family of compounds could turn out to be a universal eraser of history, a pill that we could take whenever we wanted to forget anything. (…)

How memory is formed

Every memory begins as a changed set of connections among cells in the brain. If you happen to remember this moment—the content of this sentence—it’s because a network of neurons has been altered, woven more tightly together within a vast electrical fabric. This linkage is literal: For a memory to exist, these scattered cells must become more sensitive to the activity of the others, so that if one cell fires, the rest of the circuit lights up as well.

Scientists refer to this process as long-term potentiation, and it involves an intricate cascade of gene activations and protein synthesis that makes it easier for these neurons to pass along their electrical excitement. Sometimes this requires the addition of new receptors at the dendritic end of a neuron, or an increase in the release of the chemical neurotransmitters that nerve cells use to communicate. Neurons will actually sprout new ion channels along their length, allowing them to generate more voltage. Collectively this creation of long-term potentiation is called the consolidation phase, when the circuit of cells representing a memory is first linked together. Regardless of the molecular details, it’s clear that even minor memories require major work. The past has to be wired into your hardware. (…)

What happens after a memory is formed, when we attempt to access it?

The secret was the timing: If new proteins couldn’t be created during the act of remembering, then the original memory ceased to exist. The erasure was also exceedingly specific. (…) They forgot only what they’d been forced to remember while under the influence of the protein inhibitor.

The disappearance of the fear memory suggested that every time we think about the past we are delicately transforming its cellular representation in the brain, changing its underlying neural circuitry. It was a stunning discovery: Memories are not formed and then pristinely maintained, as neuroscientists thought; they are formed and then rebuilt every time they’re accessed. “The brain isn’t interested in having a perfect set of memories about the past,” LeDoux says. “Instead, memory comes with a natural updating mechanism, which is how we make sure that the information taking up valuable space inside our head is still useful. That might make our memories less accurate, but it probably also makes them more relevant to the future.” (…)

[Donald] Lewis had discovered what came to be called memory reconsolidation, the brain’s practice of re-creating memories over and over again. (…)

The science of reconsolidation suggests that the memory is less stable and trustworthy than it appears. Whenever I remember the party, I re-create the memory and alter its map of neural connections. Some details are reinforced—my current hunger makes me focus on the ice cream—while others get erased, like the face of a friend whose name I can no longer conjure. The memory is less like a movie, a permanent emulsion of chemicals on celluloid, and more like a play—subtly different each time it’s performed. In my brain, a network of cells is constantly being reconsolidated, rewritten, remade. That two-letter prefix changes everything. (…)
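[Editor’s note: the reconsolidation claim, that a memory is rewritten each time it is retrieved, can be caricatured numerically. The Python sketch below is an invented toy, not a model from the research described: the blend weight is arbitrary, and each recall simply mixes the stored trace with the current context before re-storing it, so the trace drifts away from the original event with every retrieval.]

```python
# Toy caricature of reconsolidation: each act of recall re-stores the memory
# as a blend of the old trace and the current context. The 0.1 blend weight
# is arbitrary; this is an illustration, not a model fitted to any experiment.
import numpy as np

rng = np.random.default_rng(1)

original_event = rng.normal(size=50)   # the event as first encoded
trace = original_event.copy()          # the trace the brain keeps re-storing

def similarity(a, b):
    # Cosine similarity: 1.0 means the trace still matches the original event.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for recall_number in range(1, 11):
    current_context = rng.normal(size=50)        # mood, knowledge, setting right now
    trace = 0.9 * trace + 0.1 * current_context  # retrieval re-stores a blended trace
    print(f"recall {recall_number:2d}: similarity to original event = "
          f"{similarity(trace, original_event):.2f}")
```

Run it and the similarity falls steadily: nothing dramatic happens at any single recall, yet the stored trace ends up measurably different from the event it began as.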

Once you start questioning the reality of memory, things fall apart pretty quickly. So many of our assumptions about the human mind—what it is, why it breaks, and how it can be healed—are rooted in a mistaken belief about how experience is stored in the brain. (According to a recent survey, 63 percent of Americans believe that human memory “works like a video camera, accurately recording the events we see and hear so that we can review and inspect them later.”) We want the past to persist, because the past gives us permanence. It tells us who we are and where we belong. But what if your most cherished recollections are also the most ephemeral thing in your head? (…)

Reconsolidation provides a mechanistic explanation for these errors. It’s why eyewitness testimony shouldn’t be trusted (even though it’s central to our justice system), why every memoir should be classified as fiction, and why it’s so disturbingly easy to implant false recollections. (The psychologist Elizabeth Loftus has repeatedly demonstrated that nearly a third of subjects can be tricked into claiming a made-up memory as their own. It takes only a single exposure to a new fiction for it to be reconsolidated as fact.) (…)

When we experience a traumatic event, it gets remembered in two separate ways. The first memory is the event itself, that cinematic scene we can replay at will. The second memory, however, consists entirely of the emotion, the negative feelings triggered by what happened. Every memory is actually kept in many different parts of the brain. Memories of negative emotions, for instance, are stored in the amygdala, an almond-shaped area in the center of the brain. (Patients who have suffered damage to the amygdala are incapable of remembering fear.) By contrast, all the relevant details that comprise the scene are kept in various sensory areas—visual elements in the visual cortex, auditory elements in the auditory cortex, and so on. That filing system means that different aspects can be influenced independently by reconsolidation.

The larger lesson is that because our memories are formed by the act of remembering them, controlling the conditions under which they are recalled can actually change their content. (…)

The chemistry of the brain is in constant flux, with the typical neural protein lasting anywhere from two weeks to a few months before it breaks down or gets reabsorbed. How then do some of our memories seem to last forever? It’s as if they are sturdier than the mind itself. Scientists have narrowed down the list of molecules that seem essential to the creation of long-term memory—sea slugs and mice without these compounds are total amnesiacs—but until recently nobody knew how they worked. (…)

A form of protein kinase C called PKMzeta hangs around synapses, the junctions where neurons connect, for an unusually long time. (…) What does PKMzeta do? The molecule’s crucial trick is that it increases the density of a particular type of sensor called an AMPA receptor on the outside of a neuron. It’s an ion channel, a gateway to the interior of a cell that, when opened, makes it easier for adjacent cells to excite one another. (While neurons are normally shy strangers, struggling to interact, PKMzeta turns them into intimate friends, happy to exchange all sorts of incidental information.) This process requires constant upkeep—every long-term memory is always on the verge of vanishing. As a result, even a brief interruption of PKMzeta activity can dismantle the function of a steadfast circuit. (…)

Because of the compartmentalization of memory in the brain—the storage of different aspects of a memory in different areas—the careful application of PKMzeta synthesis inhibitors and other chemicals that interfere with reconsolidation should allow scientists to selectively delete aspects of a memory. (…)

The astonishing power of PKMzeta forces us to redefine human memory. While we typically think of memories as those facts and events from the past that stick in the brain, Sacktor’s research suggests that memory is actually much bigger and stranger than that. (…)

Being able to control memory doesn’t simply give us admin access to our brains. It gives us the power to shape nearly every aspect of our lives. There’s something terrifying about this. Long ago, humans accepted the uncontrollable nature of memory; we can’t choose what to remember or forget. But now it appears that we’ll soon gain the ability to alter our sense of the past. (…)

The fact is we already tweak our memories—we just do it badly. Reconsolidation constantly alters our recollections, as we rehearse nostalgias and suppress pain. We repeat stories until they’re stale, rewrite history in favor of the winners, and tamp down our sorrows with whiskey. “Once people realize how memory actually works, a lot of these beliefs that memory shouldn’t be changed will seem a little ridiculous,” Nader says. “Anything can change memory. This technology isn’t new. It’s just a better version of an existing biological process.” (…)

Jonah Lehrer, American author and journalist, The Forgetting Pill Erases Painful Memories Forever, Wired Magazine, Feb 17, 2012. (Third illustration: Dwight Eschliman)

"You could double the number of synaptic connections in a very simple neurocircuit as a result of experience and learning. The reason for that was that long-term memory alters the expression of genes in nerve cells, which is the cause of the growth of new synaptic connections. When you see that at the cellular level, you realize that the brain can change because of experience. It gives you a different feeling about how nature and nurture interact. They are not separate processes.”

Eric R. Kandel, American neuropsychiatrist, Nobel Prize laureate, A Quest to Understand How Memory Works, NYT, March 5, 2012

Prof. Eric Kandel: We Are What We Remember - Memory and Biology

Eric R. Kandel, American neuropsychiatrist, Nobel Prize laureate, We Are What We Remember: Memory and Biology, FORA.tv, Proshansky Auditorium, New York, NY, Mar 28, 2011

See also:

☞ Eric R. Kandel, The Biology of Memory: A Forty-Year Perspective (pdf), Department of Neuroscience, Columbia University, New York, 2009
☞ Eric R. Kandel, A Biological Basis for the Unconscious?, Eric Kandel: “I want to know where the id, the ego, and the super-ego are located in the brain” | Big Think video Apr 1, 2012.
Memory tag on Lapidarium notes

Jan
13th
Fri
permalink

Risk perception: What You Don’t Know Can Kill You

"Humans have a perplexing 
tendency to fear rare threats such as shark attacks while blithely 
ignoring far greater risks like 
unsafe sex and an unhealthy diet. Those illusions are not just 
silly—they make the world a more dangerous place. (…)

We like to think that humans are supremely logical, making decisions on the basis of hard data and not on whim. For a good part of the 19th and 20th centuries, economists and social scientists assumed this was true too. The public, they believed, would make rational decisions if only it had the right pie chart or statistical table. But in the late 1960s and early 1970s, that vision of homo economicus—a person who acts in his or her best interest when given accurate information—was kneecapped by researchers investigating the emerging field of risk perception. What they found, and what they have continued teasing out since the early 1970s, is that humans have a hell of a time accurately gauging risk. Not only do we have two different systems—logic and instinct, or the head and the gut—that sometimes give us conflicting advice, but we are also at the mercy of deep-seated emotional associations and mental shortcuts. (…)

Our hardwired gut reactions developed in a world full of hungry beasts and warring clans, where they served important functions. Letting the amygdala (part of the brain’s emotional core) take over at the first sign of danger, milliseconds before the neocortex (the thinking part of the brain) was aware a spear was headed for our chest, was probably a very useful adaptation. Even today those nano-pauses and gut responses save us from getting flattened by buses or dropping a brick on our toes. But in a world where risks are presented in parts-per-billion statistics or as clicks on a Geiger counter, our amygdala is out of its depth.

A risk-perception apparatus permanently tuned for avoiding mountain lions makes it unlikely that we will ever run screaming from a plate of fatty mac ’n’ cheese. “People are likely to react with little fear to certain types of objectively dangerous risk that evolution has not prepared them for, such as guns, hamburgers, automobiles, smoking, and unsafe sex, even when they recognize the threat at a cognitive level,” says Carnegie Mellon University researcher George Loewenstein, whose seminal 2001 paper, “Risk as Feelings,” debunked theories that decision making in the face of risk or uncertainty relies largely on reason. “Types of stimuli that people are evolutionarily prepared to fear, such as caged spiders, snakes, or heights, evoke a visceral response even when, at a cognitive level, they are recognized to be harmless,” he says. Even Charles Darwin failed to break the amygdala’s iron grip on risk perception. As an experiment, he placed his face up against the puff adder enclosure at the London Zoo and tried to keep himself from flinching when the snake struck the plate glass. He failed.

The result is that we focus on the one-in-a-million bogeyman while virtually ignoring the true risks that inhabit our world. News coverage of a shark attack can clear beaches all over the country, even though sharks kill a grand total of about one American annually, on average. That is less than the death count from cattle, which gore or stomp 20 Americans per year. Drowning, on the other hand, takes 3,400 lives a year, without a single frenzied call for mandatory life vests to stop the carnage. A whole industry has boomed around conquering the fear of flying, but while we down beta-blockers in coach, praying not to be one of the 48 average annual airline casualties, we typically give little thought to driving to the grocery store, even though there are more than 30,000 automobile fatalities each year.
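[Editor’s note: the mismatch between fear and frequency is easy to see just by lining up the annual figures quoted above. The short Python sketch below uses the rough U.S. annual death counts as cited in the article (not independently verified here) and prints each as a multiple of the shark figure.]

```python
# Rough U.S. annual death counts as cited in the article above
# (approximate averages taken from the text, not re-verified).
annual_deaths = {
    "shark attacks": 1,
    "cattle (goring/trampling)": 20,
    "airline crashes": 48,
    "drowning": 3_400,
    "automobile accidents": 30_000,
}

baseline = annual_deaths["shark attacks"]
for cause, deaths in sorted(annual_deaths.items(), key=lambda kv: kv[1]):
    print(f"{cause:28s} {deaths:>7,d} per year  "
          f"(~{deaths / baseline:,.0f}x the shark figure)")
```

On these figures, driving comes out roughly six hundred times deadlier per year than flying, which is exactly the comparison the paragraph above is making.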

In short, our risk perception is often at direct odds with reality. All those people bidding up the cost of iodide? They would have been better off spending $10 on a radon testing kit. The colorless, odorless, radioactive gas, which forms as a by-product of natural uranium decay in rocks, builds up in homes, causing lung cancer. According to the Environmental Protection Agency, radon exposure kills 21,000 Americans annually.

David Ropeik, a consultant in risk communication and the author of How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts, has dubbed this disconnect the perception gap. “Even perfect information perfectly provided that addresses people’s concerns will not convince everyone that vaccines don’t cause autism, or that global warming is real, or that fluoride in the drinking water is not a Commie plot,” he says. “Risk communication can’t totally close the perception gap, the difference between our fears and the facts.”

In the early 1970s, psychologists Daniel Kahneman, now at Princeton University, and Amos Tversky, who passed away in 1996, began investigating the way people make decisions, identifying a number of biases and mental shortcuts, or heuristics, on which the brain relies to make choices. Later, Paul Slovic and his colleagues Baruch Fischhoff, now a professor of social sciences at Carnegie Mellon University, and psychologist Sarah Lichtenstein began investigating how these leaps of logic come into play when people face risk. They developed a tool, called the psychometric paradigm, that describes all the little tricks our brain uses when staring down a bear or deciding to finish the 18th hole in a lightning storm.

Many of our personal biases are unsurprising. For instance, the optimism bias gives us a rosier view of the future than current facts might suggest. We assume we will be richer 10 years from now, so it is fine to blow our savings on a boat—we’ll pay it off then. Confirmation bias leads us to prefer information that backs up our current opinions and feelings and to discount information contradictory to those opinions. We also have tendencies to conform our opinions to those of the groups we identify with, to fear man-made risks more than we fear natural ones, and to believe that events causing dread—the technical term for risks that could result in particularly painful or gruesome deaths, like plane crashes and radiation burns—are inherently more risky than other events.

But it is heuristics—the subtle mental strategies that often give rise to such biases—that do much of the heavy lifting in risk perception. The “availability” heuristic says that the easier a scenario is to conjure, the more common it must be. It is easy to imagine a tornado ripping through a house; that is a scene we see every spring on the news, and all the time on reality TV and in movies. Now try imagining someone dying of heart disease. You probably cannot conjure many breaking-news images for that one, and the drawn-out process of atherosclerosis will most likely never be the subject of a summer thriller. The effect? Twisters feel like an immediate threat, although we have only a 1-in-46,000 chance of being killed by a cataclysmic storm. Even a terrible tornado season like the one last spring typically yields fewer than 500 tornado fatalities. Heart disease, on the other hand, which eventually kills 1 in every 6 people in this country, and 800,000 annually, hardly even rates with our gut. (…)

Of all the mental rules of thumb and biases banging around in our brain, the most influential in assessing risk is the “affect” heuristic. Slovic calls affect a “faint whisper of emotion” that creeps into our decisions. Simply put, positive feelings associated with a choice tend to make us think it has more benefits. Negative correlations make us think an action is riskier. One study by Slovic showed that when people decide to start smoking despite years of exposure to antismoking campaigns, they hardly ever think about the risks. Instead, it’s all about the short-term “hedonic” pleasure. The good outweighs the bad, which they never fully expect to experience.

Our fixation on illusory threats at the expense of real ones influences more than just our personal lifestyle choices. Public policy and mass action are also at stake. The Office of National Drug Control Policy reports that prescription drug overdoses have killed more people than crack and heroin combined did in the 1970s and 1980s. Law enforcement and the media were obsessed with crack, yet it was only recently that prescription drug abuse merited even an after-school special.

Despite the many obviously irrational ways we behave, social scientists have only just begun to systematically document and understand this central aspect of our nature. In the 1960s and 1970s, many still clung to the homo economicus model. They argued that releasing detailed information about nuclear power and pesticides would convince the public that these industries were safe. But the information drop was an epic backfire and helped spawn opposition groups that exist to this day. Part of the resistance stemmed from a reasonable mistrust of industry spin. Horrific incidents like those at Love Canal and Three Mile Island did not help. Yet one of the biggest obstacles was that industry tried to frame risk purely in terms of data, without addressing the fear that is an instinctual reaction to their technologies.

The strategy persists even today. In the aftermath of Japan’s nuclear crisis, many nuclear-energy boosters were quick to cite a study commissioned by the Boston-based nonprofit Clean Air Task Force. The study showed that pollution from coal plants is responsible for 13,000 premature deaths and 20,000 heart attacks in the United States each year, while nuclear power has never been implicated in a single death in this country. True as that may be, numbers alone cannot explain away the cold dread caused by the specter of radiation. Just think of all those alarming images of workers clad in radiation suits waving Geiger counters over the anxious citizens of Japan. Seaweed, anyone? (…)

All that media created a sort of feedback loop. Because people were seeing so many sharks on television and reading about them, the “availability” heuristic was screaming at them that sharks were an imminent threat.

“Certainly anytime we have a situation like that where there’s such overwhelming media attention, it’s going to leave a memory in the population,” says George Burgess, curator of the International Shark Attack File at the Florida Museum of Natural History, who fielded 30 to 40 media calls a day that summer. “Perception problems have always been there with sharks, and there’s a continued media interest in vilifying them. It makes a situation where the risk perceptions of the populace have to be continually worked on to break down stereotypes. Anytime there’s a big shark event, you take a couple steps backward, which requires scientists and conservationists to get the real word out.”

Then again, getting out the real word comes with its own risks—like the risk of getting the real word wrong. Misinformation is especially toxic to risk perception because it can reinforce generalized confirmation biases and erode public trust in scientific data. As scientists studying the societal impact of the Chernobyl meltdown have learned, doubt is difficult to undo. In 2006, 20 years after reactor number 4 at the Chernobyl nuclear power plant was encased in cement, the World Health Organization (WHO) and the International Atomic Energy Agency released a report compiled by a panel of 100 scientists on the long-term health effects of the level 7 nuclear disaster and future risks for those exposed. Among the 600,000 recovery workers and local residents who received a significant dose of radiation, the WHO estimates that up to 4,000 of them, or 0.7 percent, will develop a fatal cancer related to Chernobyl. For the 5 million people living in less contaminated areas of Ukraine, Russia, and Belarus, radiation from the meltdown is expected to increase cancer rates less than 1 percent. (…)

During the year following the September 11 attacks, millions of Americans opted out of air travel and slipped behind the wheel instead. While they crisscrossed the country, listening to breathless news coverage of anthrax attacks, extremists, and Homeland Security, they faced a much more concrete risk. All those extra cars on the road increased traffic fatalities by nearly 1,600. Airlines, on the other hand, recorded no fatalities.

It is unlikely that our intellect can ever paper over our gut reactions to risk. But a fuller understanding of the science is beginning to percolate into society. Earlier this year, David Ropeik and others hosted a conference on risk in Washington, D.C., bringing together scientists, policy makers, and others to discuss how risk perception and communication impact society. “Risk perception is not emotion and reason, or facts and feelings. It’s both, inescapably, down at the very wiring of our brain,” says Ropeik. “We can’t undo this. What I heard at that meeting was people beginning to accept this and to realize that society needs to think more holistically about what risk means.”

Ropeik says policy makers need to stop issuing reams of statistics and start making policies that manipulate our risk perception system instead of trying to reason with it. Cass Sunstein, a Harvard law professor who is now the administrator of the White House Office of Information and Regulatory Affairs, suggests a few ways to do this in his book Nudge: Improving Decisions About Health, Wealth, and Happiness, published in 2008. He points to the organ donor crisis in which thousands of people die each year because others are too fearful or uncertain to donate organs. People tend to believe that doctors won’t work as hard to save them, or that they won’t be able to have an open-casket funeral (both false). And the gory mental images of organs being harvested from a body give a definite negative affect to the exchange. As a result, too few people focus on the lives that could be saved. Sunstein suggests—controversially—“mandated choice,” in which people must check “yes” or “no” to organ donation on their driver’s license application. Those with strong feelings can decline. Some lawmakers propose going one step further and presuming that people want to donate their organs unless they opt out.

In the end, Sunstein argues, by normalizing organ donation as a routine medical practice instead of a rare, important, and gruesome event, the policy would short-circuit our fear reactions and nudge us toward a positive societal goal. It is this type of policy that Ropeik is trying to get the administration to think about, and that is the next step in risk perception and risk communication. “Our risk perception is flawed enough to create harm,” he says, “but it’s something society can do something about.””

Jason Daley, What You Don’t Know Can Kill You, Discover Magazine, Oct 3, 2011. (Illustration: Steve Carroll, The Economist)

See also:

Daniel Kahneman on thinking ‘Fast And Slow’: How We Aren’t Made For Making Decisions
Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
Daniel Kahneman: How cognitive illusions blind us to reason, The Observer, 30 October 2011 
Daniel Kahneman on the riddle of experience vs. memory
The irrational mind - David Brooks on the role of emotions in politics, policy, and life